Similar Articles
20 similar articles found
1.
Big data changes at high velocity: both its content and its distribution characteristics are in constant flux. Current feedforward neural network models are static learners that do not support incremental updates, so they struggle to learn the features of dynamically changing big data in real time. To address this, a big-data feature learning model supporting incremental updates is proposed. An objective function is designed for fast incremental updating of the parameters, and a squared-error function is minimized so that the network retains its original knowledge during the update. For data whose features change frequently, the network structure is updated by adding hidden-layer neurons, so that the updated network can learn the features of dynamically changing big data in real time. After the parameters and structure have been updated, the network structure is further optimized by SVD of the weight matrices: redundant connections are deleted, strengthening the model's generalization ability. Experimental results show that, while preserving as much of the model's original knowledge as possible, the proposed model learns the features of dynamic big data in real time by continually updating the network's parameters and structure.
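The SVD-based pruning step described above can be illustrated with a minimal NumPy sketch. The function name `prune_weights_svd` and the energy threshold are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def prune_weights_svd(W, energy=0.99):
    """Illustrative low-rank pruning of a weight matrix via SVD.

    Keeps the smallest rank r whose singular values capture `energy`
    of the total spectral energy, discarding the redundant directions
    (a stand-in for deleting redundant network connections).
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :], r

rng = np.random.default_rng(0)
# A nearly rank-2 weight matrix plus tiny noise: most of its
# spectrum is redundant and should be pruned away.
W = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 8))
W += 1e-6 * rng.normal(size=(8, 8))
W_pruned, rank = prune_weights_svd(W, energy=0.99)
```

After pruning, the low-rank factors can replace the dense weight matrix, which is one common way such a truncation removes redundant connections in practice.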

2.
Incremental learning of neural networks has attracted much interest in recent years due to its wide applicability to large scale data sets and to distributed learning scenarios. Moreover, nonstationary learning paradigms have also emerged as a subarea of study in the Machine Learning literature due to the problems of classical methods when dealing with data set shifts. In this paper we present an algorithm to train single layer neural networks with nonlinear output functions that takes into account incremental, nonstationary and distributed learning scenarios. Moreover, it is demonstrated that introducing a regularization term into the proposed model is equivalent to choosing a particular initialization for the devised training algorithm, which may be suitable for real time systems that have to work under noisy conditions. In addition, the algorithm includes some previous models as special cases and can be used as a block component to build more complex models such as multilayer perceptrons, extending the capacity of these models to incremental, nonstationary and distributed learning paradigms. The proposed algorithm is tested with standard data sets and compared with previous approaches, demonstrating its higher accuracy.

3.
A double cross-covering incremental learning algorithm for constructive neural networks
陶品, 张钹, 叶榛. 软件学报 (Journal of Software), 2003, 14(2): 194-201
This paper studies the double cross-covering incremental learning algorithm (BiCovering algorithm, BiCA) for cover-based constructive neural networks (CBCNN). Following the basic idea of CBCNN, the algorithm constructs multiple positive and negative covering clusters so that, after the initial construction, the network can keep modifying and optimizing its parameters and structure, adding or deleting nodes to learn incrementally. Analysis shows that BiCA not only preserves the advantages and characteristics of CBCNN but also enables incremental learning and improves the network's generalization ability. Simulation results show that the algorithm learns quickly even when the network's initial classification ability is poor, and that it is insensitive to the order in which samples are presented.

4.
A scalable, incremental learning algorithm for classification problems
In this paper a novel data mining algorithm, Clustering and Classification Algorithm-Supervised (CCA-S), is introduced. CCA-S enables the scalable, incremental learning of a non-hierarchical cluster structure from training data. This cluster structure serves as a function to map the attribute values of new data to the target class of these data, that is, to classify new data. CCA-S utilizes both the distance and the target class of training data points to derive the cluster structure. We first present the problems that many existing data mining algorithms for classification, such as decision trees and artificial neural networks, face in scalable and incremental learning. We then describe CCA-S and discuss its advantages in scalable, incremental learning. The results of applying CCA-S to several common classification data sets are presented. They show that the classification performance of CCA-S is comparable to other data mining algorithms such as decision trees, artificial neural networks and discriminant analysis.

5.
With the rapid growth and spread of social networks, protecting the sensitive information they contain has become a central problem in data privacy research, and many social network anonymization techniques have appeared in recent years. Most existing techniques model a social network as a simple graph. In practice, however, many social networks evolve incrementally (e.g., email communication networks), which a simple graph cannot capture well, so it is more realistic to model a social network as an incremental sequence. Moreover, most real networks carry weight information, i.e., they appear as weighted graphs; compared with simple graphs, weighted graphs carry more information about the network and therefore leak more privacy. This paper models an incrementally evolving social network as an incremental sequence of weighted graphs. To anonymize such a sequence, a k-anonymity privacy-protection model for weighted-graph incremental sequences is proposed, together with WLKA, a baseline anonymization algorithm based on weight lists, and HVKA, an anonymization algorithm based on hypergraphs, to defend against attacks that exploit node labels and weight lists. Extensive tests on real data sets show that WLKA effectively protects the privacy of weighted-graph incremental sequences, while HVKA builds on WLKA to better preserve the structural properties of the original graph, improve the utility of the weight information, and reduce the time cost of anonymization.

6.
Recurrent neural networks and robust time series prediction
We propose a robust learning algorithm and apply it to recurrent neural networks. This algorithm is based on filtering outliers from the data and then estimating parameters from the filtered data. The filtering removes outliers from both the target function and the inputs of the neural network. The filtering is soft in that some outliers are neither completely rejected nor accepted. To show the need for robust recurrent networks, we compare the predictive ability of least squares estimated recurrent networks on synthetic data and on the Puget Power Electric Demand time series. These investigations result in a class of recurrent neural networks, NARMA(p,q), which show advantages over feedforward neural networks for time series with a moving average component. Conventional least squares methods of fitting NARMA(p,q) neural network models are shown to suffer a lack of robustness towards outliers. This sensitivity to outliers is demonstrated on both the synthetic and real data sets. Filtering the Puget Power Electric Demand time series is shown to automatically remove the outliers due to holidays. Neural networks trained on filtered data are then shown to give better predictions than neural networks trained on unfiltered time series.
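The "soft" filtering idea above, where outliers are neither fully rejected nor fully accepted, can be sketched with Huber-style weights. This is a generic robust-statistics stand-in, not the paper's actual filter; the function names and constants are illustrative:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Soft filtering: weight 1 inside the band |r| <= k*sigma,
    smoothly decreasing outside, so outliers are downweighted
    rather than discarded. sigma is a robust (MAD-based) scale."""
    sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
    z = np.abs(r) / (k * sigma)
    return np.minimum(1.0, 1.0 / np.maximum(z, 1e-12))

def robust_mean(y, iters=10):
    """Iteratively reweighted estimate of location as a toy
    example of fitting parameters to softly filtered data."""
    mu = np.median(y)
    for _ in range(iters):
        w = huber_weights(y - mu)
        mu = np.sum(w * y) / np.sum(w)
    return mu

# 50 clean samples near 10 plus two gross outliers.
y = np.r_[np.linspace(9.9, 10.1, 50), [100.0, 120.0]]
```

Here `robust_mean(y)` stays near 10 while the ordinary mean is pulled toward the outliers; the same reweighting principle extends to regression residuals when training a network on filtered data.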

7.
Research on the incremental IHMCAP algorithm and its applications
The incremental IHMCAP algorithm employs the FTART neural network, which is well suited to hybrid learning, and successfully resolves the trade-off between the accuracy of symbolic learning and that of neural network learning. The algorithm also has strong incremental learning ability: when new examples are added to the system, the existing decision tree and neural network need not be regenerated; a single pass of incremental learning adjusts the original structure to improve accuracy, making the algorithm efficient and fast.

8.
Artificial neural networks (ANNs) are widely used in data classification problems. The backpropagation algorithm is the classical technique for training ANNs, but it has many disadvantages, so neural networks have also been trained with binary- and real-coded genetic algorithms, which can likewise be applied to classification problems. The real-coded genetic algorithm has been compared with other training methods in only a few works, yet comparing approaches is as important as proposing a new one. For this reason, this study carries out a large-scale comparison of the performance of neural network training methods on data classification datasets. The experimental comparison covers several real classification data sets taken from the literature as well as a simulation study. A comparative analysis of the real and simulated data shows that the real-coded genetic algorithm may offer an efficient alternative to traditional training methods for classification problems.

9.
The multiscale classifier
Proposes a rule-based inductive learning algorithm called multiscale classification (MSC). It can be applied to any N-dimensional real or binary classification problem to classify the training data by successively splitting the feature space in half. The algorithm has several significant differences from existing rule-based approaches: learning is incremental, the tree is non-binary, and backtracking of decisions is possible to some extent. The paper first provides background on current machine learning techniques and outlines some of their strengths and weaknesses. It then describes the MSC algorithm and compares it to other inductive learning algorithms with particular reference to ID3, C4.5, and back-propagation neural networks. Its performance on a number of standard benchmark problems is then discussed and related to standard learning issues such as generalization, representational power, and over-specialization.

10.
Applied Soft Computing, 2007, 7(3): 957-967
In this study, CPBUM neural networks with an annealing robust learning algorithm (ARLA) are proposed to address the difficulty conventional neural networks have in modeling data with outliers and noise. Training data obtained in real applications may contain outliers and noise; although CPBUM neural networks converge quickly, they handle such data poorly, so their robustness must be enhanced. In addition, ARLA overcomes the initialization and cut-off-point problems of traditional robust learning algorithms and can cope with models containing outliers and noise. In this study, ARLA is used as the learning algorithm to adjust the weights of the CPBUM neural networks. It turns out that CPBUM neural networks with ARLA converge quickly and are more robust against outliers and noise than conventional neural networks with robust mechanisms. Simulation results demonstrate the validity and applicability of the proposed networks.

11.
This paper proposes IHMCAP, a noise-tolerant incremental hybrid learning algorithm that combines probability-based symbolic learning with neural network learning. By introducing the FTART neural network, it not only brings these two different levels of reasoning closer together but also successfully resolves the trade-off between the accuracy of symbolic learning and that of neural network learning. Its distinctive incremental learning mechanism lets it learn new examples in a single incremental pass and gives it good noise tolerance, so it can be applied to real-time online learning tasks.

12.
A neural network method for designing FIR digital filters with complex coefficients
李季檩, 吕宝粮. 计算机仿真 (Computer Simulation), 2008, 25(2): 175-177, 189
Neural networks are an effective method for designing FIR filters with real coefficients. To extend the method to the complex domain and establish a unified neural-network-based filter design framework, this paper proposes a new algorithm that uses a multilayer neural network to design complex-coefficient FIR digital filters with arbitrary magnitude responses. The key idea is to convert the design problem into the training of a real-coefficient multilayer neural network: the real and imaginary parts of the squared-error function of the frequency response are minimized separately in the real domain, and the error converges to the global minimum. Experimental results show that filters designed with this algorithm have small magnitude-response and group-delay errors. The algorithm can handle arbitrary magnitude-response and group-delay requirements and is an effective design method.
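The core idea, splitting the complex response error into real and imaginary parts and minimizing each in the real domain, can be sketched with a plain least-squares solve. This is a stand-in for the paper's multilayer-network training, and all names are illustrative:

```python
import numpy as np

def design_complex_fir(num_taps, omega, desired):
    """Least-squares design of a complex-coefficient FIR filter.

    The response is E(w) = sum_n h[n] e^{-j w n}. Minimizing |E - D|^2
    over a frequency grid splits into independent real and imaginary
    squared-error terms, solved here as one stacked real-valued
    least-squares problem in the coefficients (Re h, Im h).
    """
    n = np.arange(num_taps)
    E = np.exp(-1j * np.outer(omega, n))          # frequency-response matrix
    A = np.vstack([np.hstack([E.real, -E.imag]),  # real part of E @ h
                   np.hstack([E.imag,  E.real])]) # imaginary part of E @ h
    b = np.concatenate([desired.real, desired.imag])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:num_taps] + 1j * x[num_taps:]

omega = np.linspace(-np.pi, np.pi, 256, endpoint=False)
# Desired response: a pure delay of 3 samples, D(w) = e^{-j 3 w},
# whose exact FIR solution is h[3] = 1 with all other taps zero.
h = design_complex_fir(8, omega, np.exp(-1j * 3 * omega))
```

Because the delay response is exactly attainable with 8 taps, the solver recovers it to machine precision; for an arbitrary desired response the same setup returns the least-squares-optimal complex taps.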

13.
代伟, 李德鹏, 杨春雨, 马小平. 自动化学报 (Acta Automatica Sinica), 2021, 47(10): 2427-2437
Stochastic configuration networks (SCNs) introduce a supervisory mechanism during incremental construction to assign the hidden-layer parameters, ensuring their universal approximation property; they are easy to implement, converge quickly, and generalize well. However, as data volumes keep growing, modeling with SCNs becomes challenging. To improve the overall performance of neural network algorithms in big-data modeling, this paper proposes a hybrid parallel stochastic configuration network (HPSCN) architecture, i.e., an incremental learning method with mixed model and data parallelism. The method consists of two SCN models built in different ways, used jointly to determine the best hidden node quickly and accurately: the left model uses point-incremental construction (PSCN) and the right uses block-incremental construction (BSCN). Each model also partitions the sample data dynamically, accelerating construction of the candidate node pool and reducing computation. The method is first validated by comparative experiments on large benchmark data sets and then applied to a real industrial case, demonstrating its effectiveness.

14.
A practical modeling method for fermentation processes
This paper proposes a practical new method that fuses online and offline parameters for modeling. It presents a modeling procedure based on an adaptive fuzzy neural network and on fuzzy logic inference, fuses the two models by optimal weighting, and validates the result with data from a real penicillin fermentation process. Simulation results show that the method achieves good modeling accuracy and practicality.

15.
Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed (1995) an incremental internode communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent to a communication link. In this paper, the effects of the limited precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not cause instability. The analysis is supported by simulation studies of two problems. The simulation results demonstrate that the limited precision errors are bounded and do not seriously affect the convergence of multilayer neural networks.
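The incremental communication scheme, sending only a limited-precision increment rather than the full output value, can be sketched as follows. Class and parameter names are illustrative; the point is that both ends accumulate the same quantized increments, so the per-value error stays bounded by half a quantization step:

```python
import numpy as np

class IncrementalLink:
    """Toy model of a link that carries node outputs as
    limited-precision increments.

    Only the quantized change from the previously sent value crosses
    the link; sender and receiver both accumulate the increments,
    so they stay consistent and the representation error is bounded.
    """
    def __init__(self, step=1.0 / 256):   # e.g. roughly 8-bit increments
        self.step = step
        self.sent = 0.0                    # receiver's current view of the value

    def send(self, value):
        # Quantize the difference to the nearest multiple of `step`.
        inc = self.step * np.round((value - self.sent) / self.step)
        self.sent += inc
        return inc                         # what actually crosses the link

link = IncrementalLink()
# A slowly varying node output: successive increments are small,
# so they need far fewer bits than the full values.
outputs = 0.5 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, 100))
increments = [link.send(v) for v in outputs]
```

Accumulating the increments on the receiving side reconstructs the output sequence to within half a quantization step at every point, which mirrors the bounded-error property the abstract describes.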

16.
Central Force Optimization (CFO) is a novel and upcoming metaheuristic technique that is based upon physical kinematics. It has previously been demonstrated that CFO is effective when compared with other metaheuristic techniques when applied to multiple benchmark problems and some real world applications. This work applies the CFO algorithm to training neural networks for data classification. As a proof of concept, the CFO algorithm is first applied to train a basic neural network that represents the logical XOR function. This work is then extended to train two different neural networks in order to properly classify members of the Iris data set. These results are compared and contrasted to results gathered using Particle Swarm Optimization (PSO) in the same applications. Similarities and differences between CFO and PSO are also explored in the areas of algorithm design, computational complexity, and natural basis. The paper concludes that CFO is a novel and promising metaheuristic that is competitive with, if not superior to, the PSO algorithm, and that there is much room to improve it further.

17.
An algorithm of incremental approximation of functions in a normed linear space by feedforward neural networks is presented. The concept of variation of a function with respect to a set is used to estimate the approximation error, together with the weight decay method, for optimizing the size and weights of a network in each iteration step of the algorithm. Two alternatives, recursively incremental and generally incremental, are proposed. In the generally incremental case, the algorithm optimizes the parameters of all units in the hidden layer at each step. In the recursively incremental case, the algorithm optimizes the parameters corresponding to only one unit in the hidden layer at each step. In this case, an optimization problem with a smaller number of parameters is being solved at each step.
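The recursively incremental variant, optimizing the parameters of only one new hidden unit per step while earlier units stay fixed, can be sketched with a greedy random candidate search. The search strategy is an illustrative stand-in; the paper's method uses variation-based error estimates with weight decay, which are not reproduced here:

```python
import numpy as np

def fit_incrementally(x, y, max_units=20, candidates=200, seed=0):
    """Recursively incremental approximation sketch: at each step only
    the new hidden unit's parameters (w, b) are chosen -- here by random
    candidate search -- and its output weight beta is set by least
    squares on the current residual; previously added units are frozen."""
    rng = np.random.default_rng(seed)
    residual = y.copy()
    units, pred = [], np.zeros_like(y)
    for _ in range(max_units):
        cand = rng.uniform(-5, 5, size=(candidates, 2))       # (w, b) pairs
        H = np.tanh(cand[:, :1] * x[None, :] + cand[:, 1:])   # candidate unit outputs
        # Optimal output weight for each candidate on the residual.
        beta = (H @ residual) / np.maximum((H * H).sum(1), 1e-12)
        errs = ((residual[None, :] - beta[:, None] * H) ** 2).sum(1)
        k = int(np.argmin(errs))                              # best single unit
        units.append((cand[k, 0], cand[k, 1], beta[k]))
        pred += beta[k] * np.tanh(cand[k, 0] * x + cand[k, 1])
        residual = y - pred
    return pred, units

x = np.linspace(-1, 1, 100)
y = np.sin(2 * np.pi * x)
pred, units = fit_incrementally(x, y)
```

Because each step solves a small problem in only one unit's parameters, the per-iteration cost stays constant as the network grows, which is the practical appeal of the recursively incremental case.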

18.
A number of soft computing approaches such as neural networks, evolutionary algorithms, and fuzzy logic have been widely used for classifier agents to adaptively evolve solutions on classification problems. However, most work in the literature focuses on the learning ability of the individual classifier agent. This article explores incremental, collaborative learning in a multiagent environment. We use the genetic algorithm (GA) and incremental GA (IGA) as the main techniques to evolve the rule set for classification and apply new class acquisition as a typical example to illustrate the incremental, collaborative learning capability of classifier agents. Benchmark data sets are used to evaluate proposed approaches. The results show that GA and IGA can be used successfully for collaborative learning among classifier agents. © 2003 Wiley Periodicals, Inc.

19.
In existing research on neural network approximation, the target function is usually defined on a finite interval (or a compact set), whereas in practical problems it is often defined on the whole real axis (or an unbounded set). This paper studies interpolation neural network approximation of continuous functions defined on the whole real axis. First, a density theorem for neural network approximation, i.e., approximability, is proved constructively. Second, using the modulus of continuity as the measure, the rate at which the interpolation networks approximate the target function is estimated. Finally, numerical examples are used for simulation experiments. This work extends the scope of research on neural network approximation, gives a constructive algorithm for neural network approximation of continuous functions on the whole real axis, and reveals the relationship between the approximation rate and the network topology.

20.
This paper studies the influence maximization problem in dynamic online social networks and proposes a hop-based incremental algorithm for rapidly tracking the most influential set of users in a dynamic network. To cope with structural changes, the hop-based algorithm, on the one hand, estimates upper bounds on the influence of changed users to quickly identify and retain influential users that need not change; on the other hand, it incrementally computes the actual influence of promising users, replacing those no longer among the most influential. Experiments and analysis on real data sets show that, compared with other state-of-the-art algorithms of the same kind, the proposed algorithm maintains the most influential user set in a dynamic network faster.
