Similar Articles
20 similar articles found.
1.
This paper proposes a new error function at hidden layers to speed up the training of multilayer perceptrons (MLPs). With this new hidden error function, the layer-by-layer (LBL) algorithm approximately converges to the error backpropagation algorithm with optimum learning rates. In particular, the optimum learning rate for a hidden weight vector appears approximately as the product of two optimum factors, one for minimizing the new hidden error function and the other for assigning hidden targets. The effectiveness of the proposed error function was demonstrated on handwritten digit recognition and isolated-word recognition tasks. Very fast learning convergence was obtained for MLPs without the stalling problem experienced with conventional LBL algorithms.

2.
To address the hidden-layer structure and training-speed problems of process neural networks, a hybrid-optimized, structure-adaptive extreme learning process neural network is proposed on the basis of the extreme learning machine. First, the model structure adapts itself by successively adding process-neuron nodes to the hidden layer until the output-error requirement is met. Then, to eliminate redundant nodes, Gram-Schmidt orthogonalization is applied to the outputs of newly added candidate nodes for correlation screening. Finally, a quantum-inspired cuckoo search algorithm is constructed to optimize the orthogonal-basis expansion coefficients of the input weight functions of the new nodes. Simulation experiments on the Mackey-Glass series and on shale-oil TOC prediction verify the effectiveness of the proposed method through comparative analysis; the results show that the approximation efficiency and training speed of the resulting model are markedly improved.

3.
We consider the generalization error of concept learning when using a fixed Boolean function of the outputs of a number of different classifiers. Here, we take into account the ‘margins’ of each of the constituent classifiers. A special case is that in which the constituent classifiers are linear threshold functions (or perceptrons) and the fixed Boolean function is the majority function. This corresponds to a ‘committee of perceptrons,’ an artificial neural network (or circuit) consisting of a single layer of perceptrons (or linear threshold units) in which the output of the network is defined to be the majority output of the perceptrons. Recent work of Auer et al. studied the computational properties of such networks (where they were called ‘parallel perceptrons’), proposed an incremental learning algorithm for them, and demonstrated empirically that the learning rule is effective. As a corollary of the results presented here, generalization error bounds are derived for this special case that provide further motivation for the use of this learning rule.

4.
The error backpropagation (EBP) training of a multilayer perceptron (MLP) may require a very large number of training epochs. Although the training time can usually be reduced considerably by adopting an on-line training paradigm, it can still be excessive when large networks have to be trained on large amounts of data. In this paper, a new on-line training algorithm is presented. It is called equalized EBP (EEBP), and it offers improved accuracy, speed, and robustness against badly scaled inputs. A major characteristic of EEBP is its use of weight-specific learning rates whose relative magnitudes are derived from a priori computable properties of the network and the training data.

5.
A BP algorithm with a dynamically adjusted learning rate
王玲芝  王忠民 《计算机应用》2009,29(7):1894-1896
In the basic back-propagation (BP) algorithm, the learning rate is usually held fixed, which limits the convergence speed and stability of the network. This paper therefore proposes an algorithm that adjusts the learning rate of a BP network dynamically: taking the mean absolute error between the actual and desired outputs of the output-layer nodes, together with its rate of change, as independent variables, a functional relationship between the learning rate and these two variables is established, and the learning rate is then adjusted dynamically according to the actual progress of training. Simulation results show that the improved BP algorithm converges faster while keeping the network stable. Moreover, the algorithm only requires a few parameters to be chosen appropriately and is not subject to restrictive conditions, so it is broadly applicable.
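The adjustment rule described in this abstract — making the learning rate a function of the output layer's mean absolute error and of its rate of change — could be sketched as follows. The concrete functional form and the constants `base_lr`, `k_err`, `k_delta` are illustrative assumptions, not taken from the paper:

```python
def adaptive_lr(mae, prev_mae, base_lr=0.1, k_err=0.5, k_delta=0.5):
    """Scale a base learning rate by the current mean absolute error and
    its rate of change: a larger error allows a larger step, while a
    rising error (positive delta) damps the step. The functional form
    here is a hypothetical choice for illustration."""
    delta = mae - prev_mae  # rate of change of the error
    return base_lr * (1.0 + k_err * mae) / (1.0 + k_delta * max(delta, 0.0))
```

With these defaults, a shrinking error yields a rate that grows with the error magnitude, and a rising error shrinks the step, matching the qualitative behaviour the abstract describes.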

6.
In this paper a new learning algorithm is proposed for the problem of simultaneously learning a function and its derivatives, as an extension of the error-minimized extreme learning machine for single-hidden-layer feedforward neural networks. The formulation leads to a system of linear equations whose solution is obtained via the Moore-Penrose generalized pseudo-inverse. In this approach the number of hidden nodes is determined automatically by repeatedly adding new hidden nodes to the network, either one by one or group by group, and updating the output weights incrementally in an efficient manner until the network output error falls below the given expected learning accuracy. To verify the efficiency of the proposed method, a number of examples are considered and the results are compared with those of two other popular methods. The proposed method is observed to be fast and to produce similar or better generalization performance on the test data.
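The incremental growth idea above — random hidden nodes added group by group, with the output weights re-solved via the Moore-Penrose pseudo-inverse until the error target is met — can be sketched minimally as follows. The tanh activation, Gaussian random weights, group size, and the 1-D regression target are assumptions for illustration; the derivative-learning part of the paper is not reproduced here:

```python
import numpy as np

def incremental_elm(X, y, tol=1e-3, max_nodes=50, step=5, seed=0):
    """Grow an ELM hidden layer group by group; output weights are
    re-solved with the pseudo-inverse after each growth step."""
    rng = np.random.default_rng(seed)
    W = np.empty((X.shape[1], 0))  # input-to-hidden weights
    b = np.empty(0)                # hidden biases
    while W.shape[1] < max_nodes:
        W = np.hstack([W, rng.standard_normal((X.shape[1], step))])
        b = np.concatenate([b, rng.standard_normal(step)])
        H = np.tanh(X @ W + b)         # hidden-layer output matrix
        beta = np.linalg.pinv(H) @ y   # Moore-Penrose least-squares solution
        err = np.sqrt(np.mean((H @ beta - y) ** 2))
        if err < tol:                  # stop once the RMSE target is met
            break
    return W, b, beta
```

For a smooth 1-D target such as `sin(3x)`, the network keeps adding node groups until the training RMSE drops below `tol` or the node budget is exhausted.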

7.
A critical issue for neural-network-based large-scale data mining algorithms is how to speed up learning. The problem is particularly challenging for the Error Back-Propagation (EBP) algorithm in Multi-Layer Perceptron (MLP) neural networks because of their significant applications in many scientific and engineering problems. In this paper, we propose an Adaptive Variable Learning Rate EBP (AVLR-EBP) algorithm that attacks the problem of reducing the convergence time of EBP, aiming at high-speed convergence in comparison with the standard EBP algorithm. The idea is inspired by adaptive filtering, which led us to two semi-similar methods of calculating the learning rate. Mathematical analysis of the AVLR-EBP algorithm confirms its convergence. The algorithm is applied to data classification; simulation results on many well-known data sets demonstrate a considerable reduction in convergence time compared with standard EBP. In classifying the IRIS, Wine, Breast Cancer, Semeion and SPECT Heart datasets, the proposed algorithm reduces the number of learning epochs relative to standard EBP.

8.
冯文江  张立鹏  王菊 《计算机应用》2009,29(6):1497-1516
Based on the correlation properties of the pseudo-random (PN) sequences used in direct-sequence spread-spectrum communication systems, a carrier phase-error detection method suitable for direct-sequence spread-spectrum QPSK (DS-QPSK) systems is proposed. The good correlation properties of the PN code can be exploited to simplify one branch of the DS-QPSK signal, so that phase detection can be completed with the arctangent phase discriminator used for binary phase-shift keying (BPSK) signals. Computer simulation and a digital implementation were carried out; the results show that the method lowers the required receive signal-to-noise ratio and improves receiver sensitivity.

9.
10.
张艳萍  纪磊 《计算机应用》2013,33(3):625-627
To further increase the convergence speed of the exponential variable step-size constant modulus algorithm (CMA), the autocorrelation of the error signal is analyzed, and the autocorrelation function of the error signal at multiple delays is used to control the step size, yielding an exponential variable step-size constant modulus algorithm based on the multi-delay autocorrelation of the error signal. Compared with the zero-delay and unit-delay cases, the multi-delay autocorrelation function provides simpler and more accurate information about the training trajectory, so the algorithm converges faster and its convergence process is smoother and more stable. Simulation experiments on an underwater acoustic channel further demonstrate the superiority of the algorithm in convergence speed.
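The step-size control described above could be sketched as follows. The exponential form `mu = mu_max * (1 - exp(-alpha * |r_D|))` and the constants `alpha`, `mu_max`, and the delay `D` are assumptions in the spirit of exponential variable step-size CMA, not the paper's exact rule:

```python
import numpy as np

def vs_cma_step(e_hist, D=5, alpha=10.0, mu_max=0.01):
    """Exponential variable step size driven by the autocorrelation of
    the CMA error signal at delay D: a strongly correlated error (far
    from convergence) yields a step near mu_max, while an uncorrelated
    error (near convergence) yields a small step."""
    if len(e_hist) <= D:
        return mu_max  # not enough history yet; use the largest step
    e = np.asarray(e_hist, dtype=float)
    r_D = np.mean(e[D:] * e[:-D])  # delay-D error autocorrelation estimate
    return mu_max * (1.0 - np.exp(-alpha * abs(r_D)))
```

A persistent (correlated) error history produces a step close to `mu_max`, while a small alternating error history produces a much smaller step, which is the behaviour that smooths the late convergence phase.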

11.
International Journal of Computer Mathematics, 2012, 89(7): 1127-1146
This paper investigates a learning control scheme using iterative error compensation for uncertain systems, intended to enhance the precision of a high-speed, computer-controlled machining process. It is especially useful for mass-produced parts machined on a high-speed machine-tool system. The method uses an iterative learning technique that adopts the machine commands and cutting errors experienced in previous manoeuvres as references for compensation actions in the current manoeuvre. Non-repetitive disturbances and the nonlinear dynamics of the cutting processes and servo systems of the machine, which greatly affect the convergence of learning control systems, were studied in this research. State-feedback and output-feedback methods were used for controller design. Stability and performance of learning control systems designed via the proposed method were verified by simulations on a single-degree-of-freedom servo positioning system.

12.
AdaBoost is a typical ensemble-learning framework that builds a strong learner as a linear combination of several weak classifiers; its classification accuracy is far higher than that of any single weak classifier, and it enjoys good generalization and training error. However, AdaBoost cannot prune the weak classifiers in its output model, so the model lacks interpretability. This paper introduces a genetic algorithm into the AdaBoost model and proposes an ensemble evolutionary classification algorithm that limits the size of the output model (Ensemble evolve classification algorithm for controlling the size of final model, ECSM). Genetic operators and an evaluation function enforce population diversity within the AdaBoost iteration framework and retain the better classifiers. Experimental results show that, compared with the classical AdaBoost algorithm, the proposed algorithm greatly reduces the number of classifiers while essentially preserving classification accuracy.

13.
This paper introduces a learning problem related to the task of converting printed documents to ASCII text files. The goal of the learning procedure is to produce a function that maps documents to restoration techniques in such a way that on average the restored documents have minimum optical character recognition error. We derive a general form for the optimal function and use it to motivate the development of a nonparametric method based on nearest neighbors. We also develop a direct method of solution based on empirical error minimization for which we prove a finite sample bound on estimation error that is independent of distribution. We show that this empirical error minimization problem is an extension of the empirical optimization problem for traditional M-class classification with general loss function and prove computational hardness for this problem. We then derive a simple iterative algorithm called generalized multiclass ratchet (GMR) and prove that it produces an optimal function asymptotically (with probability 1). To obtain the GMR algorithm we introduce a new data map that extends Kesler's construction for the multiclass problem and then apply an algorithm called Ratchet to this mapped data, where Ratchet is a modification of the Pocket algorithm. Finally, we apply these methods to a collection of documents and report on the experimental results.

14.
This paper presents a new minimum classification error (MCE)–mean square error (MSE) hybrid cost function to enhance the classification ability and speed up the learning process of radial basis function (RBF)-based classifiers. Contributed by the MCE function, the proposed cost function enables the RBF-based classifier to achieve an excellent classification performance compared with the conventional MSE function. In addition, certain learning difficulties experienced by the MCE algorithm can be solved in an efficient and simple way. The presented results show that the proposed method exhibits a substantially higher convergence rate compared with the MCE function.
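One way an MCE-MSE hybrid cost could be combined for a single sample is sketched below. The particular misclassification measure, the sigmoid smoothing constant `xi`, and the mixing weight `lam` are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def hybrid_cost(scores, target, lam=0.5, xi=1.0):
    """Hybrid MCE-MSE cost for one sample: a convex combination of a
    smoothed misclassification measure (MCE part) and the squared
    distance to the one-hot target (MSE part)."""
    # MSE part: mean squared distance to the one-hot target vector
    onehot = np.zeros_like(scores)
    onehot[target] = 1.0
    mse = np.mean((scores - onehot) ** 2)
    # MCE part: sigmoid of a simple misclassification measure
    # (best competing score minus the correct-class score)
    others = np.delete(scores, target)
    d = -scores[target] + np.max(others)
    mce = 1.0 / (1.0 + np.exp(-xi * d))
    return lam * mce + (1.0 - lam) * mse
```

A confidently correct output vector then scores a lower cost than a confidently wrong one, and `lam` trades off the discriminative MCE term against the smoother MSE term.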

15.
Xie X, Seung HS. Neural Computation, 2003, 15(2): 441-454
Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.

16.
To address the problems that received-signal-strength-indication (RSSI) based node localization in wireless sensor networks (WSNs) is easily affected by the environment and computationally heavy, an error self-correcting localization algorithm based on the box plot is proposed. The algorithm uses the box-plot method to handle abnormal RSSI values during ranging, and a self-correcting least-squares method to eliminate ranging error and localize the nodes. Simulation and experimental results show that the algorithm effectively suppresses abnormal RSSI values and markedly improves the accuracy and stability of node localization, without the need to build a complex signal-propagation model or construct an RSSI fingerprint map.
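The box-plot outlier rule this abstract relies on is the standard one and is easy to sketch. The 1.5×IQR fences below are the conventional choice, assumed here since the paper may tune them:

```python
import numpy as np

def boxplot_filter(rssi):
    """Discard RSSI readings outside the box-plot fences
    [Q1 - 1.5*IQR, Q3 + 1.5*IQR] and return the survivors."""
    r = np.asarray(rssi, dtype=float)
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return r[(r >= lo) & (r <= hi)]
```

A single anomalous spike among otherwise consistent readings (e.g. a multipath burst of -20 dBm among values near -60 dBm) falls outside the fences and is dropped before the least-squares ranging step.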

17.
The neuro-fuzzy approach is known to provide an adaptive method for generating or tuning fuzzy rules for fuzzy systems. In this paper, a modified gradient-based neuro-fuzzy learning algorithm is proposed for zero-order Takagi-Sugeno inference systems. This modified algorithm, compared with the conventional gradient-based neuro-fuzzy learning algorithm, reduces the cost of calculating the gradient of the error function and improves learning efficiency. Some weak and strong convergence results for the algorithm are proved, indicating that the gradient of the error function goes to zero and the fuzzy parameter sequence goes to a fixed value, respectively. A constant learning rate is used, and conditions on this constant learning rate that guarantee convergence are specified. Numerical examples are provided to support the theoretical findings.

18.
Xia Y, Kamel MS. Neural Computation, 2007, 19(6): 1589-1632
Identification of a general nonlinear noisy system viewed as an estimation of a predictor function is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused by using an optimal fusion technique, and then the optimal fused data are incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm can converge globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatial temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

19.
A comprehensive back-propagation algorithm for multilayer feedforward neural networks is proposed. The algorithm uses a generalized objective function that jointly accounts for absolute and relative errors, and adopts a back-propagation technique that searches in the network's output space.

20.
A simulation study of a fault-detection method based on wavelet neural networks
This paper proposes a fault-detection method based on a wavelet-neural-network observer. It combines signal analysis with a model-based approach: wavelet de-noising of the signals and the self-learning capability of the neural network are used to capture the nonlinear dynamics between the system's inputs and outputs, so that residuals can be computed in real time and evaluated by logical decision rules, improving both the speed and the accuracy of fault detection. A simulation of structural-damage faults in a synchronous AC motor shows that the method is feasible.
