Similar Literature
20 similar documents found.
1.
The backpropagation algorithm is one of the most widely used tools for training artificial neural networks. However, it can be very slow in some practical applications, and many techniques have been proposed to speed it up and extend its range of application. Although the backpropagation algorithm has been used for decades, we present here a set of computational results suggesting that replacing the traditional sigmoid activation functions with bihyperbolic functions makes the backpropagation algorithm perform better. To the best of our knowledge, this finding has not previously been published in the open literature. The efficiency and discrimination capacity of the proposed methodology are demonstrated through a set of computational experiments on traditional benchmark problems from the literature.
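The comparison above hinges on the shape of the activation function. Below is a minimal sketch contrasting the logistic sigmoid with a bihyperbolic-style activation built from two hyperbola branches; this parameterization is an assumption based on one form found in the literature, not necessarily the paper's exact definition:

```python
import numpy as np

def sigmoid(x):
    # classical logistic activation, range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def bihyperbolic(x, lam=1.0, tau=0.5):
    # Difference of two hyperbola branches (assumed form). Like the
    # sigmoid, it maps to (0, 1) and equals 1/2 at the origin, but lam
    # and tau give independent control over slope and smoothness.
    a = 1.0 / (4.0 * lam)
    return (lam * np.sqrt((x + a) ** 2 + tau ** 2)
            - lam * np.sqrt((x - a) ** 2 + tau ** 2) + 0.5)

x = np.linspace(-6, 6, 201)
s, b = sigmoid(x), bihyperbolic(x)
```

The extra shape parameters are what give gradient descent more room to avoid the flat saturation regions blamed for slow sigmoid training.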

2.
This paper presents an application of artificial neural networks (ANNs) for the prediction of traction force, using datasets obtained experimentally from a soil bin with a single-wheel tester. To this end, tests were first carried out using two soil textures and two tire types, varying velocity, slippage, tire inflation pressure, and wheel load. On this basis, the potential of neural modeling was assessed with multilayer perceptron networks using various training algorithms; among these, the standard backpropagation algorithm was compared with backpropagation using a declining learning rate factor, as these two initially yielded superior performance. The results showed that the latter better achieved the aim of the study in terms of the performance criteria. It was further inferred that ANNs can provide a reliable and promising tool for predicting and modeling traction force.
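The "declining learning rate factor" variant can be pictured as ordinary gradient-descent training whose step size shrinks across epochs. A hedged toy sketch on a linear model; the schedule form and constants here are assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

w = np.zeros(3)
eta0, decay = 0.1, 0.01              # assumed schedule constants
losses = []
for epoch in range(200):
    eta = eta0 / (1.0 + decay * epoch)        # declining learning-rate factor
    grad = 2.0 * X.T @ (X @ w - y) / len(y)   # MSE gradient
    w -= eta * grad
    losses.append(float(np.mean((X @ w - y) ** 2)))
```

The large early steps move quickly toward the solution; the shrinking later steps damp oscillation around it.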

3.
In this paper radial basis function (RBF) networks are used to model general non-linear discrete-time systems. In particular, reciprocal multiquadric functions are used as activation functions for the RBF networks. A stepwise regression algorithm based on orthogonalization and a series of statistical tests is employed for designing and training the network. The identification method yields non-linear models that are stable and linear in the model parameters. The advantages of the proposed method over other radial basis function methods and backpropagation neural networks are described. Finally, the effectiveness of the identification method is demonstrated by the identification of two non-linear chemical processes: a simulated continuous stirred tank reactor and an experimental pH neutralization process.
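Because such a model is linear in its parameters, fitting the output weights is an ordinary least-squares problem once the reciprocal multiquadric units are placed. A small hedged sketch on a 1-D toy system; grid-placed centers stand in for the paper's stepwise orthogonal selection:

```python
import numpy as np

def recip_multiquadric(r, c=1.0):
    # reciprocal (inverse) multiquadric: phi(r) = 1 / sqrt(r^2 + c^2)
    return 1.0 / np.sqrt(r ** 2 + c ** 2)

# toy identification problem (assumed setup, not the paper's processes)
x = np.linspace(-3, 3, 60)
y = np.tanh(x)                          # "system" to identify
centers = np.linspace(-3, 3, 9)         # grid centers for illustration
Phi = recip_multiquadric(x[:, None] - centers[None, :])
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # model is linear in w
y_hat = Phi @ w
```

Linearity in the parameters is what makes the orthogonalization and statistical tests of the stepwise regression tractable.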

4.
The paper presents novel modifications to radial basis functions (RBFs) and a neural network based classifier for holistic recognition of the six universal facial expressions from static images. The new basis functions, called cloud basis functions (CBFs), use a different feature weighting, derived to emphasize features relevant to class discrimination. Further, these basis functions are designed to have multiple boundary segments, rather than the single boundary of RBFs. These enhancements, together with a suitable training algorithm, allow the neural network to better learn the specific properties of the problem domain. The proposed classifiers demonstrated superior performance compared to conventional RBF neural networks as well as several other types of holistic techniques used in conjunction with RBF neural networks. The CBF neural network based classifier yielded an accuracy of 96.1%, compared to 86.6%, the best accuracy obtained by the other conventional RBF neural network based classification schemes tested on the same database.

5.
贺昱曜  张慧档 《计算机工程》2008,34(23):208-209
To construct a radial basis function neural network model, an automatic search algorithm based on particle swarm optimization is proposed, grounded in phase-space reconstruction theory. Using the Logistic map and underwater acoustic signals as test objects, the algorithm is compared with similar algorithms. Experimental results show that it offers advantages in training accuracy and convergence speed, and can support the modeling, prediction, and dynamical analysis of underwater acoustic signals.
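The particle swarm search at the core of the method can be sketched generically. Here it minimizes a simple sum-of-squares surrogate rather than the paper's RBF/phase-space objective, and the coefficients are standard textbook values; both are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(p):                       # stand-in objective (sum of squares);
    return np.sum(p ** 2, axis=1)    # the paper optimizes RBF model error

n_particles, dim, iters = 30, 2, 100
w_in, c1, c2 = 0.7, 1.5, 1.5         # inertia and acceleration (assumed)
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()
```

Swapping `sphere` for an RBF training-error function turns this into the automatic parameter search the abstract describes.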

6.
A formal selection and pruning technique based on the concept of a local relative sensitivity index is proposed for feedforward neural networks. The mechanism of the backpropagation training algorithm is revisited and the theoretical foundation of the improved selection and pruning technique is presented. The technique is based on parallel pruning of weights that are relatively redundant within a subgroup of a feedforward neural network. Comparative studies with a similar technique from the literature show that the improved technique gives better pruning results in terms of reduced model residues, improved generalization capability, and reduced network complexity. Its effectiveness is demonstrated by developing neural network models of a number of nonlinear systems, including the three-bit parity problem, the Van der Pol equation, a chemical process, and two nonlinear discrete-time systems, using the backpropagation training algorithm with an adaptive learning rate.
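Pruning weights that are relatively insignificant within their subgroup can be sketched with a saliency proxy. The index below, |w * dE/dw| normalized per row, is an assumed stand-in for the paper's local relative sensitivity index:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 6))             # one layer's weight subgroups (rows)
grad = rng.normal(size=W.shape)         # gradient of error w.r.t. each weight

# assumed proxy: saliency of each weight relative to the total
# saliency of its subgroup, so pruning is "parallel" across subgroups
saliency = np.abs(W * grad)
rel_index = saliency / saliency.sum(axis=1, keepdims=True)

threshold = 0.05
mask = rel_index >= threshold           # keep only relatively significant weights
W_pruned = np.where(mask, W, 0.0)
```

Normalizing within each subgroup means every subgroup sheds its own least-useful weights, rather than one global cutoff dominating the whole network.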

7.
Speeding up backpropagation using multiobjective evolutionary algorithms
Abbass HA 《Neural computation》2003,15(11):2705-2726
The use of backpropagation for training artificial neural networks (ANNs) is usually associated with a long training process. The user needs to experiment with a number of network architectures; with larger networks, more computational cost in terms of training time is required. The objective of this letter is to present an optimization algorithm, comprising a multiobjective evolutionary algorithm and a gradient-based local search. In the rest of the letter, this is referred to as the memetic Pareto artificial neural network algorithm for training ANNs. The evolutionary approach is used to train the network and simultaneously optimize its architecture. The result is a set of networks, with each network in the set attempting to optimize both the training error and the architecture. We also present a self-adaptive version with lower computational cost. We show empirically that the proposed method is capable of reducing the training time compared to gradient-based techniques.

8.
An ART-based construction of RBF networks
Radial basis function (RBF) networks are widely used for modeling a function from given input-output patterns. However, two difficulties are involved with traditional RBF (TRBF) networks: the initial configuration of an RBF network needs to be determined by trial and error, and performance suffers when the desired output has abrupt changes or constant values in certain intervals. We propose a novel approach to overcome these difficulties. New kernel functions are used for hidden nodes, and the number of nodes is determined automatically by an adaptive resonance theory (ART)-like algorithm. Parameters and weights are initialized appropriately, and then tuned and adjusted by the gradient-descent method to improve the performance of the network. Experimental results show that the RBF networks constructed by our method have a smaller number of nodes, a faster learning speed, and a smaller approximation error than the networks produced by other methods.
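The ART-like allocation of hidden nodes can be sketched as a one-pass vigilance test: an input too far from every existing center spawns a new node, otherwise the nearest center is nudged toward the input. This is a simplified illustration, not the paper's exact matching rule:

```python
import numpy as np

def art_like_centers(X, vigilance=1.0):
    # Create a new hidden node whenever no existing center is within
    # the vigilance radius; otherwise move the nearest center halfway
    # toward the input (assumed update rate).
    centers = [X[0].astype(float)]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] > vigilance:
            centers.append(x.astype(float))
        else:
            centers[j] += 0.5 * (x - centers[j])
    return np.array(centers)

# two well-separated clusters should yield exactly two centers
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.2, 5.0], [0.0, 0.2]])
C = art_like_centers(X, vigilance=1.0)
```

The vigilance radius plays the role of the trial-and-error network-size choice the abstract says is eliminated: tighter vigilance yields more nodes automatically.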

9.
This work presents a new sequential learning algorithm for radial basis function (RBF) networks, referred to as the generalized growing and pruning algorithm for RBF (GGAP-RBF). The paper first introduces the concept of significance for the hidden neurons and then uses it in the learning algorithm to realize parsimonious networks. The growing and pruning strategy of GGAP-RBF is based on linking the required learning accuracy with the significance of the nearest or intentionally added new neuron. The significance of a neuron is a measure of its average information content. The GGAP-RBF algorithm can be used with any arbitrary sampling density of the training samples and is derived from a rigorous statistical point of view. Simulation results on benchmark problems in function approximation show that GGAP-RBF outperforms several other sequential learning algorithms in terms of learning speed, network size, and generalization performance, regardless of the sampling density function of the training data.

10.
A pruning-based robust backpropagation training algorithm is proposed for the online tuning of a Radial Basis Function (RBF) network tracking control system. The structure of the RBF network controller is derived using a filtered error approach. The proposed method begins with a relatively large network, and certain neural units of the RBF network are dropped by examining the estimation error increment. A complete convergence proof is provided in the presence of disturbance.

11.
To address the drawback that traditional radial basis function (RBF) networks have difficulty determining an iteration stopping condition, an algorithm is proposed that trains a multi-scale RBF network by minimizing the leave-one-out (LOO) error. Candidate sets of RBF center and scale parameters are constructed using the global k-means clustering algorithm and empirical selection, respectively; orthogonal forward selection is then used to minimize the LOO error step by step, determining each center and scale parameter of the network. Experimental results show that the algorithm terminates node selection automatically, requiring no additional stopping criterion, and produces RBF networks that are sparser and generalize better than traditional ones.
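Forward selection driven by the leave-one-out (LOO) error is practical because, for models linear in their parameters, the LOO residuals have a closed form and never require explicit refitting. A sketch verifying that identity on a toy linear model (the toy setup is an assumption; the identity itself is standard):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
y = X @ np.array([1.0, -1.0, 0.5, 2.0]) + 0.1 * rng.normal(size=30)

# Closed-form LOO residuals for a linear-in-parameters model:
# e_loo_i = e_i / (1 - h_ii), with H = X (X^T X)^{-1} X^T the hat matrix.
H = X @ np.linalg.solve(X.T @ X, X.T)
e = y - H @ y
e_loo = e / (1.0 - np.diag(H))
loo_mse = float(np.mean(e_loo ** 2))

# brute-force check: refit with each sample held out
brute = []
for i in range(len(y)):
    m = np.arange(len(y)) != i
    w_i, *_ = np.linalg.lstsq(X[m], y[m], rcond=None)
    brute.append(y[i] - X[i] @ w_i)
brute_mse = float(np.mean(np.array(brute) ** 2))
```

This cheap LOO estimate is what lets the selection loop stop by itself: adding a node that no longer reduces `loo_mse` is the natural termination point.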

12.
李鹏华  柴毅  熊庆宇 《自动化学报》2013,39(9):1511-1522
To improve the learning speed and generalization of Elman neural networks, a new Elman model with a quantum-gate structure, together with a gradient-based extended backpropagation learning algorithm, is proposed. The new model consists of quantum-bit neurons and classical neurons. The network uses a quantum mapping layer to ensure pattern consistency between the local feedback from the context units and the hidden-layer input, and exploits the complementary relation between quantum-bit neuron outputs and the corresponding quantum-gate parameter updates to strengthen the network's update dynamics. The learning algorithm adopts a search-then-converge strategy to adaptively adjust the learning rate and thereby speed up learning; by extending the context-unit weights into the hidden-layer weight matrix, the context units acquire additional time-series information while being updated synchronously with the hidden-layer weights, improving the match between the context-unit outputs and the hidden-layer inputs. Numerical experiments on peak detection show that, during quantum backpropagation learning, the quantum-gate Elman network learns quickly and generalizes well.
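The search-then-converge strategy keeps the learning rate roughly constant early on ("search") and decays it like 1/t later ("converge"). One common schedule of this kind is sketched below; the exact form used in the paper is not specified, so this parameterization is an assumption:

```python
import numpy as np

def search_then_converge(eta0, tau, t):
    # roughly eta0 while t << tau, ~ eta0 * tau / t once t >> tau
    return eta0 / (1.0 + t / tau)

t = np.arange(0, 1000)
eta = search_then_converge(0.5, 100.0, t)
```

Early on the near-constant rate lets the weights explore; later the 1/t decay lets them settle, which is the speed/stability trade-off the abstract attributes to the adaptive schedule.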

13.
Training a classifier with good generalization capability is a major issue in pattern classification. A novel training objective function for Radial Basis Function (RBF) networks, based on a localized generalization error model (L-GEM), is proposed in this paper. The localized generalization error model provides a generalization error bound for unseen samples located within a neighborhood containing all training samples. The assumption in L-GEM of the same width for all dimensions of a hidden neuron is relaxed in this work. The parameters of the RBF network are selected by minimizing the proposed objective function, and hence its localized generalization error bound. The characteristics of the proposed objective function are compared with those of regularization methods. For weight selection, RBF networks trained by minimizing the proposed objective function consistently outperform RBF networks trained by minimizing the training error, Tikhonov regularization, weight decay, or locality regularization. The proposed objective function is also applied to select the centers, widths, and weights of an RBF network simultaneously. RBF networks trained by minimizing the proposed objective function yield better testing accuracies than those trained by minimizing the training error only.

14.
To avoid oversized feedforward networks, we propose that after Cascade-Correlation learning the network is fine-tuned with the backpropagation algorithm. Our experiments show that with Cascade-Correlation learning alone the network may require a large number of hidden units to reach the desired error level. However, if the network is additionally fine-tuned with backpropagation, the desired error level can be reached with a much smaller number of hidden units. It is also shown that the combined Cascade-Correlation backpropagation training is a faster scheme than backpropagation training alone.

15.
In this paper a new methodology for training radial basis function (RBF) neural networks is introduced and examined. This novel approach, called Fuzzy-OSD, is suited to applications that require real-time retraining of neural networks. The proposed method uses fuzzy clustering to improve the Optimum Steepest Descent (OSD) learning algorithm: initializing the RBF units more precisely with the fuzzy C-Means clustering algorithm yields better and more consistent network responses across different retraining attempts. In addition, adjusting the RBF units with greater accuracy results in better performance in fewer training iterations, which is essential when fast retraining is needed, especially in real-time systems. We employed the new method in an online radar pulse classification system, which requires quick retraining of the network once new unseen emitters are detected. Comparing the new algorithm with the Three-Phase OSD method on benchmark problems from the Proben1 database, and within our system, we achieved the improvements presented in this paper.

16.
We present an algorithmic variant of the simplified fuzzy ARTMAP (SFAM) network, whose structure resembles that of feed-forward networks. Its difference from Kasuba's model is discussed, and their performances are compared on two benchmarks. We show that our algorithm is much faster than Kasuba's algorithm, and as the number of training samples increases, the difference in speed grows enormously. The performances of the SFAM and the MLP (multilayer perceptron) are compared on three problems: the two benchmarks and the Farsi optical character recognition (OCR) problem. For training the MLP, two variants of the backpropagation algorithm are used: the BPLRF algorithm (backpropagation with plummeting learning rate factor) for the benchmarks, and the BST algorithm (backpropagation with selective training) for the Farsi OCR problem. The results obtained on all three case studies with the MLP and the SFAM, embedded in their customized systems, show that the SFAM's convergence in fast-training mode is faster than that of the MLP, while online operation of the MLP is faster than that of the SFAM. On the benchmark problems the MLP has a much better recognition rate than the SFAM. On the Farsi OCR problem, the recognition error of the SFAM is higher than that of the MLP on ill-engineered datasets, but equal on well-engineered ones. The flexible configuration of the SFAM, i.e. its capability to grow the network in order to learn new patterns, as well as its simple parameter adjustment, remain unchallenged by the MLP.

17.
This paper describes four neural networks: the multilayer perceptron (MLP) network, the Elman network, the NARXSP network, and the radial basis function (RBF) network. The neural networks are applied to the identification and control of a DC servo motor and a benchmark nonlinear system. The number of epochs required and the time taken to train the controller are shown as bar plots for the four neural networks. The Levenberg-Marquardt algorithm is used for training the controllers with the neural network toolbox in MATLAB. Each neural network controller is run ten times, and their performances are compared per run in terms of the number of epochs required and the time taken to train each controller to track a reference trajectory.
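The Levenberg-Marquardt update used to train these controllers solves damped normal equations at each iteration, interpolating between Gauss-Newton (small damping) and gradient descent (large damping). A minimal sketch on a linear least-squares toy problem, where the Jacobian is constant (an illustration, not the paper's controller setup):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([2.0, -1.0, 0.5])       # noiseless toy target

w = np.zeros(3)
mu = 1e-2                                 # damping factor (assumed fixed here;
for _ in range(20):                       # LM normally adapts it per step)
    e = y - X @ w                         # residuals
    J = -X                                # Jacobian of residuals w.r.t. w
    # damped normal equations: (J^T J + mu I) dw = -J^T e
    dw = np.linalg.solve(J.T @ J + mu * np.eye(3), -J.T @ e)
    w = w + dw
```

For a nonlinear network, J would be recomputed each iteration and mu increased when a step fails to reduce the error, which is what gives LM its fast yet stable convergence.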

18.
To address the problem that an excessive number of hidden-layer nodes makes the RBF neural network structure complex, an RBF network optimization algorithm based on an improved genetic algorithm (IGA) is proposed. The IGA is used to optimize the structure of an RBF network based on orthogonal least squares, performing a global search over the column vectors of the hidden-layer output matrix, thereby designing a structurally superior IGA-based RBF network (IGA-RBF). Applying the IGA-RBF learning algorithm to a temperature and humidity prediction model for the storage environment of electronic components, and comparing it with the RBF network based on orthogonal least squares, shows that the IGA-RBF network requires 44 fewer training steps and 34 fewer hidden-layer nodes, yields smaller temperature and humidity errors in the prediction model, and achieves a fitting accuracy above 0.95, i.e. higher prediction accuracy.

19.
The relationship between backpropagation and extended Kalman filtering for training multilayer perceptrons is examined. These two techniques are compared theoretically and empirically using sensor imagery. Backpropagation is a technique from neural networks for assigning weights in a multilayer perceptron; an extended Kalman filter can also be used for this purpose. A brief review of the multilayer perceptron and these two training methods is provided. It is then shown that backpropagation is a degenerate form of the extended Kalman filter. The training rules are compared in two examples: an image classification problem using laser radar Doppler imagery and a target detection problem using absolute range images. In both examples, the backpropagation training algorithm is shown to be three orders of magnitude less costly than the extended Kalman filter algorithm in terms of the number of floating-point operations.
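The degeneracy claim can be illustrated in one line of algebra: if the weight covariance in the extended Kalman filter is frozen at a scaled identity, the Kalman gain collapses to a scaled gradient, i.e. a backprop step. A hedged toy check for a scalar output and a single pattern (an illustration, not the paper's full derivation):

```python
import numpy as np

# EKF weight update for a scalar-output network with Jacobian row h:
#   w += K * (y - y_hat),   K = P h / (h P h^T + r)
# Freezing P = eta * I makes K proportional to h, i.e. the error
# gradient direction used by backpropagation on squared error.
rng = np.random.default_rng(5)
h = rng.normal(size=3)              # Jacobian of the output w.r.t. weights
y, y_hat = 1.0, 0.0                 # target and current network output
eta, r = 0.1, 10.0                  # frozen covariance scale, noise variance

P = eta * np.eye(3)
K_ekf = P @ h / (h @ P @ h + r)
step_ekf = K_ekf * (y - y_hat)

# equivalent scaled backprop gradient step
step_bp = (eta / (h @ P @ h + r)) * h * (y - y_hat)
```

The full EKF differs precisely in that P is propagated rather than frozen, which is where its extra floating-point cost, noted in the abstract, comes from.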

20.
Recent research has linked backpropagation (BP) and radial basis function (RBF) network classifiers, trained by minimizing the standard mean square error (MSE), to two main topics in statistical pattern recognition (SPR), namely the Bayes decision theory and discriminant analysis. However, so far, the establishment of these links has resulted in only a few practical applications for training, using, and evaluating these classifiers. The paper aims at providing more of these applications. It first illustrates that while training a linear output BP network, the explicit utilization of the network discriminant capability leads to an improvement in its classification performance. Then, for linear output BP and RBF networks, the paper defines a new generalization measure that provides information about the closeness of the network classification performance to the optimal performance. The estimation procedure of this measure is described and its use as an efficient criterion for terminating the learning algorithm and choosing the network topology is explained. The paper finally proposes an upper bound on the number of hidden units needed by an RBF network classifier to achieve an arbitrary value of the minimized MSE. Experimental results are presented to validate all proposed applications.

