Similar Articles
10 similar articles found (search time: 171 ms)
1.
A new efficient computational technique for training multilayer feedforward neural networks is proposed. The proposed algorithm consists of two learning phases. The first phase is a local search that implements gradient descent, and the second is a direct search that implements dynamic tunneling in weight space, avoiding local traps and thereby generating the point of the next descent. Applying these two phases alternately and repeatedly forms a new training procedure that reaches a global minimum point from any arbitrary initial choice in the weight space. Simulation results are provided for five test examples to demonstrate the efficiency of the proposed method, which overcomes the problems of initialization and local minima in multilayer perceptrons.
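A minimal Python/NumPy sketch of the two-phase scheme, assuming caller-supplied loss and grad functions; the tunneling phase here is a simplified perturb-and-probe stand-in for the dynamic tunneling dynamics, not the authors' exact formulation:

    import numpy as np

    def two_phase_train(w, loss, grad, lr=0.01, tol=1e-6, max_rounds=20):
        """Alternate gradient descent (phase 1) with a tunneling phase (phase 2)."""
        best_w, best_f = w.copy(), loss(w)
        for _ in range(max_rounds):
            # Phase 1: gradient descent until the gradient (nearly) vanishes.
            for _ in range(10000):
                g = grad(w)
                w = w - lr * g
                if np.linalg.norm(g) < tol:
                    break
            f_local = loss(w)
            if f_local < best_f:
                best_w, best_f = w.copy(), f_local
            # Phase 2: search outward for a point with lower loss, which
            # becomes the starting point of the next descent.
            for radius in np.geomspace(1e-3, 1.0, 12):
                cand = w + radius * np.random.randn(*w.shape)
                if loss(cand) < f_local:      # escaped the current basin
                    w = cand
                    break
            else:
                break                          # no escape found: stop
        return best_w, best_f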

2.
High-order and multilayer perceptron initialization
Proper initialization is one of the most important prerequisites for fast convergence of feedforward neural networks such as high-order and multilayer perceptrons. This publication aims at determining the optimal variance (or range) of the initial weights and biases, which is the principal parameter of random initialization methods for both types of neural networks. An overview of random weight initialization methods for multilayer perceptrons is presented. These methods are extensively tested on eight real-world benchmark data sets over a broad range of initial weight variances, by means of more than 30,000 simulations, with the aim of finding the best weight initialization method for multilayer perceptrons. For high-order networks, a large number of experiments (more than 200,000 simulations) was performed, using three weight distributions, three activation functions, several network orders, and the same eight data sets. The results of these experiments are compared with weight initialization techniques for multilayer perceptrons, leading to the proposal of a suitable initialization method for high-order perceptrons. The conclusions on the initialization methods for both types of network are justified by sufficiently small confidence intervals on the mean convergence times.
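A sketch of the quantity under study, in Python/NumPy; the layer shapes and the placeholder train_until_converged routine are illustrative assumptions, not from the paper:

    import numpy as np

    def init_uniform(shape, r):
        """Uniform initialisation in [-r, r]; its variance r**2 / 3 is the
        parameter whose optimal value the paper searches for."""
        return np.random.uniform(-r, r, size=shape)

    # Sketch of the experimental sweep: for each candidate range r, initialise
    # the same network and record the convergence time (the training routine
    # train_until_converged is a placeholder, not a real API).
    for r in [0.05, 0.1, 0.3, 0.5, 1.0, 2.0]:
        W1 = init_uniform((8, 16), r)    # assumed layer shapes, for illustration
        W2 = init_uniform((16, 1), r)
        # epochs = train_until_converged(W1, W2)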

3.

Over the past few years, neural networks have exhibited remarkable results in various applications of machine learning and computer vision. Weight initialization is a significant step performed before training any neural network: the weights are initialized and then adjusted repeatedly during training, until the loss converges to a minimum value and an ideal weight matrix is obtained. Weight initialization therefore directly drives the convergence of a network, and selecting an appropriate weight initialization scheme is necessary for end-to-end training. An appropriate technique initializes the weights such that training of the network is accelerated and performance is improved. This paper discusses various advances in weight initialization for neural networks. The weight initialization techniques adopted in the literature for feed-forward neural networks, convolutional neural networks, recurrent neural networks and long short-term memory networks are discussed. These techniques are classified as (1) initialization techniques without pre-training, further divided into random initialization and data-driven initialization, and (2) initialization techniques with pre-training. The different weight initialization and weight optimization techniques that select optimal weights for non-iterative training mechanisms are also discussed. We provide a close overview of the different initialization schemes in these categories. The paper concludes with discussions on existing schemes and the future scope for research.
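As an illustration of the "random initialization without pre-training" category, a minimal sketch of two widely used schemes (Xavier/Glorot and He initialization); these are standard formulas, shown here only as representatives of the category:

    import numpy as np

    def glorot_uniform(fan_in, fan_out):
        # Xavier/Glorot: variance 2 / (fan_in + fan_out), suited to tanh/sigmoid.
        limit = np.sqrt(6.0 / (fan_in + fan_out))
        return np.random.uniform(-limit, limit, size=(fan_in, fan_out))

    def he_normal(fan_in, fan_out):
        # He: variance 2 / fan_in, suited to ReLU activations.
        return np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)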


4.
Training neural networks with the BP algorithm suffers from slow convergence and a tendency to become trapped in local minima. Applying dynamic tunneling to BP training effectively mitigates the local-minimum problem, but the traditional dynamic-tunneling BP training algorithm is unstable in its choice of tunneling direction. This paper proposes a multi-track dynamic tunneling algorithm for training BP networks: building on the original technique, it adds additional tunneling search directions and accounts for the interactions between them, which markedly improves the search efficiency of the original algorithm. A performance analysis of the new algorithm is given, and experiments on two data sets confirm that it outperforms the traditional dynamic-tunneling BP training algorithm.
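A hedged sketch of the multi-track idea in Python/NumPy: several tunneling directions are probed from the trapped point and the best escaping candidate is kept. This is an illustrative simplification, not the paper's exact update rule:

    import numpy as np

    def multi_track_tunnel(w, loss, f_local, n_tracks=8, radius=0.5):
        """Probe several tunneling directions at once from the trapped point
        w and return the best candidate that undercuts the local loss
        f_local, or None if no track escapes."""
        best = None
        for _ in range(n_tracks):
            d = np.random.randn(*w.shape)
            d /= np.linalg.norm(d)
            cand = w + radius * d
            f = loss(cand)
            if f < f_local and (best is None or f < best[1]):
                best = (cand, f)
        return best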

5.
This paper presents a novel Heuristic Global Learning (HER-GBL) algorithm for multilayer neural networks. The algorithm is based upon the least squares method to maintain a fast convergence speed, and on penalized optimization to solve the problem of local minima. The penalty term, defined as a Gaussian-type function of the weights, provides an uphill force for escaping from local minima. As a result, training performance is dramatically improved. The proposed HER-GBL algorithm yields excellent results in terms of convergence speed, avoidance of local minima and quality of solution.
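A minimal sketch of a Gaussian-type penalty added to the gradient, assuming the stall point w_trap is known; lam and sigma are assumed tuning knobs, not the paper's values:

    import numpy as np

    def penalized_grad(w, grad, w_trap, lam=1.0, sigma=0.5):
        """Gradient of the loss plus a Gaussian-type penalty centred at the
        weights w_trap where training stalled. Descending on this total
        gradient drives w away from w_trap (the 'uphill force')."""
        diff = w - w_trap
        bump = lam * np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))
        return grad(w) - (bump / sigma ** 2) * diff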

6.
A novel improvement in neural network training for pattern classification is presented in this paper. The proposed training algorithm is inspired by the biological metaplasticity property of neurons and by Shannon's information theory. The algorithm is applicable to artificial neural networks (ANNs) in general, although here it is applied to a multilayer perceptron (MLP). During the training phase, the artificial metaplasticity multilayer perceptron (AMMLP) algorithm assigns higher weight-update values to the less frequent activations than to the more frequent ones. AMMLP thus achieves more efficient training and improves MLP performance. The well-known and readily available Wisconsin Breast Cancer Database (WBCD) was used to test the algorithm. Performance of the AMMLP was evaluated through classification accuracy, sensitivity and specificity analysis, and confusion matrix analysis. The results obtained by AMMLP are compared with the backpropagation algorithm (BPA) and other recent classification techniques applied to the same database. The best result obtained so far with the AMMLP algorithm is 99.63%.
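A hedged sketch of the metaplasticity-style weighting: the backprop step is divided by a crude Gaussian-like estimate of the current pattern's probability, so rarer activations update the weights more. The probability proxy and the constants below are assumptions for illustration, not the AMMLP paper's exact function:

    import numpy as np

    def ammlp_step(w, grad_w, x, lr=0.01, A=1.0, B=0.2):
        """Scale the usual backprop step by the inverse of a Gaussian-like
        estimate of the current pattern's probability: rare activations
        (low estimated probability) get larger updates. A and B are
        assumed constants, not the paper's exact values."""
        p_est = A * np.exp(-B * np.dot(x, x))       # crude probability proxy
        return w - lr * grad_w / max(p_est, 1e-6)   # rarer pattern => bigger step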

7.
To address the slow convergence of feedforward neural network (FNN) training, its tendency to fall into local extrema, and its strong dependence on the initial weights, this paper proposes a backpropagation-based particle swarm optimization algorithm using the iterative chaotic map with infinite collapses (ICMICPSO) to train FNN parameters. Building on the error backpropagation and gradient information of the BP algorithm, the method introduces an ICMIC chaotic particle swarm: the swarm (ICMICPS) acts as the global searcher while gradient-descent information acts as the local searcher for adjusting the network's weights and thresholds, so the particles can explore the whole space while converging toward the global optimum. Simulation experiments comparing the method with several other algorithms show that ICMICPSO-BPNN is clearly superior in both training and generalization ability.
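A sketch of the two ingredients, assuming the standard form of the ICMIC map (x' = sin(a/x)) and a textbook PSO update in which the chaotic sequence replaces the usual uniform random factors; the BP gradient refinement of the global best is omitted:

    import numpy as np

    def icmic(x, a=2.0):
        """Iterative chaotic map with infinite collapses: x' = sin(a / x)."""
        return np.sin(a / x)

    def chaotic_pso_step(pos, vel, pbest, gbest, chaos,
                         inertia=0.7, c1=1.5, c2=1.5):
        """One PSO update in which the ICMIC sequence replaces the usual
        uniform random factors. Initialise chaos to a nonzero value
        (e.g. 0.7); the gradient-descent local search is not shown."""
        chaos = icmic(chaos)
        r1 = np.abs(chaos)          # chaos-derived factor in [0, 1]
        chaos = icmic(chaos)
        r2 = np.abs(chaos)
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        return pos + vel, vel, chaos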

8.
Parameter Incremental Learning Algorithm for Neural Networks
In this paper, a novel stochastic (or online) training algorithm for neural networks, named the parameter incremental learning (PIL) algorithm, is proposed and developed. The main idea of the PIL strategy is that the learning algorithm should not only adapt to the newly presented input-output training pattern by adjusting the parameters, but also preserve the prior results. A general PIL algorithm for feedforward neural networks is accordingly presented as the first-order approximate solution to an optimization problem whose performance index combines proper measures of preservation and adaptation. PIL algorithms for the multilayer perceptron (MLP) are subsequently derived. Numerical studies show that, for all three benchmark problems used in this paper, the PIL algorithm for the MLP is measurably superior to the standard online backpropagation (BP) algorithm and the stochastic diagonal Levenberg-Marquardt (SDLM) algorithm in terms of convergence speed and accuracy. Other appealing features of the PIL algorithm are that it is computationally as simple as, and as easy to use as, the BP algorithm. It can therefore be applied, with better performance, in any situation where the standard online BP algorithm is applicable.
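Linearising the pattern error around the current weights and minimising a weighted sum of a preservation term ||Δw||² and an adaptation term yields a normalised, NLMS-like step; a sketch of that first-order solution (mu is the assumed preservation weight):

    import numpy as np

    def pil_step(w, e, g, mu=1.0):
        """First-order PIL-style update for one pattern: minimise the
        preservation term ||dw||**2 plus an adaptation term for the new
        pattern. Linearising the pattern error e around w gives this
        normalised (NLMS-like) step; mu weighs preservation."""
        return w - (e / (mu + np.dot(g, g))) * g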

9.
This paper studies a hybrid of ant colony optimization and the RPROP algorithm for BP neural networks, and applies it in wireless sensor networks to the problems of handling the large volume of redundant data at information service nodes and of network running speed. By constructing the system architecture and the information service nodes, it is shown that the algorithm extends the lifetime of the BP neural network and accelerates its convergence: duplicate data from the network's information service nodes are merged effectively, data from abnormal service nodes are filtered out promptly, the energy consumption of the data service nodes is reduced, and the number of training iterations improves markedly. The approach alleviates the redundant learning and training time of BP networks, offers strong computation and optimization ability, and raises both classification accuracy and operating efficiency, making it a practical algorithm that can meet the needs of the growing number of wireless Internet terminals.
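A sketch of the RPROP half of the hybrid (a common iRPROP-style variant); the ant colony optimization stage, which would supply the starting weights, is assumed to have run already and is not shown:

    import numpy as np

    def rprop_step(w, g, g_prev, step, eta_plus=1.2, eta_minus=0.5,
                   step_max=50.0, step_min=1e-6):
        """One iRPROP-style update: per-weight step sizes grow while the
        gradient keeps its sign and shrink when it flips; only the sign of
        the gradient is used. Returns (new_w, g, step); feed g back in as
        g_prev on the next call."""
        same = g * g_prev > 0
        flip = g * g_prev < 0
        step = np.where(same, np.minimum(step * eta_plus, step_max), step)
        step = np.where(flip, np.maximum(step * eta_minus, step_min), step)
        g = np.where(flip, 0.0, g)      # skip the update after a sign flip
        return w - np.sign(g) * step, g, step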

10.
This paper introduces a methodology for neural network global optimization. The aim is the simultaneous optimization of multilayer perceptron (MLP) weights and architectures, in order to generate topologies with few connections and high classification performance for any data set. The approach combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm to create an automatic process for producing networks with high classification performance and low complexity. Experimental results obtained on four classification problems and one prediction problem are better than those obtained by the most commonly used optimization techniques.
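A compact sketch of the combined search, assuming the state is a flat NumPy array encoding both weights and a connectivity mask, with neighbour and cost supplied by the caller; the final backpropagation fine-tuning stage is omitted:

    import numpy as np

    def sa_tabu_search(state, cost, neighbour, T0=1.0, cooling=0.95,
                       iters=500, tabu_len=20):
        """Simulated annealing over (weights, connectivity-mask) states with
        a short tabu list of recently accepted states."""
        tabu = []
        T = T0
        cur, cur_c = state, cost(state)
        best, best_c = cur, cur_c
        for _ in range(iters):
            cand = neighbour(cur)        # mutate weights or flip one connection
            key = cand.tobytes()
            if key in tabu:
                T *= cooling
                continue
            c = cost(cand)
            # Metropolis acceptance: take improvements, sometimes worse moves.
            if c < cur_c or np.random.rand() < np.exp((cur_c - c) / T):
                cur, cur_c = cand, c
                tabu = (tabu + [key])[-tabu_len:]
                if c < best_c:
                    best, best_c = cand.copy(), c
            T *= cooling
        return best, best_c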
