Similar Articles
20 similar articles found.
1.
A novel learning algorithm, the Recurrent Neural Network Constrained Optimization Method (RENNCOM), is suggested in this paper for training block-diagonal recurrent neural networks. The training task is formulated as a constrained optimization problem whose objective is twofold: (1) minimization of an error measure, leading to successful approximation of the input/output mapping, and (2) optimization of an additional functional, the payoff function, which aims at ensuring network stability throughout the learning process. Once the network and training stability conditions are assured, the payoff function is switched to an alternative form intended to accelerate learning. Simulation results on a benchmark identification problem demonstrate that, compared to other learning schemes with stabilizing attributes, the RENNCOM algorithm has enhanced qualities, including improved speed of convergence, accuracy, and robustness. The proposed algorithm is also applied to the analysis of lung sounds. In particular, a filter based on block-diagonal recurrent neural networks is developed and trained with the RENNCOM method. Extensive experimental results are given, and performance comparisons with a series of other models underline the effectiveness of the proposed filter.
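As a rough illustration of the block-diagonal architecture the paper builds on, the sketch below updates a recurrent state whose feedback matrix consists of independent 2x2 blocks. The dimensions, the tanh activation, and the spectral-radius scaling used to keep the recurrence stable are illustrative assumptions, not the paper's exact model or the RENNCOM training method.

```python
import numpy as np

def bdrnn_step(x, u, blocks, W_in):
    """One state update x_{k+1} = tanh(blockdiag(blocks) @ x_k + W_in @ u_k)."""
    x_new = np.empty_like(x)
    for i, B in enumerate(blocks):
        # Each 2x2 block feeds back only onto its own pair of state neurons.
        x_new[2 * i:2 * i + 2] = B @ x[2 * i:2 * i + 2]
    return np.tanh(x_new + W_in @ u)

rng = np.random.default_rng(0)
blocks = []
for _ in range(3):
    B = rng.standard_normal((2, 2))
    rho = np.abs(np.linalg.eigvals(B)).max()
    blocks.append(0.5 * B / rho)  # spectral radius 0.5: a simple stability margin

W_in = rng.standard_normal((6, 2))
x = np.zeros(6)
for _ in range(10):
    x = bdrnn_step(x, np.array([1.0, -1.0]), blocks, W_in)
```

Scaling each block to spectral radius below one is only one simple way to enforce a stable recurrence; the paper instead maintains stability through its constrained-optimization formulation.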

2.
In recent years, a recurrent neural network called the projection neural network was proposed for solving monotone variational inequalities and related convex optimization problems. In this paper, we show that the projection neural network can also be used to solve pseudomonotone variational inequalities and related pseudoconvex optimization problems. Under various pseudomonotonicity conditions and other conditions, the projection neural network is proved to be stable in the sense of Lyapunov and globally convergent, globally asymptotically stable, and globally exponentially stable. Since monotonicity is a special case of pseudomonotonicity, the projection neural network can be applied to solve a broader class of constrained optimization problems related to variational inequalities. Moreover, a new concept, called componentwise pseudomonotonicity, different from pseudomonotonicity in general, is introduced. Under this new concept, two stability results of the projection neural network for solving variational inequalities are also obtained. Finally, numerical examples show the effectiveness and performance of the projection neural network.
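A minimal numerical sketch of the projection neural network dynamics dx/dt = λ(P(x − αF(x)) − x), applied here to the monotone variational inequality arising from a small box-constrained convex QP (monotone being the special case of pseudomonotone noted in the abstract). The matrix, bounds, and step sizes are illustrative choices, not from the paper.

```python
import numpy as np

def project(x, lo, hi):
    # Projection onto the box [lo, hi] is just elementwise clipping.
    return np.clip(x, lo, hi)

def projection_network(Q, c, lo, hi, x0, alpha=0.1, lam=1.0, dt=0.01, steps=20000):
    """Euler-integrate dx/dt = lam * (P(x - alpha*F(x)) - x) with F(x) = Qx + c."""
    x = x0.astype(float)
    for _ in range(steps):
        F = Q @ x + c
        x = x + dt * lam * (project(x - alpha * F, lo, hi) - x)
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite -> monotone F
c = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)
x_star = projection_network(Q, c, lo, hi, x0=np.array([0.5, 0.5]))

# At an equilibrium, x* = P(x* - alpha*F(x*)), i.e. x* solves the VI.
residual = np.linalg.norm(
    x_star - project(x_star - 0.1 * (Q @ x_star + c), lo, hi))
```

For this instance the unconstrained minimizer (0, 2) violates the upper bound, and the network settles on the constrained solution (0.25, 1) instead.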

3.
Recently, a projection neural network for solving monotone variational inequalities and constrained optimization problems was developed. In this paper, we propose a general projection neural network for solving a wider class of variational inequalities and related optimization problems. In addition to its simple structure and low complexity, the proposed neural network includes existing neural networks for optimization, such as the projection neural network, the primal-dual neural network, and the dual neural network, as special cases. Under various mild conditions, the proposed general projection neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable. Furthermore, several improved stability criteria on two special cases of the general projection neural network are obtained under weaker conditions. Simulation results demonstrate the effectiveness and characteristics of the proposed neural network.

4.
Neural network for quadratic optimization with bound constraints
A recurrent neural network is presented which performs quadratic optimization subject to bound constraints on each of the optimization variables. The network is shown to be globally convergent, and conditions on the quadratic problem and the network parameters are established under which exponential asymptotic stability is achieved. Through suitable choice of the network parameters, the system of differential equations governing the network activations is preconditioned in order to reduce its sensitivity to noise and to roundoff errors. The optimization method employed by the neural network is shown to fall into the general class of gradient methods for constrained nonlinear optimization and, in contrast with penalty function methods, is guaranteed to yield only feasible solutions.

5.
The optimization capability of a Hopfield network is used to solve constrained multivariable dynamic matrix control problems. The algorithm is simple and converges well; it can be implemented in real time in hardware or solved by numerical integration. Simulation of the Wood-Berry distillation column demonstrates the effectiveness of the algorithm.

6.
Detecting the fertility of hatching eggs with an improved particle swarm neural network
A method for automatically detecting the fertility of hatching eggs based on a neural network optimized by an improved particle swarm algorithm is proposed. The H component of the HSI image is extracted as the surface-color feature of the hatching egg, and principal component analysis yields six principal-component feature vectors, reducing the number of neural network input nodes. The improved particle swarm algorithm optimizes the topology of a multilayer feedforward neural network, improving the quality and speed of learning. Because the training samples are sufficiently representative and comprehensive, the generalization ability of the network is improved. Experiments show that the method achieves high detection accuracy, robustness, and efficiency.

7.
When a neural network is applied to optimization, the ideal situation is that it has a single globally asymptotically stable equilibrium point and approaches it at an exponential rate, reducing the computation time the network requires. This paper studies the global asymptotic stability of recurrent neural networks with time-varying delays. The model under study is first transformed into a descriptor system model; then, using the Lyapunov-Krasovskii stability theorem, linear matrix inequality (LMI) techniques, the S-procedure, and algebraic inequality methods, new sufficient conditions ensuring the asymptotic stability of recurrent neural networks with time-varying delays are obtained. Applying these conditions to neural networks with constant delays and to delayed cellular neural network models yields the corresponding global asymptotic stability conditions. Theoretical analysis and numerical simulations show that the results provide new stability criteria for delayed recurrent neural networks.

8.
To address the problem that an excessive number of hidden-layer nodes makes the structure of an RBF neural network overly complex, an RBF neural network optimization algorithm based on an improved genetic algorithm (IGA) is proposed. The IGA optimizes the structure of an RBF neural network based on orthogonal least squares by performing a global search over the column vectors of the hidden-layer output matrix, yielding an IGA-based RBF network (IGA-RBF) with a better structure. The IGA-RBF learning algorithm is applied to a model for predicting temperature and humidity in electronic-component storage environments. Compared with the RBF neural network based on orthogonal least squares, the IGA-RBF network requires 44 fewer training steps and 34 fewer hidden-layer nodes, and the prediction model gives smaller temperature and humidity errors, with a fitting accuracy above 0.95 and higher prediction precision.

9.
Learning and recognition of complex moving objects
To address the problem of recognizing complex moving objects, an automatic recognition system for moving faces and objects based on a feedback stochastic neural network is proposed. The system features strong learning ability and fast, accurate detection and recognition of moving objects. Based on an in-depth analysis of the system's core, a feedback binary network, an efficient incremental Boltzmann learning algorithm suited to this neural network model is proposed. Experimental results show that the recognition system performs excellently, surpassing the TrueFace face recognition system of eTrue in several respects.

10.
Second-order neural nets for constrained optimization
Analog neural nets for constrained optimization are proposed as an analogue of Newton's algorithm in numerical analysis. The neural model is globally stable and can converge to the constrained stationary points. Nonlinear neurons are introduced into the net, making it possible to solve optimization problems where the variables take discrete values, i.e., combinatorial optimization.
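The Newton-method analogy can be sketched as a continuous Newton flow dx/dt = −H(x)⁻¹∇f(x), integrated here with Euler steps on a simple strictly convex function; the objective and step size are illustrative assumptions, not the paper's analog circuit model.

```python
import numpy as np

def newton_flow(grad, hess, x0, dt=0.1, steps=200):
    """Euler-integrate the continuous Newton flow dx/dt = -H(x)^{-1} grad f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Solve H(x) d = grad f(x) instead of forming the inverse explicitly.
        x = x - dt * np.linalg.solve(hess(x), grad(x))
    return x

# f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2, minimized at (1, -2).
grad = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 2)])
hess = lambda x: np.diag([2.0, 4.0])
x_min = newton_flow(grad, hess, [5.0, 5.0])
```

Unlike plain gradient flow, the Newton flow contracts every coordinate at the same rate regardless of the curvature, which is the appeal of second-order dynamics.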

11.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special problem which can be solved by the recurrent neural network. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region for any initial state, even outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results are given to demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.

12.
Liu, T., Liang, S., Xiong, Q., & Wang, K. (2019). Neural Processing Letters, 50(3), 2161-2182.

This paper proposes a diagonal recurrent neural network (DRNN) based identification scheme to handle the complexity and nonlinearity of high-power continuous microwave heating systems (HPCMHS). The new DRNN design involves a two-stage training process that couples an efficient forward model selection technique with gradient-based optimization. In the first stage, a compact recurrent network structure is obtained by a fast recursive algorithm in a stepwise forward procedure. To ensure stability, update rules are further developed using the Lyapunov stability criterion to tune the parameters of the reduced-size model at the second stage. The proposed approach is tested on an experimental regression problem and a practical HPCMHS identification task, and the results are compared with four typical network models. The results show that the new design demonstrates improved accuracy and model compactness with reduced computational complexity over the existing methods.
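A minimal sketch of the diagonal recurrent structure: each hidden neuron carries a single self-recurrent weight, so the recurrent matrix is diagonal and the model stays compact. The dimensions, tanh activation, and |w| < 1 stability margin are illustrative assumptions, not the paper's two-stage training scheme.

```python
import numpy as np

def drnn_forward(u_seq, W_in, w_rec, W_out):
    """Run a sequence through the DRNN; w_rec holds the diagonal recurrences."""
    h = np.zeros(len(w_rec))
    outputs = []
    for u in u_seq:
        # Diagonal recurrence: elementwise self-feedback instead of a full matrix.
        h = np.tanh(W_in @ u + w_rec * h)
        outputs.append(W_out @ h)
    return np.array(outputs)

rng = np.random.default_rng(1)
W_in = rng.standard_normal((4, 1))
w_rec = np.full(4, 0.5)          # |w_rec| < 1 keeps each self-loop stable
W_out = rng.standard_normal((1, 4))
y = drnn_forward([np.array([0.1]), np.array([0.2]), np.array([0.3])],
                 W_in, w_rec, W_out)
```

Compared with a fully recurrent layer, the diagonal form has O(n) rather than O(n²) recurrent parameters, which is what makes the stepwise structure selection in the abstract tractable.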


13.
To address the problem of imprecise weight selection in neural networks, a method that combines an improved particle swarm optimization algorithm with a BP neural network to select weights dynamically is proposed. The improved particle swarm optimization algorithm uses a dynamic inertia weight, with the cognitive and social parameters constraining each other. In addition, it incorporates differential evolution, giving the particles mutation and crossover operations that preserve swarm diversity. Based on the improved particle swarm optimization algorithm and a BP neural network, an IPSONN neural network model is constructed and applied to predicting wine quality. Experiments verify the effectiveness of the IPSONN model in terms of training precision, classification accuracy, and particle diversity.
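One of the improvements the abstract describes, a dynamic (linearly decreasing) inertia weight, can be sketched in a plain PSO loop. The swarm size, coefficients, velocity clamp, and the sphere objective are illustrative choices; the DE-style mutation/crossover and the coupling to a BP network are omitted here.

```python
import numpy as np

def pso(f, dim, n_particles=20, iters=200,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimize f over R^dim with PSO using a linearly decreasing inertia weight."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # dynamic inertia weight
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -2.0, 2.0)                 # simple velocity clamp
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda x: float(np.sum(x ** 2)), dim=3)
```

A large inertia weight early on favors exploration; shrinking it over the run shifts the swarm toward exploitation around the best-known point.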

14.
The Elman neural network is a typical recurrent neural network. An Adaptive Quantum-Behaved Particle Swarm Optimization (AQPSO) algorithm is proposed for training the parameters of an Elman network, improving its generalization ability. Prediction experiments on CIMC Group stock data show that the Elman network model obtained with the AQPSO algorithm not only has strong generalization ability but also good stability, making it of practical value for stock data prediction.

15.
This paper presents an efficient approach based on a recurrent neural network for solving constrained nonlinear optimization. More specifically, a modified Hopfield network is developed, and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points that represent an optimal feasible solution. The main advantage of the developed network is that it handles optimization and constraint terms in different stages with no interference from each other. Moreover, the proposed approach does not require specification of penalty and weighting parameters for its initialization. A study of the modified Hopfield model is also developed to analyse its stability and convergence. Simulation results are provided to demonstrate the performance of the proposed neural network.

16.
This paper studies a hybrid of ant colony optimization and the RPROP algorithm for BP neural networks, and proposes using this combination to handle the large amounts of redundant data at information service nodes in wireless sensor networks and related issues such as network running speed. By constructing the system architecture and information service nodes, it is shown that the algorithm can extend the life cycle of the BP neural network and accelerate its convergence; it effectively merges duplicate data at information service nodes, promptly filters out data from abnormal information service nodes, and reduces the energy consumption of data service nodes. The number of training iterations is markedly reduced, addressing the redundant learning and training time of BP neural networks. The algorithm also has strong computation and optimization capabilities, improving classification accuracy and operating efficiency. It is a practical algorithm that can fully meet the operating needs of the growing number of wireless Internet terminals.

17.
A variable-scale chaotic optimization method replaces gradient descent in the BP neural network; by continually shrinking the search space during optimization, it overcomes the tendency of the standard BP algorithm to fall into local minima and can effectively find globally optimal values for the BP network weights. Furthermore, an algorithm that organically combines variable-scale chaotic optimization with gradient descent is proposed, which effectively shortens the training time of the purely chaotic variable-scale BP algorithm. Simulation results show that the improved BP neural network is simple to implement, has strong optimum-seeking ability, and is highly efficient.
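A rough sketch of variable-scale chaotic search: logistic-map iterates are mapped onto a search interval that shrinks around the best point each round, and the result is then refined by a few gradient-descent steps, in the spirit of the combined algorithm. The 1-D objective, shrink factor, and step sizes are illustrative assumptions.

```python
import numpy as np

def chaotic_search(f, lo, hi, rounds=5, iters_per_round=200,
                   shrink=0.5, seed_z=0.37):
    """Variable-scale chaotic search on [lo, hi] using the logistic map."""
    z = seed_z                               # chaotic variable in (0, 1)
    best_x, best_val = None, np.inf
    for _ in range(rounds):
        for _ in range(iters_per_round):
            z = 4.0 * z * (1.0 - z)          # logistic map, fully chaotic at r = 4
            x = lo + z * (hi - lo)           # map chaos onto the current interval
            val = f(x)
            if val < best_val:
                best_x, best_val = x, val
        # Variable scale: shrink the search interval around the best point.
        half = shrink * (hi - lo) / 2.0
        lo, hi = best_x - half, best_x + half
    return best_x, best_val

f = lambda x: (x - 0.7) ** 2 + 0.1 * np.sin(20 * x)   # multimodal 1-D objective
x_best, v_best = chaotic_search(f, lo=-2.0, hi=2.0)

# Gradient refinement of the chaotic result (derivative written analytically).
df = lambda x: 2 * (x - 0.7) + 2.0 * np.cos(20 * x)
for _ in range(100):
    x_best -= 0.01 * df(x_best)
```

The chaotic phase supplies a good basin; the gradient phase then converges quickly within it, which is the division of labor the abstract describes.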

18.
Gas and coal-dust explosions and coal-and-gas outbursts cause great harm to coal mining enterprises. To address this, ant colony optimization is combined with BP neural network techniques for predicting gas emission rates, establishing a relatively accurate prediction model. The selection and optimized training of the BP network model are studied in detail, and ant colony optimization resolves the tendency of the BP neural network to fall into local convergence. Simulation and validation against real data show that the improved neural network algorithm achieves good results in predicting gas emission rates.

19.
BP neural networks are sensitive to initial weights and prone to falling into local optima, while the local search and exploitation abilities of the artificial bee colony algorithm are relatively weak. To address these problems, a neural network training method based on an improved artificial bee colony algorithm and backpropagation is proposed. Ideas from differential evolution are introduced to improve the artificial bee colony algorithm, and the search behavior of the onlooker bees is described more precisely. The improved artificial bee colony algorithm performs a global search for the initial weights of the neural network, preventing the network from falling into local optima. The new method is used to train neural networks for classification. Experimental results show that, compared with the standard BP neural network, the algorithm effectively improves classification accuracy and has strong generalization ability.

20.

Over the past few years, neural networks have exhibited remarkable results for various applications in machine learning and computer vision. Weight initialization is a significant step employed before training any neural network. The weights of a network are initialized and then adjusted repeatedly while training the network, until the loss converges to a minimum value and an ideal weight matrix is obtained. Thus weight initialization directly drives the convergence of a network, and the selection of an appropriate weight initialization scheme becomes necessary for end-to-end training. An appropriate technique initializes the weights such that the training of the network is accelerated and the performance is improved. This paper discusses various advances in weight initialization for neural networks. The weight initialization techniques in the literature adopted for feed-forward neural networks, convolutional neural networks, recurrent neural networks, and long short-term memory networks are discussed in this paper. These techniques are classified as (1) initialization techniques without pre-training, which are further classified into random initialization and data-driven initialization, and (2) initialization techniques with pre-training. The different weight initialization and weight optimization techniques which select optimal weights for non-iterative training mechanisms are also discussed. We provide a close overview of different initialization schemes in these categories. This paper concludes with discussions on existing schemes and the future scope for research.
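As a concrete instance of the random-initialization category discussed above, the sketch below draws weights with the classic Xavier/Glorot and He variance rules; the layer sizes are arbitrary illustrative values.

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    # Variance 2 / (fan_in + fan_out) keeps activation variance roughly
    # constant across tanh/sigmoid layers (Glorot & Bengio).
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_out, fan_in))

def he_init(fan_in, fan_out, rng):
    # Variance 2 / fan_in compensates for ReLU zeroing half the activations
    # (He et al.).
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_out, fan_in))

rng = np.random.default_rng(42)
W1 = xavier_init(256, 128, rng)   # tanh layer: 256 -> 128
W2 = he_init(128, 64, rng)        # ReLU layer: 128 -> 64
```

Both rules tie the weight variance to the layer fan, so that signal magnitudes neither explode nor vanish as depth grows, which is exactly the convergence concern the survey motivates.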

