Similar Documents
1.
To overcome the main drawback of penalty-function methods, namely that the penalty parameters are difficult to choose and control, a constraint-strength index is defined from a function measuring the degree to which an individual violates the constraints, and a new multi-parent crossover operator with strong global search ability is designed, yielding an effective constraint-strength-based evolutionary algorithm. Numerical comparisons show that its performance is superior to several existing evolutionary algorithms for constrained single-objective optimization.
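A minimal sketch of the idea, assuming a simple violation-degree function and a hypothetical definition of the constraint-strength index (the abstract does not give the paper's exact formulas); the multi-parent crossover shown is the standard random-affine-combination operator:

```python
import numpy as np

def violation_degree(x, ineq_constraints, eq_constraints, eps=1e-4):
    """Total degree to which a candidate x violates the constraints:
    g(x) <= 0 for each inequality, |h(x)| <= eps for each equality."""
    v = sum(max(0.0, g(x)) for g in ineq_constraints)
    v += sum(max(0.0, abs(h(x)) - eps) for h in eq_constraints)
    return v

def constraint_strength(population, ineq_constraints, eq_constraints):
    """Hypothetical constraint-strength index: fraction of the population whose
    violation degree is at least as large as the candidate's."""
    v = np.array([violation_degree(x, ineq_constraints, eq_constraints)
                  for x in population])
    return np.array([np.mean(v >= vi) for vi in v])

def multi_parent_crossover(parents, rng):
    """Random affine combination of several parents (weights sum to 1)."""
    w = rng.uniform(-0.5, 1.5, size=len(parents))
    w /= w.sum()
    return np.dot(w, np.asarray(parents))
```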

2.
A Neural Network Learning Algorithm Based on an Adaptive Genetic Algorithm
Combining the strengths of genetic algorithms and gradient descent, a hybrid optimization algorithm is proposed for training neural network weights that can also optimize the network structure. First, a genetic algorithm with reliable global search ability, using a hierarchical encoding scheme and an adaptive mutation probability, optimizes the weights and the structure simultaneously; when evolution ends, it reaches a point near the global optimum. Starting from this point, gradient descent, which has strong local search ability, performs a local search until the training target is met. Compared with a genetic algorithm or gradient descent alone, the hybrid algorithm converges noticeably faster.
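A rough two-stage sketch of such a hybrid, with a plain real-coded GA standing in for the hierarchical-encoding GA and a finite-difference gradient standing in for backpropagation; function and parameter names are illustrative:

```python
import numpy as np

def hybrid_train(loss, dim, rng, pop=40, gens=100, lr=0.05, steps=500):
    """Two-stage sketch: GA global search, then gradient-descent refinement.
    `loss` maps a flat weight vector to training error; the paper's joint
    weight/structure encoding is not reproduced."""
    # --- Stage 1: simple real-coded GA with a shrinking (adaptive) mutation ---
    P = rng.normal(0.0, 1.0, size=(pop, dim))
    for g in range(gens):
        fit = np.array([loss(w) for w in P])
        P = P[np.argsort(fit)]                  # best individuals first
        sigma = 0.5 * (1.0 - g / gens)          # mutation shrinks over generations
        children = []
        for _ in range(pop // 2):
            a, b = P[rng.integers(0, pop // 2, size=2)]
            children.append(0.5 * (a + b) + rng.normal(0.0, sigma, size=dim))
        P = np.vstack([P[:pop - len(children)], children])
    w = P[np.argmin([loss(p) for p in P])].copy()

    # --- Stage 2: gradient descent from the GA solution (numerical gradient) ---
    eps = 1e-6
    for _ in range(steps):
        grad = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                         for e in np.eye(dim)])
        w = w - lr * grad
    return w
```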

3.
For filtering problems with nonlinear inequality state constraints, an iterated unscented Kalman filter based on sequential quadratic programming (SQP) is proposed. Building on the iterated unscented Kalman filter, SQP is used to find the optimal solution under the nonlinear inequality constraints: at each iteration a quadratic programming subproblem is solved to determine a descent direction, and this step is repeated until the solution of the original problem is obtained. A merit function balances minimization of the objective against the inequality constraints to guarantee convergence, and a positive-definite approximation of the Hessian matrix reduces the time complexity. Simulations on a constrained route-tracking system show that the algorithm effectively improves state estimation accuracy on filtering problems with nonlinear inequality state constraints, achieving high filtering accuracy with low time complexity.
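The abstract does not give the filter equations; the sketch below shows only the generic constraint-enforcement step used in many constrained Kalman filters, solved with SciPy's SLSQP routine (an SQP implementation), assuming the inequality constraints are written as g_i(x) >= 0:

```python
import numpy as np
from scipy.optimize import minimize

def constrain_estimate(x_hat, P, ineq_constraints):
    """Project an unconstrained filter estimate onto {x : g_i(x) >= 0} in the
    Mahalanobis metric induced by the covariance P. Only the constraint step
    is shown; the iterated UKF update itself is omitted."""
    P_inv = np.linalg.inv(P)
    cost = lambda x: 0.5 * (x - x_hat) @ P_inv @ (x - x_hat)
    cons = [{"type": "ineq", "fun": g} for g in ineq_constraints]
    res = minimize(cost, x0=x_hat, method="SLSQP", constraints=cons)
    return res.x

# Example: keep a 2-D position estimate inside the half-plane x[1] >= 0.
x_proj = constrain_estimate(np.array([1.0, -0.3]),
                            np.diag([0.5, 0.2]),
                            [lambda x: x[1]])
```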

4.
For constrained optimization problems with linear equality and inequality constraints and no explicit objective function form, an optimization method is proposed that combines the gradient projection method with stochastic algorithms such as genetic algorithms and simultaneous perturbation stochastic approximation (SPSA). The genetic algorithm performs the global search and SPSA the local search; at each generation the gradient-projection direction at a parent individual is computed from the linear constraints to generate new individuals, which strictly guarantees that every new individual satisfies all constraints. Applying the algorithm to typical constrained optimization problems, simulation results demonstrate its feasibility and convergence.
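A small illustration of the gradient-projection step, assuming the active linear constraints are collected row-wise in a matrix A (the GA/SPSA machinery around it is omitted):

```python
import numpy as np

def projected_direction(d, A):
    """Project a search direction d onto the null space of the active linear
    constraints A x = b, so a step along the result keeps A x = b satisfied."""
    AAT_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAT_inv @ A
    return P @ d

# Example: with the constraint x1 + x2 + x3 = 1, the projected step keeps the sum fixed.
A = np.array([[1.0, 1.0, 1.0]])
d = np.array([0.9, -0.2, 0.4])
print(projected_direction(d, A))   # components sum to ~0
```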

6.
To avoid singular solutions and improve network performance, a weight initialization method for echo state networks (WIESN) is presented. Using the Cauchy inequality and linear algebra, the range of the optimized initial weights is determined as a function of the input dimension, reservoir dimension, input variables, and reservoir states, which ensures that the neuron outputs lie in the active region of the sigmoid function. Experimental results show that the initialization method outperforms random initialization in accuracy and training time, and that the time spent on weight initialization is negligible compared with the training time.
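An illustrative initialization in the same spirit, using the Cauchy-Schwarz bound |w·x| <= ||w||·||x|| to keep pre-activations inside the active region of tanh/sigmoid units; the exact bound derived in the paper may differ:

```python
import numpy as np

def init_esn_weights(n_in, n_res, input_samples, rng, a_max=2.0):
    """Initialize ESN input and reservoir weights so that pre-activations stay
    roughly inside [-a_max, a_max], the active region of tanh/sigmoid units.
    By Cauchy-Schwarz, bounding the row norms of the weight matrices by
    a_max / max||x|| keeps the activations in range (illustrative bound)."""
    x_norm = max(np.linalg.norm(u) for u in input_samples)
    s_norm = np.sqrt(n_res)          # tanh reservoir states are bounded by 1 per unit
    W_in = rng.uniform(-1, 1, size=(n_res, n_in))
    W_res = rng.uniform(-1, 1, size=(n_res, n_res))
    W_in *= 0.5 * a_max / (np.linalg.norm(W_in, axis=1, keepdims=True) * x_norm)
    W_res *= 0.5 * a_max / (np.linalg.norm(W_res, axis=1, keepdims=True) * s_norm)
    return W_in, W_res
```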

7.
For global optimization of differentiable functions, the steepest descent method, Newton's method, and the penalty function method are incorporated into simulated annealing, yielding an efficient simulated annealing algorithm. The algorithm finds the global optimum of differentiable optimization problems with low computational cost and high efficiency. After a penalty function converts a constrained problem into an unconstrained one, the proposed algorithm can be applied to it as well. Numerical examples show that the algorithm efficiently solves both unconstrained and constrained differentiable global optimization problems.
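A compact sketch of the combination, with a quadratic-penalty wrapper and a few steepest-descent steps used to polish each annealing candidate (Newton refinement and the paper's specific schedules are omitted):

```python
import numpy as np

def penalized(f, ineq, mu=100.0):
    """Quadratic penalty turning a constrained problem (g_i(x) <= 0) into an
    unconstrained one."""
    return lambda x: f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in ineq)

def sa_with_descent(f, grad, x0, rng, T0=1.0, cooling=0.95, iters=200, lr=0.01):
    """Simulated annealing whose candidates are polished by a few steepest-
    descent steps before the usual Metropolis acceptance test."""
    x, fx, T = np.asarray(x0, float), f(x0), T0
    best, fbest = x.copy(), fx
    for _ in range(iters):
        cand = x + rng.normal(0.0, T, size=x.shape)
        for _ in range(5):                       # local steepest-descent polish
            cand = cand - lr * grad(cand)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fc < fbest:
                best, fbest = cand.copy(), fc
        T *= cooling
    return best, fbest
```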

8.
For linear programming problems with inequality constraints whose objective function and constraints contain parameters, a neural network method based on a new smooth exact penalty function is proposed. The error function is used to construct an approximation of the unit step function, giving a smooth penalty function that approximates the L1 exact penalty function more accurately, and its basic properties are discussed. A neural network model for solving the parametric linear programming problem is built on this smooth exact penalty function; its stability and convergence are proved, and detailed algorithm steps are given. Numerical simulations verify that the method has the advantages of small penalty factors, a simple structure, and high computational accuracy.
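One way to build such a smoothing with the error function, shown here for the penalty term max(0, t); the construction is illustrative and not necessarily the paper's exact formula:

```python
import numpy as np
from scipy.special import erf

def smooth_step(t, eps=1e-2):
    """Smooth approximation of the unit step built from the error function."""
    return 0.5 * (1.0 + erf(t / eps))

def smooth_plus(t, eps=1e-2):
    """Smooth approximation of max(0, t); its derivative is smooth_step(t),
    and it converges to the L1 exact-penalty term max(0, t) as eps -> 0."""
    t = np.asarray(t, float)
    return (0.5 * t * (1.0 + erf(t / eps))
            + eps / (2.0 * np.sqrt(np.pi)) * np.exp(-(t / eps) ** 2))

def smooth_l1_penalty(x, c, A, b, rho=10.0, eps=1e-2):
    """Smoothed exact-penalty objective for  min c.x  s.t.  A x <= b
    (illustrative construction of the penalized objective)."""
    return c @ x + rho * np.sum(smooth_plus(A @ x - b, eps))
```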

9.
Optimization of Sintering Burden Based on a Hybrid Particle Swarm Algorithm
By introducing a penalty function and suitably modifying the objective function, a sintering burden optimization algorithm is designed that exploits the global search ability of particle swarm optimization (PSO) and the local search ability of the conjugate gradient method under constraints. The penalty function converts the constrained optimization problem into an unconstrained one, which is then searched by PSO. When the swarm's best solution stagnates, the objective function is suitably modified and the search continues with the conjugate gradient method. Computational results show that the method further reduces production cost while increasing the useful components and reducing the harmful components of the mixture.
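A simplified sketch of the hybrid scheme: PSO runs on the penalized objective and hands off to SciPy's conjugate-gradient routine once the swarm best stagnates; parameter values are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

def pso_then_cg(f_pen, dim, rng, n=30, iters=200, stall=20):
    """Penalized PSO that switches to conjugate gradient when the swarm's best
    value stops improving (a simplified version of the hybrid scheme)."""
    X = rng.uniform(-1, 1, (n, dim))
    V = np.zeros((n, dim))
    Pbest, Pval = X.copy(), np.array([f_pen(x) for x in X])
    g = Pbest[np.argmin(Pval)].copy()
    gval, since_improved = Pval.min(), 0
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        V = 0.7 * V + 1.5 * r1 * (Pbest - X) + 1.5 * r2 * (g - X)
        X = X + V
        vals = np.array([f_pen(x) for x in X])
        better = vals < Pval
        Pbest[better], Pval[better] = X[better], vals[better]
        if Pval.min() < gval - 1e-9:
            gval, g, since_improved = Pval.min(), Pbest[np.argmin(Pval)].copy(), 0
        else:
            since_improved += 1
        if since_improved >= stall:              # swarm stagnated: switch to CG
            return minimize(f_pen, g, method="CG").x
    return g
```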

10.
Nonsmooth pseudoconvex optimization problems are widely used in science and engineering; they form a special class of nonconvex problems and are of significant research interest. For nonsmooth pseudoconvex problems with equality and inequality constraints, this paper proposes a new neurodynamic method that incorporates penalty-function and regularization ideas. An effective penalty function guarantees the boundedness of the proposed neural network's state, which ensures that the state trajectory enters the feasible region in finite time and eventually converges to an optimal solution of the original problem. Two numerical experiments verify the effectiveness of the proposed model. Compared with existing neural networks, the model has the following advantages: it avoids precomputing an exact penalty factor, places no special requirement on the initial point, and has a simple structure.

11.
We present a stochastic approximation algorithm based on the penalty function method and a simultaneous perturbation gradient estimate for solving stochastic optimisation problems with general inequality constraints. We give a general convergence result that applies to a class of penalty functions including the quadratic penalty function, the augmented Lagrangian, and the absolute penalty function. We also establish an asymptotic normality result for the algorithm with smooth penalty functions under minor assumptions. Numerical results are given to compare the performance of the proposed algorithm with different penalty functions.
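A minimal SPSA iteration on a quadratic-penalty objective (one of the penalty functions covered by the stated convergence result); the gain-sequence constants are standard textbook choices, not the paper's:

```python
import numpy as np

def spsa_penalty(f, ineq, x0, rng, a=0.1, c=0.1, iters=500, mu=10.0):
    """SPSA on a quadratic-penalty objective for constraints g_i(x) <= 0."""
    L = lambda x: f(x) + mu * sum(max(0.0, g(x)) ** 2 for g in ineq)
    x = np.asarray(x0, float)
    for k in range(1, iters + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101      # standard SPSA gain schedules
        delta = rng.choice([-1.0, 1.0], size=x.shape) # simultaneous perturbation
        ghat = (L(x + ck * delta) - L(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * ghat
    return x
```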

12.
The paper is concerned with methods for the solution of conventional optimization and mathematical programming problems [1–4] using discontinuous control algorithms. Deliberate introduction of sliding modes [5–7] into the system ensures that the output of the plant to be optimized follows a monotonically decreasing reference input, in the case of minimization, and that the constraints on the plant input parameters hold. This approach eliminates the need for hardware to obtain information on the gradients of the function to be optimized or of the constraint functions. The paper separately treats optimization algorithms without constraints on output parameters and with constraints of the equality and inequality types. The relation between the methods reported in the paper and gradient procedures for the solution of mathematical programming problems using a special kind of penalty function is indicated.

13.
This paper presents an algorithm for the minimization of a nonlinear objective function subject to nonlinear inequality and equality constraints. The proposed method has two distinguishing properties: under weak assumptions it converges to a Kuhn-Tucker point of the problem, and under somewhat stronger assumptions the rate of convergence is quadratic. The method is similar to a recent method proposed by Rosen in that it begins by using a penalty function approach to generate a point in a neighborhood of the optimum and then switches to Robinson's method. The new method has two features not shared by Rosen's method. First, a correct choice of penalty function parameters is constructed automatically, thus guaranteeing global convergence to a stationary point. Second, the linearly constrained subproblems solved by the Robinson method normally contain linear inequality constraints, while for the method presented here only linear equality constraints are required. That is, in a certain sense, the new method “knows” which of the linear inequality constraints will be active in the subproblems. The subproblems may thus be solved in an especially efficient manner. Preliminary computational results are presented.

14.
One of the most important issues in developing an evolutionary optimization algorithm is the proper handling of any constraints on the problem. One must balance the objective function against the degree of constraint violation in such a way that neither is dominant. Common approaches to striking this balance include implementing a penalty function and the more recent stochastic ranking method, but these methods require an extra tuning parameter chosen by the user. The present paper demonstrates that a proper balance can be achieved using an addition-of-ranking method. By ranking the individuals with respect to the objective function and the constraint violation independently, these two properties are converted into numerical values of the same order of magnitude, which removes the requirement of a user-specified penalty coefficient or any other tuning parameter. Direct addition of the ranking terms then integrates all information into a single decision variable. This approach is incorporated into a (μ, λ) evolution strategy and tested on thirteen benchmark problems, one engineering design problem, and five difficult problems with high dimensionality or many constraints. The performance of the proposed strategy is similar to that of the stochastic ranking method when dealing with inequality constraints, but it has a much simpler ranking procedure and does not require any tuning parameters. A percentage-based tolerance-value adjustment scheme is also proposed to enable feasible search when dealing with equality constraints.
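A small sketch of the rank-addition idea, assuming minimization and a scalar violation measure per individual; the selection and (μ, λ) machinery are omitted:

```python
import numpy as np
from scipy.stats import rankdata

def rank_addition_fitness(obj_values, violations):
    """Combine objective value and constraint violation by adding their ranks,
    so neither term needs a penalty coefficient or other tuning parameter."""
    r_obj = rankdata(obj_values)        # rank 1 = smallest (best) objective
    r_vio = rankdata(violations)        # rank 1 = least constraint violation
    return r_obj + r_vio                # select individuals with the smallest sum

# Example: the second candidate (good objective, small violation) gets the best combined rank.
print(rank_addition_fitness([3.0, 1.0, 2.0], [0.0, 0.2, 5.0]))
```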

15.
To address the difficulty of a single echo state network (ESN) in fully describing the information in data, a multiple sparse echo state network prediction model is proposed. By simultaneously learning the combination weights of the component ESNs and the weights of the basis functions obtained from the relevant samples, an optimized combination of multiple sparse ESNs is obtained. Unlike multi-kernel learning models such as the doubly sparse relevance vector machine, the proposed model does not require choosing specific kernel functions or kernel parameters; it therefore describes the data better while avoiding the difficulty of selecting kernels and their parameters in the doubly sparse relevance vector machine and other multi-kernel methods. Moreover, the model does not need cross-validation to determine the spectral radius and sparsity of the ESNs, only the corresponding intervals. Simulation experiments on two benchmark data sets and one real data set verify that the model achieves better prediction performance than the traditional ESN.

16.
In this paper, we consider a distributed resource allocation problem of minimizing a global convex function formed by a sum of local convex functions with coupling constraints. Based on neighbor communication and stochastic gradients, a distributed stochastic mirror descent algorithm is designed for the distributed resource allocation problem. The proposed algorithm is shown to converge sublinearly to an optimal solution when the second moments of the gradient noises are summable. A numerical example is also given to illustrate the effectiveness of the proposed algorithm.
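An illustrative centralized stochastic mirror descent with the entropy mirror map on a total-budget constraint; the distributed, neighbor-communication version of the paper is not reproduced, and all names and step sizes are placeholders:

```python
import numpy as np

def stochastic_mirror_descent(grad_noisy, x0, budget, steps=500, eta0=0.5, rng=None):
    """Entropy-mirror-map (exponentiated-gradient) stochastic mirror descent on
    the resource constraint sum(x) = budget, x >= 0.
    `grad_noisy(x, rng)` returns a noisy gradient of the global cost at x."""
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, float)
    for k in range(1, steps + 1):
        eta = eta0 / np.sqrt(k)                      # diminishing step size
        x = x * np.exp(-eta * grad_noisy(x, rng))    # mirror step (entropy map)
        x = budget * x / x.sum()                     # re-normalize onto the constraint
    return x

# Example: allocate a budget of 1.0 to minimize a sum of quadratic local costs.
costs_grad = lambda x, rng: 2.0 * np.array([1.0, 2.0, 4.0]) * x + rng.normal(0, 0.01, 3)
print(stochastic_mirror_descent(costs_grad, np.ones(3) / 3, budget=1.0))
```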

17.
Path constraints on state variables in dynamic optimization have received some attention internationally, but methods dedicated to path constraints on control variables are rarely reported. This paper first introduces two methods for handling control-variable path constraints, based respectively on a trigonometric transformation and on truncation by a constraint operator, and then proposes a smoothing-based quadratic penalty method. The smoothed penalty method can handle not only control-variable path constraints but also path constraints on the state variables. Finally, the popular control variable parameterization (CVP) strategy is used to solve the resulting dynamic optimization problem, which no longer contains control-variable path constraints. The first test case illustrates the characteristics of the three methods; the second demonstrates the effectiveness and superiority of the smoothed penalty method.
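A sketch of how such a smoothed quadratic penalty could be attached to a CVP objective, assuming piecewise-constant controls on a time grid and an illustrative smoothing of max(0, g); the paper's exact smoothing and weights are not reproduced:

```python
import numpy as np

def smooth_plus(g, eps=1e-6):
    """Smooth approximation of max(0, g) (illustrative choice of smoothing)."""
    return 0.5 * (g + np.sqrt(g * g + eps))

def penalized_cvp_objective(objective, path_constraint, t_grid, rho=1e3):
    """Wrap a CVP objective so that the control-variable path constraint
    g(u(t), t) <= 0 is enforced by a smoothed quadratic penalty.
    `u_params` holds one piecewise-constant control value per interval of
    `t_grid` (i.e. len(u_params) == len(t_grid) - 1)."""
    def J(u_params):
        # evaluate the constraint at the start of each control interval
        g_vals = np.array([path_constraint(u, t)
                           for u, t in zip(u_params, t_grid[:-1])])
        dt = np.diff(t_grid)
        penalty = rho * np.sum(smooth_plus(g_vals) ** 2 * dt)
        return objective(u_params) + penalty
    return J
```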

18.
In addition to the global search ability of the simple genetic algorithm, the immune genetic algorithm offers immune memory, immune regulation, and diversity preservation. Gradient descent trains neural networks slowly, easily falls into local optima, and is sensitive to initial values. Combining the strengths of the two methods, this paper proposes a hybrid training method for RBF networks that couples an immune genetic algorithm with gradient descent; experiments show that the proposed combination is faster and achieves a smaller final error than combining a simple genetic algorithm with gradient descent.

19.
Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
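A sketch of gain constraints implemented as a penalty term, with the gains estimated by central differences rather than the analytic network derivatives used in practice; `model`, `lower`, and `upper` are placeholders:

```python
import numpy as np

def gain_penalty(model, X, lower, upper, eps=1e-4, weight=1.0):
    """Penalty on input-output gains dy/dx_j estimated by central differences,
    for bound constraints lower[j] <= dy/dx_j <= upper[j].
    `model(X)` maps an (n, d) batch to n scalar outputs; add this term to the
    usual training loss and minimize by gradient descent."""
    n, d = X.shape
    penalty = 0.0
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        gains = (model(X + e) - model(X - e)) / (2 * eps)   # dy/dx_j at each sample
        penalty += np.sum(np.maximum(0.0, lower[j] - gains) ** 2)
        penalty += np.sum(np.maximum(0.0, gains - upper[j]) ** 2)
    return weight * penalty
```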

20.
For distributed resource allocation in second-order multi-agent systems, this paper designs two continuous-time algorithms. Based on the Karush-Kuhn-Tucker (KKT) optimality conditions, the first control algorithm uses each node's local inequality constraints and their gradient information to constrain the node states. Unlike that gradient-based method, the second control algorithm combines a consensus gradient descent law with a fixed-time convergent mapping operator: the operator ensures that the node states converge to the local constraint sets in fixed time, while the consensus gradient descent drives the nodes to the optimal solution of the resource allocation problem. Neither algorithm imposes constraints on the initial states, and all control parameters are constants. Using convex optimization theory and fixed-time Lyapunov methods, the asymptotic and exponential convergence of the two strategies is analyzed for directed balanced networks. Finally, numerical simulations verify the effectiveness of the designed algorithms on one-dimensional and high-dimensional resource allocation problems.
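The paper's second-order, fixed-time design is not reproduced here; the sketch below shows only the classic first-order consensus-on-marginal-costs dynamics for the same kind of resource allocation problem, which preserves the total resource and equalizes marginal costs (the KKT condition for the equality-coupled problem):

```python
import numpy as np

# min sum_i f_i(x_i)  s.t.  sum_i x_i = D, solved by the dynamics
#   x_i' = sum_j a_ij (f_j'(x_j) - f_i'(x_i)),
# which keeps sum(x) constant and drives all marginal costs to a common value.
a = np.array([1.0, 2.0, 4.0])                             # local costs f_i(x) = a_i * x^2
grad = lambda x: 2.0 * a * x
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)    # undirected line graph

x = np.array([3.0, 0.0, 0.0])                             # initial allocation, sum = D = 3
dt = 0.01
for _ in range(5000):                                     # forward-Euler integration
    g = grad(x)
    x = x + dt * (A @ g - A.sum(axis=1) * g)              # sum_j a_ij (g_j - g_i)
print(x, x.sum())                                         # marginal costs equalize; sum stays 3
```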

