Similar Documents
18 similar records found
1.
Nonsmooth pseudoconvex optimization problems arise widely in science and engineering; they form a special class of nonconvex optimization problems and are of significant research interest. For nonsmooth pseudoconvex optimization problems with both equality and inequality constraints, this paper proposes a new neurodynamic approach that incorporates penalty-function and regularization ideas. An effective penalty function guarantees boundedness of the state of the proposed neural network, which ensures that the state solution enters the feasible region in finite time and ultimately converges to an optimal solution of the original problem. Two numerical experiments verify the effectiveness of the proposed model. Compared with existing neural networks, the model has the following advantages: it avoids computing an exact penalty factor in advance, it places no special requirements on the initial point, and it has a simple structure.

2.
Nonsmooth nonconvex optimization problems with linear inequality constraints are widely used in sparse optimization and are of significant research value. To solve this class of problems, a neural network model based on smoothing techniques and differential-inclusion theory is proposed. Theoretical analysis proves that the state solution of the proposed network exists globally, that its trajectory enters the feasible region in finite time and remains there permanently, and that every accumulation point is a generalized stationary point of the target optimization problem. Numerical experiments and an image-restoration experiment verify the network's effectiveness in both theory and application. Compared with existing neural networks, it has the following advantages: the initial point can be chosen arbitrarily; computation of an exact penalty factor is avoided; and no complicated projection operator needs to be evaluated.

3.
A neural network model is proposed for solving a class of nonsmooth nonconvex optimization problems with equality and inequality constraints. It is proven that when the objective function is bounded below, the solution trajectory of the network converges to the feasible region in finite time. Moreover, the equilibrium set of the network coincides with the critical-point set of the optimization problem, and the network ultimately converges to that critical-point set. Unlike traditional penalty-function-based neural network models, the proposed model requires no computation of a penalty factor. Simulation experiments verify the effectiveness of the proposed model.

4.
For linear programming problems with inequality constraints in which both the objective function and the constraints contain parameters, a neural network method based on a new smooth exact penalty function is proposed. An error function is introduced to construct an approximation of the unit step function, yielding a smooth penalty function that approximates the L1 exact penalty function more accurately; its basic properties are discussed. Using the proposed smooth exact penalty function, a neural network model for solving parametric linear programming problems is established, its stability and convergence are proven, and detailed algorithmic steps are given. Numerical simulations verify that the method enjoys a small penalty factor, a simple structure, and high computational accuracy.
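The abstract above does not give the exact smoothing function, but the idea of approximating the unit step by an error-function expression, and hence max(0, t) by its smooth integral, can be sketched as follows; the Gaussian-smoothed plus function and the helper names below are illustrative assumptions, not the paper's formulas.

```python
import math

def smooth_step(t, eps=0.1):
    """Erf-based approximation of the unit step; tends to 1(t > 0) as eps -> 0."""
    return 0.5 * (1.0 + math.erf(t / (eps * math.sqrt(2.0))))

def smooth_plus(t, eps=0.1):
    """Smooth approximation of max(0, t); its derivative is smooth_step."""
    return (t * smooth_step(t, eps)
            + eps / math.sqrt(2.0 * math.pi) * math.exp(-t * t / (2.0 * eps * eps)))

def smooth_l1_penalty(x, f, gs, rho, eps=0.1):
    """Smoothed L1 exact penalty f(x) + rho * sum_i max(0, g_i(x)) for g_i(x) <= 0."""
    return f(x) + rho * sum(smooth_plus(g(x), eps) for g in gs)
```

As eps shrinks, `smooth_plus` approaches the nonsmooth plus function, recovering the classical L1 exact penalty while keeping the dynamics differentiable.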

5.
A recurrent neural network approach to nonsmooth pseudoconvex optimization problems
For optimization problems whose objective is a nonsmooth pseudoconvex function subject to equality and inequality constraints, a new single-layer recurrent neural network model containing no penalty operator is constructed based on penalty functions and differential inclusions. The model requires no penalty parameter to be computed in advance and converges well. It is proven that a global solution exists, that the state solution enters the feasible region of the original objective in finite time and never leaves it, and that the state solution ultimately converges to an optimal solution of the objective function. Simulation experiments confirm the feasibility of the theoretical results.

6.
To find optimal solutions of constrained optimization problems whose objective is a non-Lipschitz function and whose feasible region is defined by linear or nonlinear inequality constraints, a smoothed neural network model is constructed. The model is built by applying a smoothing-approximation technique that converts the nonsmooth objective into a corresponding smooth function, combined with a penalty-function method. Detailed theoretical analysis proves that, whether the initial point lies inside or outside the feasible region, the solutions of the smoothed network are uniformly bounded and global, and that every accumulation point of the network is a stationary point of the original optimization problem. Several simple simulation experiments confirm the correctness of the theory.

7.
For nonlinear programming problems whose constraints contain parameters, a new computational method based on an L1 exact-penalty-function neural network is proposed. The penalty factor of the method is a finite real number with a small value, which facilitates hardware implementation. Building on improvements to existing network models, the dynamical equations of the neural network are constructed from the steepest-descent principle, and concrete steps for applying the proposed network to optimization computation are given. Finally, numerical examples show that the method converges to the optimal solution of the original programming problem faster and more accurately.

8.
A new penalty-function model for solving constrained optimization problems
The penalty-function method is one of the most common techniques in evolutionary algorithms for handling constrained optimization problems: by penalizing infeasible solutions, it gradually drives the search into the feasible region. The penalty function is usually defined as the sum of the objective function and a penalty term. This model has two drawbacks: its penalty factor is hard to control, and when the objective value and the penalty value differ greatly in magnitude, the model cannot effectively distinguish feasible from infeasible solutions and therefore cannot handle constraints effectively. To overcome these shortcomings, an objective-satisfaction function and a constraint-satisfaction function are first introduced: the former measures how satisfactory a solution is with respect to the objective function, the latter with respect to its degree of constraint violation. Combining the two yields a new penalty function and hence a new penalty-function model. An adaptive dynamic penalty factor is set, which changes with the quality of the current population and the generation number and is therefore easy to control. New crossover and mutation operators are further designed, and on this basis a new evolutionary algorithm for constrained optimization is proposed. Simulation experiments on six standard benchmark functions show that the proposed algorithm is highly effective.
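The satisfaction-based combination described above can be sketched in a few lines; the min-max normalisation and the generation-dependent weight are illustrative assumptions, since the abstract does not specify the exact satisfaction functions or the adaptive schedule.

```python
def satisfaction(values):
    """Map raw values to [0, 1]; a smaller raw value gives higher satisfaction."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(hi - v) / (hi - lo) for v in values]

def penalized_fitness(objs, violations, gen, max_gen):
    """Combine objective satisfaction and constraint satisfaction with an
    adaptive weight on constraints that grows with the generation number."""
    obj_sat = satisfaction(objs)        # satisfaction w.r.t. objective values
    con_sat = satisfaction(violations)  # satisfaction w.r.t. violation degrees
    w = 0.5 + 0.5 * gen / max_gen       # adaptive penalty weight in [0.5, 1]
    return [(1 - w) * o + w * c for o, c in zip(obj_sat, con_sat)]
```

Because both terms live in [0, 1], the combined fitness stays comparable even when raw objective and violation values differ by orders of magnitude, which is exactly the failure mode of the plain sum that the abstract criticises.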

9.
An improved algorithm for logistics-center location and its Hopfield neural network design
Based on an analysis of traditional algorithms for logistics-center location, a new location model is introduced that reduces the number of decision variables and constraints. Using this model, a Hopfield neural network is designed in which the constraints are merged into the network structure, eliminating the penalty terms from the energy function and significantly reducing the network's running time. This provides a new method for logistics-center location optimization.

10.
A class of generalized nonlinear neural network models based on exact penalty functions
A quadratic nonlinear penalty function is defined for general nonlinear optimization problems, and an exact-penalty theorem for the corresponding penalized problem is proven under certain conditions. On this basis, a generalized nonlinear neural network model is introduced, the relation between the network's equilibrium points and its energy function is established, and the equilibrium points are shown to converge to optimal solutions of the original problem under certain conditions. This neural network model plays an important role in solving many optimization problems.

11.
In this paper we discuss a neural network approach to the allocation problem with capacity constraints, which can be formulated as a zero-one integer programming problem. We transform this zero-one integer program into an equivalent nonlinear programming problem by replacing the zero-one constraints with quadratic concave equality constraints. We propose two kinds of neural network structures, based on the penalty-function method and the augmented-Lagrangian multiplier method respectively, and compare them through theoretical analysis and numerical simulation. We show that the penalty-function-based neural network is poorly suited to combinatorial optimization because it faces a dilemma between terminating at an infeasible solution and getting stuck at an arbitrary feasible one, whereas the augmented-Lagrangian-based network alleviates this difficulty to some degree.
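The reformulation mentioned above is simple to state concretely: a zero-one constraint x_i ∈ {0, 1} is equivalent to the quadratic concave equality x_i(1 − x_i) = 0 together with the box constraint 0 ≤ x_i ≤ 1. A minimal sketch (the function names are illustrative):

```python
def binary_residual(x):
    """Residuals of the quadratic equality constraints x_i * (1 - x_i) = 0;
    all residuals vanish iff every component of x is exactly 0 or 1."""
    return [xi * (1.0 - xi) for xi in x]

def is_binary(x, tol=1e-9):
    """Check the box constraints and the quadratic equality residuals."""
    return (all(0.0 <= xi <= 1.0 for xi in x)
            and all(abs(r) <= tol for r in binary_residual(x)))
```

The concavity of x(1 − x) on [0, 1] is what makes the continuous relaxation push solutions toward the vertices of the hypercube.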

12.
This paper presents a continuous-time recurrent neural-network model for nonlinear optimization with any continuously differentiable objective function and bound constraints. Quadratic optimization with bound constraints is a special case which the recurrent neural network can solve. The proposed recurrent neural network has the following characteristics. 1) It is regular in the sense that any optimum of the objective function with bound constraints is also an equilibrium point of the neural network. If the objective function to be minimized is convex, then the recurrent neural network is complete in the sense that the set of optima of the function with bound constraints coincides with the set of equilibria of the neural network. 2) The recurrent neural network is primal and quasiconvergent in the sense that its trajectory cannot escape from the feasible region and will converge to the set of equilibria of the neural network for any initial point in the feasible bound region. 3) The recurrent neural network has an attractivity property in the sense that its trajectory will eventually converge to the feasible region even for initial states outside the bounded feasible region. 4) For minimizing any strictly convex quadratic objective function subject to bound constraints, the recurrent neural network is globally exponentially stable for almost any positive network parameters. Simulation results demonstrate the convergence and performance of the proposed recurrent neural network for nonlinear optimization with bound constraints.
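A hedged sketch of a projection-type recurrent network for bound-constrained minimisation, in the spirit of the model above: the dynamics dx/dt = −x + P(x − α∇f(x)), with P the componentwise projection (clipping) onto [lo, hi], is a standard form for such networks, but the paper's exact model may differ; all names below are illustrative.

```python
def clip(x, lo, hi):
    """Componentwise projection onto the box [lo, hi]."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def simulate(grad_f, x, lo, hi, alpha=0.5, dt=0.01, steps=5000):
    """Forward-Euler integration of dx/dt = -x + clip(x - alpha * grad_f(x))."""
    for _ in range(steps):
        p = clip([xi - alpha * gi for xi, gi in zip(x, grad_f(x))], lo, hi)
        x = [xi + dt * (pi - xi) for xi, pi in zip(x, p)]
    return x

# Example: minimise f(x) = (x0 - 2)^2 + (x1 + 1)^2 over [0, 1] x [0, 1];
# the unconstrained optimum (2, -1) projects to the boundary point (1, 0).
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]
x_star = simulate(grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
```

At an equilibrium, x = clip(x − α∇f(x)), which is precisely the first-order optimality condition for the bound-constrained problem, matching property 1) above.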

13.
In this paper, a time-varying two-phase (TVTP) optimization neural network is proposed based on the two-phase neural network and the time-varying programming neural network. The proposed TVTP algorithm gives exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. It can be applied to system identification and control problems in which the learning of the neural network is subject to constraints on the weights. To demonstrate its effectiveness and applicability, the proposed algorithm is applied to the learning of a neo-fuzzy neuron model.

14.
Neural network for quadratic optimization with bound constraints
A recurrent neural network is presented which performs quadratic optimization subject to bound constraints on each of the optimization variables. The network is shown to be globally convergent, and conditions on the quadratic problem and the network parameters are established under which exponential asymptotic stability is achieved. Through suitable choice of the network parameters, the system of differential equations governing the network activations is preconditioned in order to reduce its sensitivity to noise and to roundoff errors. The optimization method employed by the neural network is shown to fall into the general class of gradient methods for constrained nonlinear optimization and, in contrast with penalty function methods, is guaranteed to yield only feasible solutions.

15.
A new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) so that E(x, y) is convex and differentiable, making the resulting network more efficient. This network incorporates all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming and quadratic programming (QP) problems with either a unique solution or infinitely many solutions, we prove rigorously that for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. Unlike existing networks based on the penalty method or the Lagrange method, the proposed network handles inequality constraints properly. The simulation results show that the proposed neural network is feasible and efficient.

16.
An adaptive penalty strategy and its application to traffic-signal optimization
For solving constrained optimization problems, an adaptive penalty strategy for handling constraints is designed, which converts an optimization problem with inequality and equality constraints into one containing only upper and lower bounds on the decision variables. The strategy introduces the concepts of constraint-feasibility measure and degree of feasibility to describe how well the decision variables obey the inequality and equality constraints, and uses them to construct an adaptive penalty function whose penalty value changes dynamically and adaptively with the degree of constraint feasibility. To test the effectiveness of the strategy, it is applied to a single-intersection traffic-signal optimization problem, and extensive simulations with three different algorithms show that the designed adaptive strategy performs well on highly constrained urban traffic-signal optimization problems.

17.
Self-organizing adaptive penalty strategy in constrained genetic search
This research aims to develop an effective and robust self-organizing adaptive penalty strategy for genetic algorithms to handle constrained optimization problems without the need to search for appropriate values of penalty factors for the given optimization problem. The proposed strategy is based on the idea that the constrained optimal design is almost always located at the boundary between feasible and infeasible domains. This adaptive penalty strategy automatically adjusts the value of the penalty parameter used for each of the constraints according to the ratio between the number of designs violating the specific constraint and the number of designs satisfying the constraint. The goal is to maintain equal numbers of designs on each side of the constraint boundary so that the chance of locating their offspring designs around the boundary is maximized. The new penalty function is self-defining and no parameters need to be adjusted for objective and constraint functions in any given problem. This penalty strategy is tested and compared with other known penalty function methods in mathematical and structural optimization problems, with favorable results.  相似文献   
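The per-constraint update described above, scaling each penalty factor by the ratio of violating to satisfying designs, can be sketched as follows; the multiplicative rule and the clamping are illustrative assumptions, since the abstract does not give the exact update formula.

```python
def update_penalties(penalties, violation_table):
    """Self-organising penalty update: violation_table[i][j] is True if
    design i violates constraint j. Each factor is scaled by the ratio of
    violating to satisfying designs, steering the population toward equal
    numbers on both sides of each constraint boundary."""
    n_designs = len(violation_table)
    updated = []
    for j, r in enumerate(penalties):
        violating = sum(1 for row in violation_table if row[j])
        satisfying = n_designs - violating
        ratio = violating / max(satisfying, 1)  # avoid division by zero
        updated.append(max(r * ratio, 1e-6))    # keep the factor positive
    return updated
```

When half of the population violates a constraint, its factor is unchanged; an over-violated constraint is penalized more heavily, and a constraint nobody violates has its factor shrunk, concentrating offspring near the feasible boundary.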

18.
A higher-order version of the Hopfield neural network is presented which performs a simple vector quantisation or clustering function. In contrast to the usual model, whose energy involves only terms quadratic in the state vector, this model requires no penalty terms to impose constraints in the Hopfield energy. The energy function is shown to have no local minima within the unit hypercube of the state vector, so the network converges only to valid final states. Optimisation trials show that the network can consistently find optimal clusterings for small trial problems and near-optimal ones for a large data set consisting of the intensity values from a digitised grey-level image.
