Similar Documents (20 results found)
1.
The filled function method is an effective approach to the global optimization of multivariable, multi-extremal functions; its key step is the construction of the filled function. Following the idea of reference [1], this paper considers the optimization problem min f(x), x ∈ R^n, where f(x) is locally Lipschitz continuous, and constructs a simple single filled function. It is easy to show that, compared with traditional filled functions, this one retains its filling property even for small parameter values and converges quickly to the global optimum. Based on this filled function, a filled function algorithm for unconstrained optimization is proposed; numerical experiments on four benchmark test functions show that the method is effective.
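The two-phase loop (local descent, then minimization of an auxiliary filled function to escape the basin) can be sketched in one dimension. This is a minimal illustration under stated assumptions: the quartic test function, the crude step-descent routine, and the Ge-style filled function with parameters `r` and `rho` are all choices made for the demo, not the single-parameter construction of the cited paper.

```python
import math

def f(x):
    # 1-D multimodal test function: a shallow local minimum near x ~ 1.90
    # (f ~ -10.1) and the global minimum near x ~ -2.09 (f ~ -22.1)
    return x**4 - 8.0*x**2 + 3.0*x

def descend(fn, x, lo=-3.0, hi=3.0, step=1e-3, iters=200000):
    # crude derivative-free step descent, clamped to [lo, hi]
    fx = fn(x)
    for _ in range(iters):
        for d in (step, -step):
            y = min(hi, max(lo, x + d))
            if fn(y) < fx:
                x, fx = y, fn(y)
                break
        else:
            break
    return x, fx

def filled(x, x1, r=30.0, rho=4.0):
    # classical Ge-style filled function (illustrative parameter values):
    # decreases away from the known minimizer x1 while f(x) stays high
    return math.exp(-(x - x1)**2 / rho) / (r + f(x))

x1, f1 = descend(f, 2.0)                     # phase 1: a local minimum of f
best_x, best_f = x1, f1
for _ in range(5):                           # alternate fill / minimize phases
    improved = False
    for d in (0.25, -0.25):
        aux = lambda x, c=best_x: filled(x, c)
        y, _ = descend(aux, best_x + d)      # slide out of the current basin
        x2, f2 = descend(f, y)               # phase 2: restart local descent
        if f2 < best_f - 1e-8:
            best_x, best_f, improved = x2, f2, True
    if not improved:
        break
```

Starting from the shallow basin on the right, minimizing the filled function carries the search past the hump, and restarting the local descent from that point reaches the global minimum.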

2.
刘杰  王宇平 《软件学报》2013,24(10):2267-2274
To find all local minima of box-constrained nonlinear global optimization problems, a new algorithm based on the Multistart method is proposed. Using the total variation, descent rate, and convexity information of the objective function over the feasible region, a G-measure is constructed to characterize the distribution of local minima. The feasible region is partitioned into subregions, and initial points are allocated to each subregion in proportion to its G-measure, so that regions dense in local minima receive more starting points; a criterion for identifying effective initial points is given to further reduce the number of local-search runs. Since the G-measure is expensive to compute, an approximation scheme is designed to lower the cost. Four test functions of 2 to 10 dimensions with many local minima were solved; comparison with the Multistart and Minfinder algorithms shows that the method substantially improves both convergence speed and the ability to locate all local minima.
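The allocation idea can be roughly sketched as follows. Note the assumptions: the paper's G-measure is replaced here by a crude total-variation proxy, a simple step descent stands in for the local optimizer, and the test function and all names are invented for the demo.

```python
import math

def fn(x):
    # multimodal test function on [-3, 3] with minima near x ~ +/-1.05
    return math.cos(3.0 * x) + 0.1 * x * x

def descend(f, x, lo=-3.0, hi=3.0, step=1e-3, iters=100000):
    # crude clamped step descent used as the local search
    fx = f(x)
    for _ in range(iters):
        for d in (step, -step):
            y = min(hi, max(lo, x + d))
            if f(y) < fx:
                x, fx = y, f(y)
                break
        else:
            break
    return x, fx

def g_measure(f, a, b, samples=50):
    # stand-in for the paper's G-measure: total variation of f over [a, b],
    # a proxy for how rich the block is in local minima
    xs = [a + (b - a) * i / samples for i in range(samples + 1)]
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(samples))

def multistart(f, lo, hi, blocks=6, budget=30):
    edges = [lo + (hi - lo) * i / blocks for i in range(blocks + 1)]
    w = [g_measure(f, edges[i], edges[i + 1]) for i in range(blocks)]
    total = sum(w)
    minima = []
    for i in range(blocks):
        n = max(1, round(budget * w[i] / total))   # starts proportional to measure
        for k in range(n):
            x0 = edges[i] + (edges[i + 1] - edges[i]) * (k + 0.5) / n
            x, fx = descend(f, x0, lo, hi)
            if all(abs(x - m) > 1e-2 for m, _ in minima):
                minima.append((x, fx))             # keep distinct minima only
    return sorted(minima)

minima = multistart(fn, -3.0, 3.0)
```

Blocks whose objective oscillates more receive more starting points, so the deduplicated list of minima covers the densely multimodal regions first.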

3.
This paper studies a class of distributed optimization problems whose goal is to minimize, through local information exchange, a global cost function formed as the sum of local cost functions. For undirected connected graphs, two distributed optimization algorithms based on a proportional-integral strategy are proposed. Under the condition that the local cost functions are differentiable and convex, the proposed algorithms are proved to converge asymptotically to the global minimizer. Furthermore, under the conditions that the local cost functions have locally Lipschitz gradients and the global cost function is … [abstract truncated]

4.
Many real-life optimization problems exhibit a high degree of nonsmoothness (many local minima), which can prevent a search algorithm from moving toward the global solution. Evolution-based algorithms try to deal with this issue. The algorithm proposed in this paper, called GAAPI, is a hybridization of two optimization techniques: API, a special class of ant colony optimization for continuous domains, and a genetic algorithm (GA). The algorithm adopts the downhill behavior of API (a key characteristic of optimization algorithms) and the good spreading of the GA in the solution space. A probabilistic approach and an empirical comparison study are presented to prove the convergence of the proposed method on different classes of complex global continuous optimization problems. Numerical results are reported and compared with existing results in the literature to validate the feasibility and effectiveness of the proposed method. The proposed algorithm is shown to be effective and efficient for most of the test functions.

5.
A fuzzy particle swarm optimization algorithm based on transformation and filled functions
This paper proposes a fuzzy particle swarm optimization algorithm based on a transformation function and a filled function (fuzzy particle swarm optimization based on filled function and transformation function, FPSO-TF). Building on a multi-loop fuzzy control system with different membership functions, and further combining the transformation function with the filled function, the algorithm is less likely to become trapped at a local optimum, can escape from a local minimizer to a lower point, and searches for the global optimum quickly and efficiently. Finally, the algorithm is tested on benchmark functions and compared with several improved algorithms of different types, verifying its effectiveness and superiority.

6.
A chaos optimization method based on line search is proposed. It exploits the intrinsic randomness and ergodicity of chaotic variables to escape from local optima, while the line search improves the speed and accuracy of the search in local regions. Combined with an exact nondifferentiable penalty function, the method solves nonlinear constrained optimization problems. Simulation results show that the algorithm is simple to implement and offers high solution accuracy, convergence speed, and reliability, making it an effective approach to optimization problems.
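The two stages (a chaotic global sweep, then a local line search from the incumbent) can be sketched as below. The logistic-map carrier, the two-basin test function, and the shrinking-step line search are illustrative assumptions, not the paper's exact construction, and the penalty-function handling of constraints is omitted.

```python
import math

def f(x):
    # two-basin test function: global minimum ~ -0.305 near x ~ -1.04
    return (x * x - 1.0)**2 + 0.3 * x

def chaos_search(fn, lo, hi, n=500, z=0.345):
    # logistic map z_{k+1} = 4 z_k (1 - z_k) is ergodic over (0, 1);
    # the chaotic variable is mapped onto [lo, hi] as a carrier
    best_x, best_f = None, float("inf")
    for _ in range(n):
        z = 4.0 * z * (1.0 - z)
        x = lo + (hi - lo) * z
        if fn(x) < best_f:
            best_x, best_f = x, fn(x)
    return best_x, best_f

def line_refine(fn, x, step=0.1, tol=1e-9):
    # simple shrinking-step line search around the incumbent point
    fx = fn(x)
    while step > tol:
        for d in (step, -step):
            if fn(x + d) < fx:
                x, fx = x + d, fn(x + d)
                break
        else:
            step *= 0.5
    return x, fx

x0, f0 = chaos_search(f, -2.0, 2.0)   # global sweep via the chaotic carrier
best_x, best_f = line_refine(f, x0)   # local refinement by line search
```

The chaotic sweep lands in the deeper basin, and the line search then sharpens the estimate to high precision.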

7.
Many solutions in geotechnical problems are the result of optimization analysis. There are many practical engineering problems where the objective function is non-convex and discontinuous, with multiple strong local minima, and classical optimization methods may become trapped in a local minimum during the analysis. In this paper, a coupled optimization method is proposed for such difficult cases. The mixed optimization algorithm can take advantage of different algorithms, and the proposed algorithm is demonstrated to be effective and efficient in solving a very complicated hydropower problem with a high level of confidence. The proposed method can further be applied to other kinds of difficult engineering problems.

8.
Training a neural network is a difficult optimization problem because of numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms are more efficient with computational resources, and therefore numerous random restarts with a local algorithm may be more effective than a global algorithm. This study uses Monte-Carlo simulations to determine the efficiency of a local search algorithm relative to nine stochastic global algorithms when using a neural network on function approximation problems. The computational requirements of the global algorithms are several times higher than the local algorithm and there is little gain in using the global algorithms to train neural networks. Since the global algorithms only marginally outperform the local algorithm in obtaining a lower local minimum and they require more computational resources, the results in this study indicate that with respect to the specific algorithms and function approximation problems studied, there is little evidence to show that a global algorithm should be used over a more traditional local optimization routine for training neural networks. Further, neural networks should not be estimated from a single set of starting values whether a global or local optimization method is used.

9.
The filled function method is an effective approach to nonlinear global optimization. For unconstrained problems whose objective function and gradient are Lipschitz continuous, a new continuously differentiable single-parameter filled function is proposed and its properties are studied. Finally, a filled function algorithm is given; numerical experiments show that the filled function is effective and the algorithm is feasible.

10.
Stochastic optimization algorithms like genetic algorithms (GAs) and particle swarm optimization (PSO) algorithms perform global optimization but waste computational effort by doing a random search. On the other hand, deterministic algorithms like gradient descent converge rapidly but may get stuck in local minima of multimodal functions. Thus, an approach that combines the strengths of stochastic and deterministic optimization schemes but avoids their weaknesses is of interest. This paper presents a new hybrid optimization algorithm that combines the PSO algorithm and gradient-based local search algorithms to achieve faster convergence and better accuracy of the final solution without getting trapped in local minima. In the new gradient-based PSO algorithm, referred to as the GPSO algorithm, the PSO algorithm is used for global exploration and a gradient-based scheme is used for accurate local exploration. The global minimum is located by a process of finding progressively better local minima. The GPSO algorithm avoids the use of inertial weights and constriction coefficients, which can cause the PSO algorithm to converge to a local minimum if improperly chosen. The De Jong test suite of benchmark optimization problems was used to test the new algorithm and facilitate comparison with the classical PSO algorithm. The GPSO algorithm is compared to four different refinements of the PSO algorithm from the literature and shown to converge faster to a significantly more accurate final solution for a variety of benchmark test functions.
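The explore-then-exploit structure can be sketched with a minimal global-best PSO followed by finite-difference gradient descent. This is a generic sketch, not the paper's GPSO: the swarm coefficients (including an inertia weight, which GPSO itself avoids), the one-dimensional quartic test function, and the backtracking step rule are all assumptions for the demo.

```python
import random

def f(x):
    # two-basin quartic: local minimum ~ -10.1 near x ~ 1.90,
    # global minimum ~ -22.1 near x ~ -2.09
    return x**4 - 8.0*x**2 + 3.0*x

def pso(fn, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    # minimal global-best PSO for the exploration phase
    rnd = random.Random(seed)
    xs = [rnd.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pb, pf = xs[:], [fn(x) for x in xs]
    gi = min(range(n), key=lambda i: pf[i])
    gx, gf = pb[gi], pf[gi]
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i] + c1 * rnd.random() * (pb[i] - xs[i])
                     + c2 * rnd.random() * (gx - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            fx = fn(xs[i])
            if fx < pf[i]:
                pb[i], pf[i] = xs[i], fx
            if fx < gf:
                gx, gf = xs[i], fx
    return gx, gf

def grad_refine(fn, x, iters=200, h=1e-6):
    # local exploitation: finite-difference gradient descent with a
    # backtracking step, so every accepted move strictly decreases fn
    for _ in range(iters):
        g = (fn(x + h) - fn(x - h)) / (2.0 * h)
        step = 0.1
        while step > 1e-12 and fn(x - step * g) >= fn(x):
            step *= 0.5
        x = x - step * g
    return x, fn(x)

gx, gf = pso(f, -3.0, 3.0)            # global exploration
best_x, best_f = grad_refine(f, gx)   # accurate local exploration
```

The swarm supplies a good starting basin; the gradient phase then converges far faster and more precisely than the swarm alone would.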

11.
A new evolutionary algorithm for global optimization
Evolutionary algorithms often become trapped at local optima when solving large, complex multi-extremal problems. This paper proposes an objective-function transformation method to eliminate premature convergence. When the evolutionary algorithm detects a local optimum, a filled function is used to construct a transformed objective function that lifts the local minimizer and its neighborhood while preserving the global minimizer. The new method thus removes local optima while retaining the global optimum. Experiments on complex unconstrained and constrained optimization problems show that the new method performs well in searching for the global optimum.

12.
During past decades, the role of optimization has steadily increased in many fields. It is a hot topic in research on control theory. In practice, optimization problems become more and more complex, and traditional algorithms cannot solve them satisfactorily: either they are trapped in local minima or they need far more search time. Chaos often exists in nonlinear systems. It has many good properties such as ergodicity, stochasticity, and "regularity." A chaotic motion can pass nonrepeatedly through every state in a certain domain. By using these properties of chaos, an effective optimization method is proposed: the chaos optimization algorithm (COA). With chaos search, some complex optimization problems are solved very well. The test results illustrate that the efficiency of COA is much higher than that of some stochastic algorithms, such as the simulated annealing algorithm (SAA) and the chemotaxis algorithm (CA), which are often used to optimize complex problems. The chaos optimization method provides a new and efficient way to optimize various kinds of complex problems with continuous variables.

13.
A memory-based simulated annealing algorithm is proposed which fundamentally differs from the previously developed simulated annealing algorithms for continuous variables by the fact that a set of points rather than a single working point is used. The implementation of the new method does not need differentiability properties of the function being optimized. The method is well tested on a range of problems classified as easy, moderately difficult and difficult. The new algorithm is compared with other simulated annealing methods on both test problems and practical problems. Results showing an improved performance in finding the global minimum are given.

Scope and purpose: The inherent difficulty of global optimization problems lies in finding the very best optimum (maximum or minimum) from a multitude of local optima. Many practical global optimization problems of continuous variables are non-differentiable and noisy and even the function evaluation may involve simulation of some process. For such optimization problems direct search approaches are the methods of choice. Simulated annealing is a stochastic global optimization algorithm, initially designed for combinatorial (discrete) optimization problems. The algorithm that we propose here is a simulated annealing algorithm for optimization problems involving continuous variables. It is a direct search method. The strengths of the new algorithm are: it does not require differentiability or any other properties of the function being optimized and it is memory-based. Therefore, the algorithm can be applied to noisy and/or not exactly known functions. Although the algorithm is stochastic in nature, it can memorise the best solution. The new simulated annealing algorithm has been shown to be reliable, fast, general purpose and efficient for solving some difficult global optimization problems.
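A single-point continuous simulated annealing loop with a best-so-far "memory" can be sketched as below. Note the assumptions: this keeps one working point rather than the paper's set of points, and the quartic test function, Gaussian proposal, and linear cooling schedule are illustrative choices.

```python
import math
import random

def f(x):
    # two-basin quartic: local minimum ~ -10.1 near x ~ 1.90,
    # global minimum ~ -22.1 near x ~ -2.09
    return x**4 - 8.0*x**2 + 3.0*x

def anneal(fn, lo, hi, iters=20000, t0=5.0, sigma=0.5, seed=3):
    rnd = random.Random(seed)
    x = rnd.uniform(lo, hi)
    fx = fn(x)
    best_x, best_f = x, fx                 # the "memory": best point ever seen
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-12   # linear cooling toward zero
        y = min(hi, max(lo, x + rnd.gauss(0.0, sigma)))
        fy = fn(y)
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if fy < fx or rnd.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

best_x, best_f = anneal(f, -3.0, 3.0)
```

Because accepted uphill moves never overwrite the stored best solution, the stochastic walk can keep exploring while the memory guarantees the returned answer is monotonically improving.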

14.
杨涛  常怡然  张坤朋  徐磊 《控制与决策》2023,38(8):2364-2374
This paper considers a class of distributed optimization problems whose goal is to minimize, through local information exchange, a global cost function formed as the sum of local cost functions. By introducing a time-base generator (TBG), two distributed proportional-integral (PI) optimization algorithms with prescribed-time convergence are proposed. Compared with existing distributed optimization algorithms based on finite-/fixed-time convergence, the convergence time of the proposed algorithms does not depend on the system's initial values or parameters and can be preassigned arbitrarily. Moreover, under the conditions that the global cost function is strongly convex with respect to the optimal point and the local cost functions are differentiable convex functions with locally Lipschitz gradients, Lyapunov theory is used to prove that both algorithms achieve prescribed-time convergence. Finally, numerical simulations verify the effectiveness of the proposed algorithms.

15.
胡劲松  郑启伦 《计算机学报》2012,35(2):2193-2201
A new optimization algorithm, the ball-gap migration method, is presented. It is not a fusion or refinement of existing methods: it uses the distribution of local minima accumulated during the search to form ball gaps, which then inspire and guide subsequent search regions, so the search not only escapes the current local minimum but also avoids revisiting multiple earlier local minima. In current intelligent algorithms, exploration and exploitation are coupled; the ball-gap method separates them, avoiding mutual interference, reducing cost, and performing well on objectives with coupled variables. The paper proves that the ball-gap method deterministically finds the global optimum of a continuous function within a finite number of evaluations.

16.
Research on gradient algorithms for neural-network reinforcement learning
徐昕  贺汉根 《计算机学报》2003,26(2):227-233
For Markov decision problems with continuous state spaces and discrete action spaces, a new gradient-descent reinforcement learning algorithm using multilayer feedforward neural networks for value-function approximation is proposed. The algorithm adopts an approximately greedy, continuously differentiable Boltzmann-distribution action-selection policy and approximates the optimal value function of the Markov decision process by minimizing the sum of squared Bellman residuals under a nonstationary behavior policy. The convergence of the algorithm and the performance of the near-optimal policy are analyzed theoretically, and simulation studies on the Mountain-Car learning control problem further verify the algorithm's learning efficiency and generalization performance.

17.
The filled function method (FFM) is an approach to find the global minimizer of multi-modal functions. The numerical applicability of conventional filled functions is limited as they are defined in terms of exponential or logarithmic functions. This paper proposes a new filled function that does not have such disadvantages. An algorithm is presented according to the theoretical analysis. A computer program is designed, implemented, and tested. Numerical experiments on typical testing functions show that the new approach is superior to the conventional one. The result of an optimization design for an electrical machine is also reported.

Scope and purpose: In the context of mathematical programming, global optimization is concerned with the theory and algorithms on minima of multi-modal functions. In general, global optimization approaches can be classified into two categories: probabilistic and deterministic. The former can usually be applied to general multi-modal functions, whereas the latter typically concentrates on some particular classes of functions. The filled function method is one of a few deterministic approaches which intend to find the global minimum for general multi-modal functions. However, the numerical performance of conventional filled functions is undesirable as they are defined in terms of exponential or logarithmic functions or require multiple parameters. This paper proposes a new filled function that does not have the above disadvantages. The present work consists of theoretical analysis, algorithm design, computer implementation, mathematical validation, and engineering application.

18.
A fast differential evolution algorithm based on orthogonal design and its applications
To further speed up differential evolution and improve its robustness, a fast differential evolution algorithm based on orthogonal design is proposed and applied to function optimization problems. While keeping the simplicity and effectiveness of traditional differential evolution, the new algorithm has the following features: 1) a crossover operator based on orthogonal design, combined with intuitive statistical analysis, generates the best offspring; 2) a decision-variable blocking strategy reduces the number of orthogonal experiments and speeds up convergence; 3) a multi-parent hybrid adaptive crossover-mutation operator based on non-convex theory enhances the algorithm's non-convex search ability and adaptivity; 4) the scaling factor of basic differential evolution is simplified to minimize the number of control parameters and ease use by practitioners. Experiments on 12 standard test functions, compared with other evolutionary algorithms, show that the new algorithm performs very well in solution accuracy, stability, and convergence.
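For reference, the classical DE/rand/1/bin baseline that the paper improves on can be sketched as follows. The orthogonal-design crossover, variable blocking, and adaptive multi-parent operator of the cited paper are not reproduced here; the sphere benchmark and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    # separable convex benchmark; global minimum 0 at the origin
    return sum(v * v for v in x)

def de(fn, dim, lo, hi, pop_size=20, gens=150, F=0.5, CR=0.9, seed=7):
    # classical DE/rand/1/bin with greedy one-to-one selection
    rnd = random.Random(seed)
    pop = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [fn(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rnd.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rnd.randrange(dim)        # guarantees one mutated gene
            trial = [
                min(hi, max(lo, pop[a][j] + F * (pop[b][j] - pop[c][j])))
                if (rnd.random() < CR or j == j_rand) else pop[i][j]
                for j in range(dim)
            ]
            f_trial = fn(trial)
            if f_trial <= fit[i]:              # replace parent only if no worse
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

best_x, best_f = de(sphere, 5, -5.0, 5.0)
```

The binomial crossover mixes one gene per trial at minimum; the paper's orthogonal-design operator replaces this random mixing with a systematic sampling of parent-gene combinations.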

19.
Processing high-dimensional chemical-engineering data with a conjugate particle swarm algorithm
Chemical-engineering data are mostly high-dimensional, and particle swarm optimization easily falls into local extrema when solving high-dimensional optimization problems. This paper therefore combines the conjugate direction method with particle swarm optimization to handle high-dimensional data. When the particle swarm algorithm has iterated for a certain number of steps and become trapped at a local extremum with local optimum x*, the conjugate direction method is applied with x* as its initial point; exploiting the effectiveness of particle swarm optimization on the resulting low-dimensional subproblems, a new and better current optimum x** is obtained, letting the algorithm escape the local extremum. From this new extremum, particle swarm optimization is applied to the original problem again, and the process repeats until termination. Tests on classical test functions show that this attempt is effective. Finally, the algorithm is applied to nonlinear parameter estimation for an SO2 catalytic oxidation kinetics model, with satisfactory results.

20.
A local search algorithm for the symmetric TSP based on a filled-function transformation
This paper presents a filled function algorithm for finding near-optimal solutions of the symmetric traveling salesman problem (TSP). First, after a local search algorithm obtains a locally minimal solution of the symmetric TSP, a filled-function transformation is applied to the problem, yielding a new combinatorial optimization problem whose local minima and global optima are, respectively, local minima and global optima of the original problem; moreover, in the region where the objective value of the symmetric TSP is greater than or equal to its current minimum, the new problem has only one known local minimum. A local search algorithm is then applied to the new problem to find one of its local minima, which is either the known one or a better local minimum of the symmetric TSP. Computational experiments on several standard instances show that the constructed algorithm outperforms local search applied directly to the symmetric TSP.
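The baseline that the paper improves on, plain 2-opt local search for the symmetric TSP, can be sketched as below. The filled-function transformation itself is not reproduced; the circle instance and first-improvement move order are assumptions for the demo.

```python
import math
import random

def tour_length(pts, tour):
    # total length of the closed tour through pts in the given order
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    # first-improvement 2-opt: reverse a segment whenever that shortens
    # the tour, and repeat until no improving move remains
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(pts, cand) < tour_length(pts, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

# 8 cities on a unit circle: the optimal tour visits them in circle order,
# with length 16*sin(pi/8) ~ 6.1229
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
rnd = random.Random(5)
tour = list(range(8))
rnd.shuffle(tour)
tour = two_opt(pts, tour)
length = tour_length(pts, tour)
```

For points in convex position every crossing can be removed by an improving 2-opt move, so this local search alone reaches the optimum here; on general instances it stalls at local minima, which is exactly where the paper's filled-function transformation takes over.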
