Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Generalized hill climbing (GHC) algorithms provide a well-defined framework for describing the performance of local search algorithms for discrete optimization problems. Necessary and sufficient convergence conditions for GHC algorithms are presented. These convergence conditions are derived using a new iteration classification scheme for GHC algorithms. The implications of the necessary and the sufficient convergence conditions for GHC algorithms with respect to existing convergence theory for simulated annealing are also discussed.
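To make the GHC template concrete, here is a minimal sketch assuming a generic objective `f`, a neighborhood sampler `neighbor`, and a per-iteration random threshold `threshold(k)` (all hypothetical names, not from the paper); simulated annealing drops out as one choice of threshold.

```python
import math
import random

def ghc_minimize(f, neighbor, x0, threshold, n_iters=10_000):
    """Generalized hill climbing template: accept a candidate whenever the
    cost increase does not exceed a random threshold R_k at iteration k."""
    x = best = x0
    for k in range(n_iters):
        y = neighbor(x)
        if f(y) - f(x) <= threshold(k):   # R_k >= 0 permits uphill moves
            x = y
            if f(x) < f(best):
                best = x
    return best

# Simulated annealing is the special case R_k = -T_k * ln(U), U ~ Uniform(0, 1),
# which reproduces the Metropolis acceptance probability exp(-delta / T_k).
def sa_threshold(k, t0=10.0, alpha=0.999):
    t_k = t0 * alpha ** k
    return -t_k * math.log(1.0 - random.random())
```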

2.
Fuzzy c-means (FCM) clustering algorithms have been widely used to solve clustering problems. Yang and Yu [1] extended these to optimization procedures with respect to any probability distribution. They showed that the optimal cluster centers are the fixed points of these generalized FCM clustering algorithms. Convergence properties are an important theoretical issue for such algorithms. In this paper, we present convergence properties of the generalized FCM clustering algorithms: global convergence, local convergence, and the rate of convergence.
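For reference, a sketch of the classic FCM iteration (Bezdek's alternating updates) whose fixed points the generalized algorithms characterize; the distribution-based generalizations of Yang and Yu are not reproduced here.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iters=100, tol=1e-6):
    """Classic fuzzy c-means: alternate membership and center updates;
    the optimal cluster centers are fixed points of this map."""
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=len(X))     # memberships, rows sum to 1
    for _ in range(n_iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]     # fuzzy-weighted centers
        D = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        P = D ** (-2.0 / (m - 1.0))
        U_new = P / P.sum(axis=1, keepdims=True)   # u_jk = 1 / sum_i (d_jk/d_ji)^(2/(m-1))
        if np.abs(U_new - U).max() < tol:
            return V, U_new
        U = U_new
    return V, U
```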

3.
王东  吴湘滨 《微机发展》2006,16(9):18-20
Based on genetic algorithm theory, this paper analyzes the influence of population diversity on the convergence behavior of genetic programming, proposes a practical method for tracking and evaluating population diversity, and presents an improved method for preferentially selecting parent individuals. Taking the traveling salesman problem as an example, statistical experiments verify that the improved algorithm achieves better convergence speed and better solutions than a conventional genetic algorithm using the same local optimization. The choice of the control parameters of the improved algorithm is also analyzed experimentally, with the conclusion that the improved algorithm attains better convergence performance.
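The paper's exact diversity metric is not reproduced here; a minimal stand-in for tracking population diversity in a permutation-coded GA might average pairwise Hamming distances:

```python
import itertools
import numpy as np

def population_diversity(population):
    """One simple diversity tracker of the kind the paper proposes (assumed
    form): mean pairwise Hamming distance between tours, scaled to [0, 1]."""
    pop = np.asarray(population)
    total, pairs = 0.0, 0
    for a, b in itertools.combinations(pop, 2):
        total += np.mean(a != b)   # fraction of positions that differ
        pairs += 1
    return total / pairs if pairs else 0.0
```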

4.
To address the premature convergence, long search time, and poor local search ability of the basic genetic algorithm, the causes of these problems are analyzed. Combining a best-individual preservation strategy with an immigration strategy, an improved adaptive-operator scheme based on the population's average fitness information is proposed, and the convergence of the improved genetic algorithm is proven. Simulation results show that the improved genetic algorithm is clearly superior in search efficiency, search precision, and avoidance of premature convergence.
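The abstract does not give the operator formulas; a common average-fitness-driven adaptation of this kind is the Srinivas-Patnaik scheme, sketched here under the assumption of a maximized fitness (the paper's exact rule may differ):

```python
def adaptive_rates(f, f_avg, f_max, pc_max=0.9, pm_max=0.1):
    """Average-fitness-based adaptive crossover/mutation probabilities:
    above-average individuals get proportionally smaller rates so good
    schemata survive; below-average individuals keep the full rates."""
    if f < f_avg:
        return pc_max, pm_max
    scale = (f_max - f) / (f_max - f_avg + 1e-12)
    return pc_max * scale, pm_max * scale
```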

5.
There have been various algorithms designed for simulating natural evolution. This paper proposes a new simulated evolutionary computation model called the abstract evolutionary algorithm (AEA), which unifies most of the currently known evolutionary algorithms and describes evolution as an abstract stochastic process composed of two fundamental operators: selection and evolution operators. By axiomatically characterizing the properties of the fundamental selection and evolution operators, several general convergence theorems and convergence rate estimations for the AEA are established. The established theorems are applied to a series of known evolutionary algorithms, directly yielding new convergence conditions and convergence rate estimations for various specific genetic algorithms and evolutionary strategies. The present work provides a significant step toward the establishment of a unified theory of simulated evolutionary computation.

6.
Multigrid methods have been proven to be an efficient approach for accelerating the convergence rate of numerical algorithms for solving partial differential equations. This paper investigates whether multigrid methods can also accelerate the convergence rate of evolutionary algorithms for solving global optimization problems. A novel multigrid evolutionary algorithm is proposed and its convergence is proven. The algorithm is tested on a set of 13 well-known benchmark functions. Experimental results demonstrate that multigrid methods can accelerate the convergence rate of evolutionary algorithms and improve their performance.

7.
To speed up the convergence of the super-twisting algorithm and to overcome the gain overestimation of existing algorithms, two adaptive-gain fast super-twisting algorithms are proposed. Convergence is accelerated by a fast terminal sliding-mode reaching law and by adding a linear term, respectively. A dual-layer adaptive law based on equivalent control adjusts the gains so that the sliding-mode existence condition holds while keeping the gains as small as possible. The Lyapunov method is used to prove that the new algorithms have better convergence properties, and parameter-tuning strategies for the two algorithms are derived from the bounded real lemma and the Schur complement theorem, respectively. Simulation examples show that, under the same control parameters, the energy consumption of the new algorithms is close to that of the original algorithm, while convergence is faster and robustness is stronger.
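For orientation, a sketch of one Euler step of a super-twisting controller with the added linear terms mentioned above; the gains k1..k4 are fixed here, whereas the paper adapts them online with a dual-layer law that is not reproduced.

```python
import numpy as np

def fast_super_twisting_step(s, v, k1, k2, k3, k4, dt):
    """One Euler step of a super-twisting controller on sliding variable s,
    with linear terms (-k3*s, -k4*s) added to speed up convergence."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) - k3 * s + v
    v = v + dt * (-k2 * np.sign(s) - k4 * s)   # integral (discontinuous) part
    return u, v
```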

8.
A class of adaptive ant colony algorithms and their convergence analysis
To overcome the tendency of ant colony algorithms to become trapped in local minima, and to improve convergence speed, a class of adaptive ant colony algorithms is proposed. The algorithms adaptively vary the pheromone evaporation coefficient to improve the global search ability and convergence speed of the traditional ant colony algorithm. Global convergence is analyzed via a Markov process, yielding conditions under which this class of algorithms converges globally; a pheromone update strategy satisfying these conditions is then constructed and its global convergence is proven. Simulations on typical TSP instances show considerable improvement in both convergence speed and solution quality over the typical ant colony algorithm.
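A minimal sketch of a pheromone update with an adaptively varied evaporation coefficient; the stagnation test and the decay rule here are assumptions, not the paper's exact strategy.

```python
import numpy as np

def adaptive_pheromone_update(tau, best_tour, best_len, rho,
                              rho_min=0.1, decay=0.95, stagnated=False, Q=1.0):
    """Evaporate and reinforce the pheromone matrix tau; if the best
    solution has stagnated, shrink rho toward rho_min so older trails
    persist longer and the search can escape local minima."""
    if stagnated:
        rho = max(decay * rho, rho_min)      # adaptive evaporation coefficient
    tau *= (1.0 - rho)                       # evaporation
    for i, j in zip(best_tour, np.roll(best_tour, -1)):
        tau[i, j] += Q / best_len            # reinforce best-tour edges
    return tau, rho
```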

9.
To address the premature convergence and slow convergence of genetic algorithms, the concept of a splitting operator, inspired by the fission reproduction of lower organisms, is proposed and introduced into both the traditional genetic algorithm and the adaptive genetic algorithm. A series of multimodal-function tests comparing the improved algorithms with the basic genetic algorithm and the adaptive genetic algorithm shows that, with the splitting operator, both algorithms not only converge effectively to the global optimum but also converge faster.

10.
This paper investigates new learning algorithms (LF I and LF II) based on a Lyapunov function for the training of feedforward neural networks. These algorithms have an interesting parallel with the popular backpropagation (BP) algorithm, in which the fixed learning rate is replaced by an adaptive learning rate computed using a convergence theorem based on Lyapunov stability theory. LF II, a modified version of LF I, has been introduced with the aim of avoiding local minima; this modification also helps improve the convergence speed in some cases. Conditions for achieving the global minimum for this kind of algorithm are studied in detail. The performance of the proposed algorithms is compared with the BP algorithm and extended Kalman filtering (EKF) on three benchmark function approximation problems: XOR, 3-bit parity, and 8-3 encoder. The comparisons are made in terms of the number of learning iterations and the computational time required for convergence. The proposed algorithms (LF I and II) converge much faster than the other two algorithms while attaining the same accuracy. Finally, a comparison is made on a complex two-dimensional (2-D) Gabor function, and the effect of the adaptive learning rate on faster convergence is verified. In a nutshell, the investigations made in this paper help us better understand the learning procedure of feedforward neural networks in terms of adaptive learning rate, convergence speed, and local minima.

11.
The authors develop persistence-of-excitation conditions for the exponential convergence of continuous-time adaptive algorithms. Exponential convergence is important for robustness; adaptive algorithms without such convergence can behave unacceptably in the presence of modeling inadequacies. Conditions for convergence are usually framed as spanning conditions on a regressor vector involving the output of the unknown system. In this study the authors translate these conditions into ones involving the system input only.
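For reference, the standard persistence-of-excitation condition on a regressor vector φ(t): there must exist α, β, T > 0 such that

```latex
\alpha I \;\preceq\; \int_{t}^{t+T} \varphi(\tau)\,\varphi^{\top}(\tau)\,d\tau \;\preceq\; \beta I
\qquad \text{for all } t \ge 0 ,
```

i.e., over every window of length T the regressor excites all directions of the parameter space; exponential parameter convergence follows when this holds.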

12.
This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
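A minimal sketch of the Goldstein/Armijo backtracking idea applied to one gradient step; the paper's Lipschitz-constant-based learning-rate estimates are not reproduced here.

```python
import numpy as np

def armijo_step(f, grad, w, eta0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    """Backtracking line search: shrink the learning rate eta until the
    sufficient-decrease condition f(w - eta*g) <= f(w) - c*eta*||g||^2 holds."""
    g = grad(w)
    fw = f(w)
    eta = eta0
    for _ in range(max_halvings):
        if f(w - eta * g) <= fw - c * eta * np.dot(g, g):
            break
        eta *= beta
    return w - eta * g, eta
```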

13.
Three abstract optimization problems are presented along with doubly iterative algorithms for their numerical solution. These algorithms are generalizations of particular algorithms described by Barr and Gilbert [19], [21] and Fujisawa and Yasuda [22]. The supporting theory is fully developed along with proofs of convergence. Practical aspects of computations are considered and procedures which insure rapid convergence are discussed. Two applications to discrete-time optimal control problems are described.

14.
To address the tendency of the whale optimization algorithm (WOA) to fall into local optima and its low convergence precision, an improved whale optimization algorithm (IWOA) is proposed. The algorithm initializes the population with quasi-opposition-based learning to increase population diversity; the linear convergence factor is replaced with a nonlinear one to better balance global exploration and local exploitation; an adaptive weight is added to improve local search ability and convergence precision; and a random differential mutation strategy adjusts the search in time to avoid local optima. In experiments on nine benchmark functions, with all algorithms run 30 times, the improved algorithm outperforms the original WOA and five other improved WOA variants in both mean and standard deviation, and its convergence curves are better than those of most of the other algorithms. This indicates that the improved algorithm has the best convergence precision and stability, and converges noticeably faster than most of the other improved WOA variants.
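Two of the listed ingredients admit short sketches. The nonlinear convergence-factor curve below is an assumption (the paper's exact formula is not given here); the quasi-opposition rule follows the usual QOBL definition.

```python
import numpy as np

def nonlinear_convergence_factor(t, t_max):
    """One plausible nonlinear replacement for WOA's linear factor
    a = 2*(1 - t/t_max): decay slowly early (exploration), fast late."""
    return 2.0 * (1.0 - (t / t_max) ** 2)

def quasi_opposite_population(X, lb, ub):
    """Quasi-opposition-based initialization: for each point x, sample
    uniformly between the domain center and the opposite point lb+ub-x."""
    center = (lb + ub) / 2.0
    opposite = lb + ub - X
    lo = np.minimum(center, opposite)
    hi = np.maximum(center, opposite)
    return lo + np.random.rand(*X.shape) * (hi - lo)
```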

15.
Ant colony optimization (ACO) has been widely applied to solve combinatorial optimization problems in recent years. There are few studies, however, on its convergence time, which reflects how many iterations ACO algorithms spend converging to the optimal solution. Based on an absorbing Markov chain model, we analyze the ACO convergence time in this paper. First, we present a general result for the estimation of convergence time that reveals the relationship between convergence time and pheromone rate. This general result is then extended to a two-step analysis of the convergence time, comprising: 1) the iteration time the pheromone rate spends reaching the objective value and 2) the convergence time calculated in expectation with the objective pheromone rate. Furthermore, four brief ACO algorithms are investigated using the proposed theoretical results as case studies. Finally, the conclusion of the case studies, that the pheromone rate and its deviation determine the expected convergence time, is numerically verified with the experimental results of four one-ant ACO algorithms and four ten-ant ACO algorithms.

16.
In this paper, weighted stochastic gradient (WSG) algorithms for ARX models are proposed by modifying the standard stochastic gradient identification algorithms. In the proposed algorithms, the correction term is a weighted combination of the correction terms of the standard stochastic gradient (SG) algorithm in the current and last recursive steps. In addition, a latest-estimation-based WSG (LE-WSG) algorithm is also established, and its convergence performance is analyzed. A numerical example shows that both the WSG and LE-WSG algorithms can achieve faster convergence and higher convergence precision than the standard SG algorithms if the weighting factor is appropriately chosen.
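A sketch of the WSG idea on top of the standard SG recursion for an ARX model; the weighting factor `w` and the initialization are illustrative, not the paper's values.

```python
import numpy as np

def wsg_identify(phis, ys, w=0.7):
    """Weighted stochastic gradient sketch: the parameter update blends the
    current and previous standard-SG correction terms with weight w."""
    n = phis.shape[1]
    theta = np.zeros(n)
    r = 1.0
    prev_corr = np.zeros(n)
    for phi, y in zip(phis, ys):
        r += phi @ phi                            # SG step-size normalizer
        corr = phi * (y - phi @ theta) / r        # standard SG correction
        theta = theta + w * corr + (1.0 - w) * prev_corr
        prev_corr = corr
    return theta
```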

17.
The cellular genetic algorithm is a genetic algorithm with a spatially structured population: genetic operations are restricted to neighboring individuals, which limits the diffusion speed of dominant genes, preserves population diversity, and improves the performance of the genetic algorithm. However, analyses of the convergence of cellular genetic algorithms are still scarce. Based on the characteristics of cellular genetic algorithms, this paper builds an absorbing-state Markov chain model for them and proves their convergence. A method for estimating the expected first hitting time of the optimal solution is proposed, and upper and lower bounds on this expected time are estimated for the standard synchronous cellular genetic algorithm.
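The spatial restriction is the defining feature; a minimal sketch of the mating neighborhood on a toroidal grid (a von Neumann neighborhood is assumed here).

```python
def cga_neighbors(i, j, rows, cols):
    """Von Neumann neighborhood on a toroidal grid: in a cellular GA, the
    individual at cell (i, j) selects mates only from these adjacent cells,
    which slows the diffusion of dominant genes across the population."""
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]
```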

18.
EM-type algorithms are popular tools for modal estimation and the most widely used parameter estimation procedures in statistical modeling. However, they are often criticized for their slow convergence. Despite the appearance of numerous acceleration techniques over the last decades, their use has been limited because they are either difficult to implement or not general. In the present paper, a new generation of fast, general, and simple maximum likelihood estimation (MLE) algorithms is presented. In these cyclic iterative algorithms, extrapolation techniques are integrated with the iterations of gradient-based MLE algorithms, with the objective of accelerating the convergence of the base iterations. New complementary strategies such as cycling, squaring, and alternating are added to these processes. The presented schemes generally exhibit either fast linear or superlinear convergence. Numerical illustrations allow us to compare a selection of the variants and generally confirm that this class of algorithms is extremely simple as well as fast.
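The "squaring" strategy mentioned above is best known in SQUAREM form; here is a sketch of one such extrapolation step around a fixed-point map F (e.g., one EM update), with the steplength choice being one common variant rather than the paper's exact scheme.

```python
import numpy as np

def squarem_step(F, x):
    """One squaring-extrapolation step: combine two base iterations of the
    fixed-point map F with a steplength alpha chosen from the residuals."""
    x1 = F(x)
    x2 = F(x1)
    r = x1 - x                    # first residual
    v = (x2 - x1) - r             # change in residual
    alpha = -np.linalg.norm(r) / (np.linalg.norm(v) + 1e-12)
    x_new = x - 2.0 * alpha * r + alpha ** 2 * v
    return F(x_new)               # stabilizing base update afterwards
```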

19.
A parallel randomized support vector machine (PRSVM) and a parallel randomized support vector regression (PRSVR) algorithm based on a randomized sampling technique are proposed in this paper. The proposed PRSVM and PRSVR have four major advantages over previous methods. (1) We prove that the proposed algorithms achieve an average convergence rate that is, to the best of our knowledge, the fastest bounded convergence rate among all SVM decomposition training algorithms. The fast average convergence bound is achieved by a unique priority-based sampling mechanism. (2) Unlike previous work (Provably fast training algorithm for support vector machines, 2001), the proposed algorithms work for general linear non-separable SVM and general nonlinear SVR problems. This improvement is achieved by modeling new LP-type problems based on Karush-Kuhn-Tucker optimality conditions. (3) The proposed algorithms are the first parallel versions of randomized sampling algorithms for SVM and SVR. Both the analytical convergence bound and the numerical results in a real application show that the proposed algorithms have good scalability. (4) We present demonstrations of the algorithms based on both synthetic data and data obtained from a real-world application. Performance comparisons with SVMlight show that the proposed algorithms may be efficiently implemented.

20.
J. T. Marti, Computing, 1979, 21(2): 105-111
New proofs are given for the known convergence of the additive and linear (i.e., unconstrained) ART algorithms. These algorithms belong to a class of methods for the reconstruction of digitized pictures from one-dimensional views, used e.g. in X-ray tomography. Avoiding the detour of solving systems of inequalities, the first proof gives, simultaneously and in a very direct way, the convergence of both the additive and the linear algorithms. A second proof shows the geometric convergence of the linear algorithm using elementary matrix algebra only.
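The additive ART iteration is the Kaczmarz method; a minimal sketch of one sweep over the ray equations Ax = b follows (variable names are illustrative).

```python
import numpy as np

def art_sweep(A, b, x, relax=1.0):
    """One sweep of the additive (unconstrained) ART / Kaczmarz iteration:
    project the current image estimate x onto each ray equation in turn."""
    for a_i, b_i in zip(A, b):
        denom = a_i @ a_i
        if denom > 0:
            x = x + relax * (b_i - a_i @ x) / denom * a_i
    return x
```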
