Similar Literature
 Found 20 similar documents in 156 ms
1.
A Survey of the Development of Bayesian Optimization Algorithms   Cited by: 1 (self-citations: 0, other citations: 1)
This paper introduces the Bayesian optimization algorithm (BOA) and, covering the hierarchical BOA, the multi-objective hierarchical BOA, and the incremental BOA proposed for different optimization problems by combining BOA with classical optimization methods, gives a comprehensive summary of the algorithm design, theoretical analysis, and applications of Bayesian optimization algorithms. It also discusses in depth open issues such as the high computational cost of BOA, the difficulty of building accurate probabilistic models, and the extension of its application domains.

2.
To extend the compliance-minimization problem in topology optimization to a general displacement-minimization problem, the design domain is discretized with finite elements, a truss-like continuum material model is adopted, and the bars are assumed to be continuously distributed over the design domain. Taking the densities and orientations of the bars at the nodes as design variables and the displacement at a specified location and in a specified direction as the objective function, an optimality-criteria method based on the gradient of the objective function is used to form a topology-optimized truss-like continuum by optimizing the continuous distribution field of the bars. Combined with basic concepts of structural mechanics, the method allows a subset of the bars to be selected to form a topology-optimized rigid frame.

3.
Advances in Information Fusion Theory: Joint Optimization Based on Variational Bayes   Cited by: 1 (self-citations: 0, other citations: 1)
潘泉  胡玉梅  兰华  孙帅  王增福  杨峰 《自动化学报》2019,45(7):1207-1223
By reviewing recent developments in information fusion theory, this paper analyzes problems in complex target tracking systems such as nonlinearity, multiple modes, deep coupling, networking, high dimensionality, and unknown disturbance inputs, and points out the necessity of joint optimization in current target tracking systems. It then discusses the main approaches to the joint optimization problem, including joint detection and estimation, joint clustering and estimation, joint association and estimation, and joint decision and estimation. It further focuses on a unified framework for variational Bayesian identification, estimation, and optimization and the integrated joint optimization methods for target tracking built on it, and, taking skywave over-the-horizon radar as the application background, gives a general description of the algorithm for multipath, multi-mode, multi-target tracking scenarios. Finally, open problems and future research directions for variational Bayesian theory in target tracking are discussed.

4.
A Fuzzy-Based Multi-Objective Particle Swarm Optimization Algorithm and Its Application   Cited by: 5 (self-citations: 0, other citations: 5)
Particle swarm optimization (PSO) draws its ideas from artificial life and evolutionary computation and, being easy to understand and implement, has been applied in many fields. Because the traditional PSO algorithm cannot solve multi-objective optimization problems, this paper uses membership functions from fuzzy theory and a given principle for evaluating and selecting the best solution to propose a fuzzy particle swarm optimization algorithm (FPSO) suited to constrained multi-objective optimization problems. FPSO successfully solves the reliability-based robust design optimization of automotive components; simulation results show that the algorithm is feasible and effective and also extends the application scope of particle swarm optimization.
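To make the fuzzy-aggregation idea concrete, here is a minimal Python sketch in which each particle's objectives are mapped to [0, 1] membership values and aggregated into a single fitness that drives an otherwise standard PSO update. The two toy objectives, the linear membership form, the min-aggregation, and all coefficients are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of the fuzzy-membership idea behind FPSO (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    # Two conflicting toy objectives, both to be minimized (assumed).
    return np.array([np.sum(x ** 2), np.sum((x - 2.0) ** 2)])

def fuzzy_fitness(obj_matrix):
    # Linear membership per objective (1 = best in the pool, 0 = worst),
    # aggregated pessimistically by the minimum over objectives.
    lo, hi = obj_matrix.min(axis=0), obj_matrix.max(axis=0)
    memberships = (hi - obj_matrix) / (hi - lo + 1e-12)
    return memberships.min(axis=1)

dim, n_particles, n_iter = 2, 30, 100
x = rng.uniform(-5.0, 5.0, (n_particles, dim))
v = np.zeros_like(x)
pbest = x.copy()

for _ in range(n_iter):
    # Score current positions and personal bests on the same membership scale.
    pool = np.vstack([x, pbest])
    fit = fuzzy_fitness(np.array([objectives(p) for p in pool]))
    fit_x, fit_p = fit[:n_particles], fit[n_particles:]
    improved = fit_x > fit_p
    pbest[improved] = x[improved]
    best_fit = np.where(improved, fit_x, fit_p)
    gbest = pbest[np.argmax(best_fit)]
    # Standard PSO velocity/position update.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v

print("compromise solution found by the fuzzy PSO sketch:", gbest)
```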

5.
王蓓  孙玉东  金晶  张涛  王行愚 《控制与决策》2019,34(6):1319-1324
Traditional Bayesian classification methods such as Gaussian discriminant analysis and naive Bayes usually simplify the correlations among variables when constructing their joint probability distribution, so the estimates of the class-conditional probability densities in Bayesian decision theory deviate from the actual data. To address this, the correlation among feature variables is studied with Copula functions and a Bayesian classifier based on D-vine Copula theory is designed, the main purpose being to improve the accuracy of the class-conditional density estimates. The joint probability distribution of the variables is decomposed into a product of a series of bivariate Copula functions and marginal probability density functions; the marginal densities are estimated with kernel methods, the parameters of the bivariate Copula functions are optimized separately by maximum likelihood estimation, and the form of the class-conditional probability density function is thereby obtained. The D-vine Copula-based Bayesian classifier is applied to the classification of bioelectrical signals, and its classification performance is analyzed and validated. The results show that the proposed method performs well on all classification metrics.
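A minimal two-feature sketch of the construction is shown below (with two features the D-vine reduces to a single bivariate copula): kernel density estimates serve as marginals, a Gaussian copula with a moment-style parameter estimate stands in for the maximum-likelihood pair-copula fit, and classification follows the Bayes rule. The synthetic data, the copula family, and the estimator are assumptions for illustration, not the paper's exact procedure.

```python
# Class-conditional density modeled as (bivariate Gaussian copula) x (kernel marginals).
import numpy as np
from scipy.stats import gaussian_kde, norm

def fit_class_model(X):
    """Fit kernel marginals and a Gaussian copula parameter for one class."""
    kdes = [gaussian_kde(X[:, j]) for j in range(X.shape[1])]
    # Transform training data to normal scores via the KDE-based CDFs.
    u = np.column_stack([
        [kdes[j].integrate_box_1d(-np.inf, x) for x in X[:, j]]
        for j in range(X.shape[1])
    ])
    z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))
    rho = np.corrcoef(z[:, 0], z[:, 1])[0, 1]   # moment estimate as a stand-in for MLE
    return kdes, rho

def class_density(x, kdes, rho):
    """f(x1, x2 | class) = c_rho(F1(x1), F2(x2)) * f1(x1) * f2(x2)."""
    u = np.array([np.clip(k.integrate_box_1d(-np.inf, xi), 1e-6, 1 - 1e-6)
                  for k, xi in zip(kdes, x)])
    z = norm.ppf(u)
    cop = (1.0 / np.sqrt(1.0 - rho ** 2)) * np.exp(
        -(rho ** 2 * (z[0] ** 2 + z[1] ** 2) - 2.0 * rho * z[0] * z[1])
        / (2.0 * (1.0 - rho ** 2)))
    marg = np.prod([k(xi)[0] for k, xi in zip(kdes, x)])
    return cop * marg

# Synthetic two-class data (assumed), then a Bayes-rule decision with equal priors.
rng = np.random.default_rng(1)
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], 200)
X1 = rng.multivariate_normal([2, 2], [[1.0, -0.4], [-0.4, 1.0]], 200)
models = [fit_class_model(X0), fit_class_model(X1)]

x_test = np.array([1.0, 1.5])
posteriors = [class_density(x_test, kdes, rho) for kdes, rho in models]
print("predicted class:", int(np.argmax(posteriors)))
```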

6.
朱明敏  刘三阳  汪春峰 《自动化学报》2011,37(12):1514-1519
To address the shortcomings of learning Bayesian network (BN) structures from small-sample datasets and the instability of statistical conditional independence (CI) tests as the conditioning set grows, an optimization method for learning the network structure based on a prior node ordering is proposed. By defining an optimization objective function and a feasible region, the new method for the first time converts the Bayesian network structure learning problem into a mathematical programming problem of finding the extremum of the objective function, and proofs of the existence and uniqueness of the optimal solution are given, offering a new scheme for the continued extension of Bayesian network research. Theoretical proofs and experimental results demonstrate the correctness and effectiveness of the new method.

7.
亢良伊  王建飞  刘杰  叶丹 《软件学报》2018,29(1):109-130
Machine learning problems are usually cast as an objective function to be solved, and optimization algorithms are the key tools for solving for the parameters of that objective. In big-data settings, parallel and distributed optimization algorithms must be designed to accelerate training through multi-core and distributed computing. A large body of work has emerged in this area in recent years, and some of the algorithms are widely used on machine learning platforms. This paper studies the five most common classes of optimization methods: gradient descent, second-order methods, proximal gradient methods, coordinate descent, and the alternating direction method of multipliers. For each class, related work is analyzed from the perspectives of single-machine parallelism and distributed parallelism, and the algorithms are compared in detail with respect to model characteristics, input data characteristics, algorithm evaluation, and parallel computation models. The implementation and use of these optimization algorithms on representative scalable machine learning platforms are then compared. All of the optimization algorithms covered are also organized into a multi-level taxonomy, so that users can select a suitable algorithm according to the type of objective function, or use the taxonomy to explore how to apply optimization algorithms to new objective types. Finally, the limitations of existing optimization algorithms are analyzed, possible solutions are suggested, and future research directions are outlined.

8.
Multi-objective optimization problems in real engineering often have black-box characteristics and require time-consuming functional evaluations; solving them with traditional evolutionary optimization is computationally expensive and hard to realize. Given the efficiency of surrogate-based optimization for engineering design problems that require functional evaluations, a small-sample, data-driven method for adaptive Bayesian SVR modeling and expensive constrained multi-objective surrogate optimization is proposed. The method uses a Bayesian SVR model to reduce the expensive simulation cost of functional evaluation, selects new designs with an aggregation strategy that maximizes a constrained expected-improvement matrix, and continually updates the small-sample information to realize data-driven adaptive updating of the Bayesian SVR model and progressive optimization. The Bayesian SVR model has strong boundary-characterization ability and provides predictive uncertainty measures, offering prediction-accuracy guarantees and potential improvement directions for selecting new samples. The proposed Chebyshev-distance and Manhattan-distance aggregation strategies consider the improvement range of sample infilling, giving strong ability to explore the improvement boundary, with low computational complexity and broad applicability in multivariable optimization problems. Results on test functions and engineering examples show that: 1) the proposed method effectively reduces expensive simulation cost under small-sample conditions and improves the optimization efficiency of expensive constrained multi-objective problems; 2) the Pareto front obtained for expensive constrained multi-objective problems has advantages in convergence, diversity, and spatial distribution.

9.
Sparse structure learning of Bayesian networks, which both simplifies the structure and preserves the important information of the original network, has become a research focus in Bayesian networks. This paper first discusses the necessity of sparse structure learning for Bayesian networks and the definition of Bayesian network sparsity, and on that basis surveys existing lines of research on sparse structure learning. It then reviews general Bayesian network structure learning methods and analyzes their problems in high-dimensional settings, finding that score-based methods are usually well suited to sparse structure learning; accordingly, the objective functions and optimization algorithms for sparse Bayesian network structure learning are presented in detail. Finally, some future research directions for sparse Bayesian network structure learning are discussed.

10.
段书晴  陈森  赵志良 《控制与决策》2022,37(6):1559-1566
This paper studies the distributed optimization problem for a class of first-order multi-agent systems with unknown external disturbances. In the distributed optimization task, each agent is only allowed to use its own local objective function and its neighbors' state information to design a distributed optimization algorithm that minimizes the global objective function, defined as the sum of all local objective functions. For this problem, an active-disturbance-rejection distributed optimization algorithm composed of an extended state observer (ESO) and an optimization algorithm is first proposed. Next, a new method is developed on the basis of Lyapunov stability to rigorously prove the convergence and stability of the closed-loop system: when the external disturbance is constant, the designed algorithm makes the states of all agents converge exponentially to the minimizer of the global objective; when the external disturbance is bounded, by adjusting the gain parameter of the extended state observer the algorithm makes the states of all agents converge to an arbitrarily small neighborhood of the global minimizer. Finally, simulation results demonstrate the effectiveness of the optimization algorithm.
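The overall structure (local gradient plus consensus feedback plus an ESO that estimates and cancels the disturbance) can be illustrated with a small simulation. The sketch below assumes quadratic local objectives, a ring graph, constant disturbances, hand-picked gains, and an added integral consensus term so that the simplified controller reaches the exact optimum; it is not the controller designed and proven in the paper.

```python
# Minimal simulation sketch of ESO-based (active-disturbance-rejection) distributed optimization.
import numpy as np

n = 5
a = np.array([1.0, 3.0, -2.0, 0.5, 4.0])    # local minimizers of f_i(x)=0.5*(x-a_i)^2; optimum = mean(a)
d = np.array([0.5, -1.0, 0.8, 0.0, -0.3])   # unknown constant disturbances
A = np.zeros((n, n))                        # ring communication graph
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = 1.0
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

x = np.zeros(n)        # agent states
w = np.zeros(n)        # integral consensus states (enable exact convergence)
z1 = np.zeros(n)       # ESO estimate of x
z2 = np.zeros(n)       # ESO estimate of the disturbance
alpha, beta, gamma = 1.0, 2.0, 1.0          # gradient / consensus / integral gains (assumed)
l1, l2 = 20.0, 100.0                        # observer gains (assumed)
dt, steps = 1e-3, 20000

for _ in range(steps):
    grad = x - a                            # local gradients, known only to agent i
    u = -alpha * grad - beta * (L @ x) - gamma * w - z2
    e = x - z1                              # ESO for the plant x_dot = u + d
    z1 += dt * (z2 + u + l1 * e)
    z2 += dt * (l2 * e)
    w += dt * (L @ x)
    x += dt * (u + d)                       # true dynamics with the unknown disturbance

print("agent states  :", np.round(x, 3))    # all close to the global minimizer
print("global optimum:", np.mean(a))
```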

11.
A New Method for Continuous Function Optimization: The Ant Colony Algorithm   Cited by: 4 (self-citations: 2, other citations: 4)
For continuous function optimization problems, a stochastic search algorithm based on ant colony swarm-intelligence search is presented. It does not require the objective function to be differentiable and can effectively overcome the common weakness of classical algorithms of getting trapped in local optima. The basic ant colony algorithm is improved, and results on several function optimization benchmarks show that the algorithm performs well. In addition, a genetic algorithm is used to tune several important parameters of the ant colony algorithm, improving its convergence speed.

12.
Constrained optimization of high-dimensional numerical problems plays an important role in many scientific and industrial applications. Function evaluations in many industrial applications are severely limited, and often only little analytical information about the objective function and constraint functions is available. For such expensive black-box optimization tasks, the constrained optimization algorithm COBRA (Constrained Optimization By Radial Basis Function Approximation) was proposed, making use of RBF (radial basis function) surrogate modeling for both the objective and the constraint functions. COBRA has shown remarkable success in reliably solving complex benchmark problems in fewer than 500 function evaluations. Unfortunately, COBRA requires careful adjustment of its parameters in order to do so. In this work we present a new algorithm, SACOBRA (Self-Adjusting COBRA), which is based on COBRA and capable of achieving high-quality results with very few function evaluations and no parameter tuning. It is shown with the help of performance profiles on a set of benchmark problems (G-problems, MOPTA08) that SACOBRA consistently outperforms COBRA with different fixed parameter settings. We analyze the importance of the new elements in SACOBRA and show that each of them helps boost the overall optimization performance. We discuss the reasons and thereby gain a better understanding of high-quality RBF surrogate modeling.
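A bare-bones illustration of the RBF-surrogate loop that COBRA and SACOBRA build on is given below (this is not the SACOBRA algorithm itself): cubic RBF surrogates are fitted to the expensive objective and constraint, the surrogates are optimized, and the true functions are evaluated only at each proposal. The toy problem, the budget, and all settings are assumptions for illustration.

```python
# Generic RBF-surrogate loop for an expensive constrained problem (not SACOBRA itself).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

def f_true(x):      # "expensive" objective (toy stand-in)
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def g_true(x):      # "expensive" constraint, feasible iff g(x) <= 0
    return 1.0 - x[0] - x[1]

rng = np.random.default_rng(0)
lb, ub = np.array([-3.0, -3.0]), np.array([3.0, 3.0])
X = rng.uniform(lb, ub, (8, 2))                       # small initial design
F = np.array([f_true(x) for x in X])
G = np.array([g_true(x) for x in X])

for _ in range(20):                                   # budget of 20 extra true evaluations
    f_hat = RBFInterpolator(X, F, kernel="cubic")
    g_hat = RBFInterpolator(X, G, kernel="cubic")
    x0 = X[np.argmin(F + 1e6 * np.maximum(G, 0.0))]   # start from the best penalized point
    res = minimize(lambda x: float(f_hat(x[None, :])[0]), x0,
                   method="SLSQP",
                   bounds=list(zip(lb, ub)),
                   constraints=[{"type": "ineq",
                                 "fun": lambda x: -float(g_hat(x[None, :])[0])}])
    x_new = np.clip(res.x, lb, ub)
    if np.min(np.linalg.norm(X - x_new, axis=1)) < 1e-8:
        x_new = np.clip(x_new + rng.normal(scale=1e-3, size=2), lb, ub)  # avoid duplicate points
    X = np.vstack([X, x_new])                          # evaluate the true functions once
    F = np.append(F, f_true(x_new))
    G = np.append(G, g_true(x_new))

feasible = G <= 1e-6
best = np.argmin(np.where(feasible, F, np.inf))
print("best feasible point:", X[best], "objective:", F[best])
```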

13.
Since Hopfield's seminal work on energy functions for neural networks and their consequences for the approximate solution of optimization problems, much attention has been devoted to neural heuristics for combinatorial optimization. These heuristics are often very time-consuming because of the need for randomization or Monte Carlo simulation during the search for solutions. In this paper, we propose a general energy function for a new neural model, the random neural model of Gelenbe. This model proposes a scheme of interaction between the neurons rather than a dynamic equation of the system. We then apply this general energy function to different optimization problems.

14.
This paper introduces a surrogate-model-based algorithm for computationally expensive mixed-integer black-box global optimization problems with both binary and non-binary integer variables that may have computationally expensive constraints. The goal is to find accurate solutions with relatively few function evaluations. A radial basis function surrogate model (response surface) is used to select candidates for the integer and continuous decision variable points at which the computationally expensive objective and constraint functions are to be evaluated. In every iteration multiple new points are selected based on different methods, and the function evaluations are done in parallel. The algorithm converges to the global optimum almost surely. The performance of this new algorithm, SO-MI, is compared to a branch-and-bound algorithm for nonlinear problems, a genetic algorithm, and the NOMAD (Nonsmooth Optimization by Mesh Adaptive Direct Search) algorithm for mixed-integer problems on 16 test problems from the literature (constrained, unconstrained, unimodal and multimodal problems), as well as on two application problems arising from structural optimization and three application problems from optimal reliability design. The numerical experiments show that SO-MI reaches significantly better results than the other algorithms when the number of function evaluations is very restricted (200–300 evaluations).

15.
In real-world applications, most optimization problems are subject to different types of constraints. These problems are known as constrained optimization problems (COPs). Solving COPs is a very important area of the optimization field. In this paper, a hybrid multi-swarm particle swarm optimization (HMPSO) is proposed to deal with COPs. This method adopts a parallel search operator in which the current swarm is partitioned into several sub-swarms and particle swarm optimization (PSO) serves as the search engine for each sub-swarm. Moreover, in order to explore more promising regions of the search space, differential evolution (DE) is incorporated to improve the personal best of each particle. First, the method is tested on 13 benchmark test functions and compared with three state-of-the-art approaches. The simulation results indicate that the proposed HMPSO is highly competitive in solving the 13 benchmark test functions. Afterward, the effectiveness of some mechanisms proposed in this paper and the effect of the parameter settings were validated by various experiments. Finally, HMPSO is further applied to solve the 24 benchmark test functions collected in the 2006 IEEE Congress on Evolutionary Computation (CEC2006), and the experimental results indicate that HMPSO is able to deal with 22 of the test functions.

16.
We propose an algorithm for solving nonsmooth, nonconvex, constrained optimization problems, as well as a new set of visualization tools for comparing the performance of optimization algorithms. Our algorithm is a sequential quadratic optimization method that employs Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy. While our method has no convergence guarantees, we have found it to perform very well in practice on challenging test problems in controller design involving both locally Lipschitz and non-locally-Lipschitz objective and constraint functions with constraints that are typically active at local minimizers. To empirically validate and compare our method with available alternatives on a new test set of 200 problems of varying sizes, we employ new visualization tools which we call relative minimization profiles. Such profiles are designed to simultaneously assess the relative performance of several algorithms with respect to objective quality, feasibility, and speed of progress, highlighting the trade-offs between these measures when comparing algorithm performance.
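The exact-penalty idea itself is easy to illustrate: replace the constrained problem with an unconstrained one whose objective adds a weighted sum of constraint violations, and minimize it with a quasi-Newton method. The sketch below is a rough stand-in, not the paper's BFGS-SQP method or its steering strategy; the toy problem and the crude penalty-parameter update are assumptions.

```python
# Rough sketch of an L1 exact penalty minimized with BFGS (not the paper's method).
import numpy as np
from scipy.optimize import minimize

def f(x):                    # nonsmooth objective (toy)
    return np.abs(x[0] - 1.0) + 0.5 * (x[1] - 2.0) ** 2

def c(x):                    # inequality constraints, feasible iff c_i(x) <= 0
    return np.array([x[0] + x[1] - 2.0, -x[0]])

def penalty(x, mu):          # exact (L1) penalty function
    return f(x) + mu * np.sum(np.maximum(c(x), 0.0))

x, mu = np.array([3.0, 3.0]), 1.0
for _ in range(10):
    # BFGS on a nonsmooth function has no guarantees but often works well in
    # practice, which is the empirical observation the paper builds on.
    res = minimize(penalty, x, args=(mu,), method="BFGS")
    x = res.x
    if np.all(c(x) <= 1e-6):
        break
    mu *= 10.0               # crude stand-in for the steering strategy

print("approximate solution:", np.round(x, 4), "constraints:", np.round(c(x), 4))
```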

17.
The social foraging behavior of Escherichia coli bacteria has been used to solve optimization problems. This paper proposes a hybrid approach involving genetic algorithms (GA) and bacterial foraging (BF) algorithms for function optimization problems. We first illustrate the proposed method on four test functions, and the performance of the algorithm is studied with an emphasis on mutation, crossover, variation of step sizes, chemotactic steps, and the lifetime of the bacteria. The proposed algorithm is then used to tune a PID controller of an automatic voltage regulator (AVR). Simulation results clearly illustrate that the proposed approach is very efficient and could easily be extended to other global optimization problems.

18.
Multi-objective evolutionary algorithms are receiving increasing attention and study because of their powerful ability to solve multi-objective optimization problems with multiple conflicting objective functions. Extremal optimization (EO), as a relatively new evolutionary algorithm, has been applied with some success to various discrete and continuous optimization test functions as well as engineering optimization problems, but research on multi-objective EO algorithms remains very limited. This paper introduces the basic principles of Pareto optimization into extremal optimization and proposes a multi-objective extremal optimization algorithm based on multi-point non-uniform mutation for continuous multi-objective optimization problems. Simulation results on six internationally recognized continuous multi-objective optimization test functions show that the proposed algorithm outperforms classical multi-objective optimization algorithms such as NSGA-II, PAES, SPEA, and SPEA2 in both convergence and distribution.

19.
To address the difficulty of tuning the parameters of the traditional kernel extreme learning machine (KELM) and to improve classification accuracy, a KELM algorithm based on improved Bayesian optimization is proposed. A salp swarm algorithm is used to design the lower-confidence-bound strategy for the acquisition function in the Bayesian optimization framework, improving the algorithm's local search and optimization capability; this improved Bayesian optimization algorithm is then used to tune the KELM parameters, and the optimal parameters are used to build the KELM classifier. Simulation experiments on real UCI datasets show that, compared with traditional Bayesian optimization, the proposed algorithm improves the classification accuracy of the KELM, and compared with other optimization algorithms it is feasible and effective.
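The surrounding Bayesian optimization loop with a lower-confidence-bound (LCB) acquisition can be sketched as follows. Assumptions: an RBF-kernel SVM stands in for the kernel extreme learning machine, the acquisition is minimized by plain random candidate search (the paper improves this step with a salp swarm algorithm), and the search ranges and kappa value are illustrative.

```python
# Minimal Bayesian optimization sketch with an LCB acquisition for hyperparameter tuning.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

def loss(params):
    # params = (log10 C, log10 gamma); objective = 1 - cross-validated accuracy.
    clf = SVC(C=10.0 ** params[0], gamma=10.0 ** params[1])
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

bounds = np.array([[-2.0, 3.0], [-4.0, 1.0]])           # log10 ranges for C and gamma (assumed)
P = rng.uniform(bounds[:, 0], bounds[:, 1], (5, 2))     # initial design
L = np.array([loss(p) for p in P])

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(P, L)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], (500, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    lcb = mu - 2.0 * sigma                               # lower confidence bound
    p_new = cand[np.argmin(lcb)]                         # random search stands in for the salp swarm step
    P = np.vstack([P, p_new])
    L = np.append(L, loss(p_new))

best = P[np.argmin(L)]
print("best log10(C), log10(gamma):", np.round(best, 3), "CV accuracy:", 1.0 - L.min())
```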

20.
A Class of Ant Colony Algorithms for Continuous-Domain Optimization   Cited by: 1 (self-citations: 0, other citations: 1)
The classical ant colony algorithm, inspired by the foraging behavior of real ant colonies, is very well suited to combinatorial optimization problems, but its inherently discrete nature limits its use for continuous-space problems. An improved ant colony algorithm for continuous-domain optimization is therefore proposed. Its local search is based on the classical ant colony optimization ideas for discrete problems, while its global search uses crossover- and mutation-like operations analogous to those of genetic algorithms, called Ant Diffusion and Ant Walk; after each generation an elitist strategy carries the best individual of the current generation over to the next one. Finally, the improved algorithm is tested on several benchmark functions, with good results in all cases, demonstrating its effectiveness.
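A minimal sketch of a continuous-domain ant colony scheme with elitism is given below. The operator names Ant Diffusion (a crossover-like recombination of two parents) and Ant Walk (a local Gaussian perturbation) come from the abstract, but the concrete forms, the pheromone-style weighting, and all parameters are assumptions for illustration.

```python
# Continuous-domain ant colony sketch with elitism (illustrative assumptions throughout).
import numpy as np

def sphere(x):                          # toy objective to minimize
    return np.sum(x ** 2)

rng = np.random.default_rng(0)
dim, n_ants, n_gen = 5, 20, 200
lb, ub = -5.0, 5.0
colony = rng.uniform(lb, ub, (n_ants, dim))

for gen in range(n_gen):
    fitness = np.array([sphere(x) for x in colony])
    order = np.argsort(fitness)
    elite = colony[order[0]].copy()     # elitist strategy: keep the best ant
    # Pheromone-style selection probabilities: better ants are picked more often.
    weights = 1.0 / (1.0 + fitness - fitness.min())
    probs = weights / weights.sum()
    new_colony = [elite]
    while len(new_colony) < n_ants:
        i, j = rng.choice(n_ants, size=2, p=probs)
        mix = rng.random(dim)
        child = mix * colony[i] + (1.0 - mix) * colony[j]        # "Ant Diffusion" (assumed form)
        step = (ub - lb) * 0.1 * (1.0 - gen / n_gen)             # shrinking step size
        child += rng.normal(scale=step, size=dim)                # "Ant Walk" (assumed form)
        new_colony.append(np.clip(child, lb, ub))
    colony = np.array(new_colony)

best = min(colony, key=sphere)
print("best solution:", np.round(best, 4), "objective:", sphere(best))
```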
