1.
Stochastic gradient descent (SGD) is one of the effective methods for handling large-scale data. As a black-box method, SGD attains the optimal O(1/T) convergence rate under strong convexity, but structured optimization algorithms for L1+L2-regularized learning problems, such as COMID (composite objective mirror descent), achieve only an O(ln T/T) rate. This paper proposes a weighted COMID-based algorithm that guarantees sparsity, and proves that it not only attains the O(1/T) convergence rate but also supports on-the-fly computation, which reduces the computational cost. Experimental results confirm the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
2.
3.
T. P. Krasulina 《Automation and Remote Control》2018,79(2):286-288
We study the one-sided convergence of the modified Anbar’s process in previously unexplored cases of the choice of the sequence of steps.
4.
5.
Tao Qing, Ma Po, Zhang Menghan, Tao Wei 《数据采集与处理 (Journal of Data Acquisition and Processing)》2017,32(1):17-25
Stochastic optimization is the mainstream approach to large-scale machine learning problems; its central research questions are whether an algorithm attains the optimal convergence rate and whether it preserves the structure of the learning problem. Many stochastic optimization algorithms now exist for regularized loss functions, but most establish convergence rates only for the averaged-iterate output and cannot even guarantee the most typical structure, sparsity. By contrast, individual iterates preserve sparsity well, and their optimal convergence rate has been widely explored as an open problem. Moreover, the unbiased-gradient assumption commonly made in stochastic optimization often fails to hold; under biased gradients, the bias terms in the convergence bounds of accelerated methods accumulate over the iterations, rendering those methods inapplicable. This paper surveys the state of the art and open issues in first-order stochastic gradient methods, including individual convergence rates, biased-gradient settings, and nonconvex optimization, and on this basis identifies several problems worth further study.
6.
Water Wave Optimization (WWO) is an emerging evolutionary algorithm inspired by shallow-water wave phenomena; it searches high-dimensional solution spaces efficiently by simulating the propagation, refraction, and breaking of water waves. The algorithm has been shown to outperform many state-of-the-art heuristic optimization algorithms on a large set of benchmark problems and practical engineering problems. This paper analyzes the convergence conditions of WWO theoretically. By simplifying the target problem and the algorithm's parameter settings, it proves that every individual in WWO converges in two special cases: (1) only the propagation operator is executed; (2) only the refraction operator is executed. These two cases correspond to two special regimes of fitness change. Numerical simulation experiments verify both convergence conditions.
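The propagation operator mentioned in this abstract can be sketched as follows; this is a minimal illustration under assumed conventions (each dimension is offset by a uniform random amount scaled by the wave's wavelength and the search-range length), not the paper's implementation. In the original operator, out-of-range components are typically reset to random positions; clipping to the bounds here is a simplification.

```python
import numpy as np

def propagate(x, wavelength, lb, ub, rng):
    """WWO-style propagation sketch: shift each dimension by a uniform
    random offset scaled by wavelength and the search-range length,
    then clip out-of-range components back into the bounds."""
    step = rng.uniform(-1.0, 1.0, size=x.shape) * wavelength * (ub - lb)
    return np.clip(x + step, lb, ub)

rng = np.random.default_rng(42)
lb, ub = np.full(3, -5.0), np.full(3, 5.0)
x_new = propagate(np.zeros(3), wavelength=0.5, lb=lb, ub=ub, rng=rng)
```

In WWO the wavelength itself is adapted to fitness (better waves propagate in smaller ranges), which this sketch leaves to the caller.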
7.
It is shown that a global optimization algorithm using ordinal comparison converges to the good enough designs with a convergence rate faster than any polynomial of the total computing budget. The result is established in the context of one-dependent regenerative processes under the exponential stability condition of the underlying systems. A systematic approach is introduced to check the exponential stability condition by the stochastic Lyapunov function criterion. Examples arising in queueing theory are employed to illustrate the developed criterion.
8.
Multivariant Optimization Algorithm and Its Convergence Analysis
This paper proposes a swarm intelligence optimization algorithm whose search individuals have a clear division of labor and cooperate with each other, and proves its convergence theoretically. Because the search individuals (searchers) are diversified into different roles, the algorithm is called the Multivariant Optimization Algorithm (MOA). In MOA, global searchers and local searchers cooperate to explore the solution space by efficiently recording and sharing information in a data table. In each iteration, the global searchers explore the whole solution space for potential solution regions; then groups of local searchers with different population sizes search, at different granularities, the historical potential regions of differing promise as well as newly discovered ones. The better solutions found by the searchers are stored, according to fixed rules, in a structure composed of queues and stacks, enabling efficient memorization and sharing of historical information. The candidate solutions held in this structure are updated over the iterations and gradually approach the optima, until multiple global optima and local suboptima of the problem are found. A theoretical analysis based on Markov processes shows that MOA converges to the global optimum with probability 1. To evaluate its convergence, MOA and five other commonly used optimization algorithms were tested on thirteen standard benchmark functions in two and ten dimensions. The experimental results show that MOA outperforms the compared algorithms in both convergence success rate and convergence accuracy.
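The shared memory of promising solutions described above can be sketched as a bounded, ordered archive; the class name and methods below are illustrative assumptions, and the paper's actual structure combines queues and stacks, which this simplification reduces to a single sorted, capacity-limited record list.

```python
class SolutionArchive:
    """Sketch of a shared memory for promising regions: a bounded list of
    (fitness, solution) records kept sorted best-first, so global and
    local searchers can read and update the best-known candidates."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []

    def offer(self, fitness, solution):
        """Insert a candidate; keep only the `capacity` best (lowest fitness)."""
        self.records.append((fitness, solution))
        self.records.sort(key=lambda r: r[0])
        del self.records[self.capacity:]

    def best(self):
        """Return the best record seen so far, or None if empty."""
        return self.records[0] if self.records else None
```

A bounded archive like this is what lets the candidate set improve monotonically across iterations while staying cheap to share.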
9.
Jie Hou, Xianlin Zeng, Gang Wang, Jian Sun, Jie Chen 《IEEE/CAA Journal of Automatica Sinica》2023,10(3):685-699
This paper considers distributed stochastic optimization, in which a number of agents cooperate to optimize a global objective function through local computations and information exchanges with neighbors over a network. Stochastic optimization problems are usually tackled by variants of projected stochastic gradient descent. However, projecting a point onto a feasible set is often expensive. The Frank-Wolfe (FW) method has well-documented merits in handling convex constraints, but existing stochastic F...
10.
11.
Dual Stochastic Dominance and Quantile Risk Measures
Włodzimierz Ogryczak & Andrzej Ruszczyński 《International Transactions in Operational Research》2002,9(5):661-680
Following the seminal work by Markowitz, the portfolio selection problem is usually modeled as a bicriteria optimization problem where a reasonable trade-off between expected rate of return and risk is sought. In the classical Markowitz model, the risk is measured with variance. Several other risk measures have been later considered, thus creating the entire family of mean-risk (Markowitz type) models. In this paper, we analyze mean-risk models using quantiles and tail characteristics of the distribution. Value at risk (VAR), defined as the maximum loss at a specified confidence level, is a widely used quantile risk measure. The corresponding second order quantile measure, called the worst conditional expectation or Tail VAR, represents the mean shortfall at a specified confidence level. It has more attractive theoretical properties and it leads to LP solvable portfolio optimization models in the case of discrete random variables, i.e., in the case of returns defined by their realizations under the specified scenarios. We show that the mean-risk models using the worst conditional expectation or some of its extensions are in harmony with the stochastic dominance order. For this purpose, we exploit duality relations of convex analysis to develop the quantile model of stochastic dominance for general distributions.
12.
A Particle Swarm Optimization Algorithm for the Stochastic Loader Problem
Building on the loader problem, this paper formulates the stochastic loader problem and a strategy for solving it. A particle swarm optimization algorithm is designed according to the characteristics of the problem, and numerical examples compare its solution accuracy and speed against a standard genetic algorithm.
13.
14.
Consideration was given to the problem of stochastic programming with the quantile (VaR) criterion. Conditions related to the characteristics of probability distributions under which the quantile function is convex in the strategy were presented. The relationship between convexity of the quantile function and convexity of the integral quantile (CVaR) criterion function was shown.
15.
A Convergence Analysis Framework for Evolutionary Algorithms on Multi-Objective Optimization
Because an evolutionary algorithm solving a multi-objective optimization problem yields a set of optimal solutions, the Pareto-optimal set, whereas existing convergence analyses of evolutionary algorithms apply only to the single solution of a single-objective problem, this paper uses finite Markov chains to establish a convergence analysis framework for evolutionary algorithms on multi-objective optimization problems, and gives a worked analysis example.
16.
17.
Leyuan Shi 《Discrete Event Dynamic Systems》2000,10(3):271-294
Stochastic discrete resource allocation problems are difficult to solve. In this paper, we propose a new algorithm designed specifically to tackle them. The algorithm combines the Nested Partitions method, Ordinal Optimization techniques, and an efficient simulation control technique. The resulting hybrid algorithm retains the global perspective of the Nested Partitions method and the fast convergence properties of Ordinal Optimization. Numerical results demonstrate that the hybrid algorithm can be effectively used for many large-scale stochastic discrete optimization problems.
18.
To address the premature convergence of particle swarm optimization (PSO) on complex optimization problems, a proportional controller is applied to regulate the swarm's aggregation degree. The swarm can then keep searching within any prescribed aggregation range for an arbitrary length of time, which better balances aggregation against search precision and so improves PSO's efficiency on complex problems. Simulation experiments on array-antenna pattern optimization under multi-null and low-sidelobe constraints show that the proposed algorithm achieves better optimization results on complex problems.
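One plausible reading of the proportional-control idea above is to feed an aggregation measurement back into the inertia weight; the measure, gain, and update rule below are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def aggregation(pos):
    """Aggregation degree of the swarm: mean distance to the centroid."""
    return float(np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1)))

def controlled_inertia(w, agg, agg_target, kp=0.1, w_min=0.2, w_max=1.2):
    """Proportional control sketch: when the swarm is tighter than the
    target aggregation, raise inertia to spread particles out; when it
    is looser, lower inertia to let them converge."""
    return float(np.clip(w + kp * (agg_target - agg), w_min, w_max))

def pso_step(pos, vel, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """One standard PSO velocity/position update with inertia weight w."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

Holding the aggregation near a setpoint is what lets the swarm keep exploring at a chosen spread instead of collapsing prematurely.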
19.
Research on the Mechanism and Convergence of Genetic Algorithms
Using a new quantitative analysis method based on decomposition of the solution space, this paper analyzes the population evolution process of genetic algorithms, elucidates the optimization mechanisms of the selection, crossover, and mutation operators, and gives the probability distribution of the offspring population over the solution space. Theoretically, it proves that genetic algorithms are capable of finding the global optimum, gives a necessary and sufficient condition for this capability, and thereby proves that the conclusion of the building-block hypothesis holds. It also establishes a Markov chain model for binary-coded finite populations, computes, under the crossover and mutation operators used for static optimization problems, the population's probability distribution over the solution space and the probability of convergence to the optimum, and discusses the causes of premature convergence and GA-deception.
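The three operators analyzed above (selection, crossover, mutation) can be sketched for a binary-coded population as follows; this is a generic textbook GA generation, with illustrative parameter choices, not the specific operators studied in the paper.

```python
import numpy as np

def ga_generation(pop, fitness, pm=0.01, rng=None):
    """One generation of a binary GA: roulette-wheel (fitness-proportional)
    selection, one-point crossover on adjacent pairs, bit-flip mutation.
    `pop` is an (n, L) 0/1 array; `fitness` must return positive values."""
    rng = rng or np.random.default_rng()
    n, L = pop.shape
    f = fitness(pop).astype(float)
    idx = rng.choice(n, size=n, p=f / f.sum())   # selection
    pop = pop[idx]                               # fancy indexing copies
    for i in range(0, n - 1, 2):                 # one-point crossover
        c = int(rng.integers(1, L))
        pop[[i, i + 1], c:] = pop[[i + 1, i], c:]
    flip = rng.random(pop.shape) < pm            # bit-flip mutation
    return np.where(flip, 1 - pop, pop)
```

Iterating this transition is exactly the finite Markov chain over populations whose stationary behavior the convergence analysis studies.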
20.
Stochastic gradient descent (SGD) is a simple and efficient method for solving large-scale optimization problems; recent research shows that for strongly convex problems its convergence rate can be improved effectively by the α-suffix averaging technique. However, SGD is a black-box method and has difficulty producing the structural effects expected of regularized optimization problems. On the other hand, COMID (composite objective mirror descent) is a sparse stochastic algorithm that preserves the L1-regularization structure, but for strongly convex problems its convergence rate is only O(log T/T). This paper focuses on the "L1+Hinge" optimization problem: it first introduces an L2 strong-convexity term to convert it into a strongly convex problem, and then combines the COMID algorithm with the α-suffix averaging technique to obtain the L1MD-α algorithm. L1MD-α is proven to have an O(1/T) convergence rate, and it achieves better sparsity than COMID. Experiments on large-scale databases verify the correctness of the theoretical analysis and the effectiveness of the proposed algorithm.
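The α-suffix averaging technique referred to in this abstract (output the mean of the last α-fraction of SGD iterates rather than the full average or the last iterate) can be sketched on a toy strongly convex quadratic; the step-size schedule, noise model, and function names here are illustrative assumptions, not the paper's L1MD-α algorithm.

```python
import numpy as np

def sgd_alpha_suffix(grad, w0, T, alpha=0.5, c=1.0):
    """SGD with step sizes 1/(c*t) for a c-strongly-convex objective,
    returning the alpha-suffix average (mean of the last alpha*T
    iterates) alongside the final iterate."""
    w = np.asarray(w0, dtype=float).copy()
    suffix_start = int((1.0 - alpha) * T)
    suffix_sum, n_suffix = np.zeros_like(w), 0
    for t in range(1, T + 1):
        w = w - (1.0 / (c * t)) * grad(w)
        if t > suffix_start:
            suffix_sum += w
            n_suffix += 1
    return suffix_sum / n_suffix, w

# demo on the 1-strongly-convex objective f(w) = 0.5 * ||w||^2 with noisy gradients
rng = np.random.default_rng(0)
noisy_grad = lambda w: w + 0.1 * rng.standard_normal(w.shape)
w_avg, w_last = sgd_alpha_suffix(noisy_grad, np.ones(5), T=2000)
```

Averaging only the suffix discards the poor early iterates that drag the full average down to O(log T/T), which is the intuition behind the improved O(1/T) rate.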