Similar Literature
17 similar records found.
1.
高乾坤 《微机发展》2014,(2):96-100
The alternating direction method of multipliers (ADMM) has already found several efficient practical applications in machine learning. To handle large-scale data and to solve convex optimization problems with non-smooth losses, this paper modifies the original ADMM and obtains an online optimization algorithm in which the loss function is linearized. Compared with the original algorithm, the online variant is simpler to implement and computationally more efficient. Through a detailed theoretical analysis, the paper proves the convergence of the new online algorithm and shows that, in the general convex setting, it attains the currently best regret bound and stochastic convergence rate. Finally, comparative experiments against popular online algorithms verify the efficiency and feasibility of the new method.
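To make the linearization idea concrete, here is a minimal Python sketch of one round of an online ADMM of this kind on a lasso-style split, min Σ_t f_t(x) + λ‖z‖₁ s.t. x = z; the closed-form x-update, the extra proximal term, and all parameter names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def online_linearized_admm_round(x, z, u, grad_t, lam, rho, eta):
    """One round of online ADMM on min sum_t f_t(x) + lam*||z||_1 s.t. x = z.
    The loss f_t is replaced by its linearization <grad_t, x>, so the
    x-update has a cheap closed form (illustrative sketch)."""
    # x-update: argmin_x <grad_t, x> + (rho/2)||x - z + u||^2 + (1/(2*eta))||x - x_prev||^2
    x_new = (rho * (z - u) + x / eta - grad_t) / (rho + 1.0 / eta)
    z_new = soft_threshold(x_new + u, lam / rho)  # prox step on the l1 term
    u_new = u + x_new - z_new                     # scaled dual update
    return x_new, z_new, u_new
```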

2.
A Fast Algorithm for the Total Variation (TV) Model in Image Inpainting   Cited by: 1 (self-citations: 0; external citations: 1)
Many methods exist for solving the total variation (TV) model in image inpainting. For the non-smooth convex optimization problem arising in the TV inpainting model, this paper proposes a fast solver based on the alternating direction method of multipliers (ADMM). ADMM typically solves the subproblems in its iteration with a Gauss-Seidel scheme; by analyzing the structure of the TV inpainting model, the paper modifies ADMM so that the numerical subproblems can instead be solved with the fast Fourier transform, and proves convergence of the resulting algorithm. Experimental results show that, compared with the Gauss-Seidel variant, the new algorithm achieves both better inpainting quality and higher speed.
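A sketch of why the FFT can replace Gauss-Seidel here: assuming periodic boundary conditions and a shift-invariant data term, the quadratic subproblem of the splitting reduces to solving (I + ρ DᵀD)u = rhs, where D stacks the forward differences; DᵀD is then a convolution, so the system is diagonal in the Fourier domain. This is an illustrative reconstruction, not the paper's exact discretization.

```python
import numpy as np

def solve_quadratic_via_fft(rhs, rho):
    """Solve (I + rho * D^T D) u = rhs with periodic boundaries, where D^T D
    is the discrete Laplacian. Its eigenvalues are known in closed form, so
    the solve is two FFTs and a pointwise division (illustrative sketch)."""
    m, n = rhs.shape
    fx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)  # horizontal eigenvalues
    fy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m)  # vertical eigenvalues
    denom = 1.0 + rho * (fx[None, :] + fy[:, None])
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```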

3.
杨旭  王锐  张涛 《控制理论与应用》2020,37(11):2291-2302
Against the background of growing interest in the joint optimization of electricity-gas-heat integrated systems (EGHS), this paper proposes a distributed optimal dispatch framework for such systems. First, a day-ahead dispatch model minimizing the system's energy-supply cost is built that accounts for the dynamic characteristics of both the gas and heat networks. Second, because the EGHS comprises three subsystems (electricity, gas, heat), its distributed formulation is a 3-block optimization problem for which conventional distributed algorithms have difficulty reaching a convergent solution; an improved algorithm based on the alternating direction method of multipliers (ADMM), the forced-equality ADMM (ADMM–FE), is therefore proposed. The overall framework is a two-layer coordinated convex distributed scheme: the outer layer applies a penalty convex-concave procedure (PCCP) that convexifies the non-convex gas-flow equations into successively iterated second-order cone constraints, and the inner ADMM–FE layer solves the convexified model to obtain a convergent solution. Finally, case studies compare the proposed algorithm with standard ADMM and with centralized optimization, validating the effectiveness of the model and of the algorithmic framework.

4.
韦洪旭  陇盛  陶蔚  陶卿 《计算机科学》2023,(11):220-226
Adaptive strategies and momentum are standard tools for improving optimization algorithms. Most current adaptive gradient methods use an AdaGrad-type strategy, which performs poorly on constrained problems; AdaGrad+ was proposed as a better fit for the constrained setting, but, like SGD, it does not attain the optimal individual convergence rate in the non-smooth convex case, and combining it with NAG momentum does not deliver the expected acceleration either. To address these issues, this paper combines the step-size strategy of AdaGrad+ with the accelerated convergence of Heavy-Ball momentum and proposes an adaptive momentum method based on AdaGrad+. By introducing a weighted momentum term, carefully choosing the time-varying parameters, and handling the adaptive matrix flexibly, the method is proven to attain the optimal individual convergence rate for general non-smooth convex problems. Finally, the theoretical analysis is verified by solving a typical hinge-loss optimization problem under an l∞-norm constraint, and training experiments on deep convolutional neural networks confirm that the method also performs well in practice.
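As a rough illustration of the combination, here is a hypothetical single step pairing an AdaGrad-style per-coordinate step size with a Heavy-Ball momentum term x − x_prev; the paper's weighted momentum term, its time-varying parameters, and the projection onto the l∞ ball are all omitted.

```python
import numpy as np

def adaptive_heavy_ball_step(x, x_prev, grad, G, eta=0.1, beta=0.9, eps=1e-8):
    """One illustrative step: AdaGrad-style accumulated squared gradients set
    a per-coordinate step size, and a Heavy-Ball term adds momentum."""
    G_new = G + grad ** 2                          # accumulate squared gradients
    step = eta / (np.sqrt(G_new) + eps)            # per-coordinate adaptive step
    x_new = x - step * grad + beta * (x - x_prev)  # gradient step + momentum
    return x_new, x, G_new                         # new iterate, new x_prev, new G
```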

5.
丛爽  丁娇  张坤 《控制理论与应用》2020,37(7):1667-1672
This paper reformulates quantum state estimation with sparse disturbance as an optimization problem, under the quantum-state constraints, consisting of two subproblems: minimizing the nuclear norm of the density matrix and the l1 norm of the sparse disturbance. To address the slow convergence of the iterative shrinkage-thresholding algorithm (ISTA), an acceleration operator is introduced into the iterative estimates of both subproblems, compensating with the difference between the current and previous iterates to speed up the iteration (FISTA). FISTA is then applied to quantum state estimation with sparse disturbance. In simulations of 5-qubit state estimation, FISTA is compared against four optimization algorithms: ISTA, the alternating direction method of multipliers (ADMM), an ADMM based on fixed-point equations (FP–ADMM), and an inexact ADMM (I–ADMM). The results show that FISTA converges faster and achieves a smaller state-estimation error.
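For reference, here is the generic FISTA template the abstract builds on: a forward-backward step at an extrapolated point, where the extrapolation is exactly the acceleration operator that compensates with the difference between successive iterates. A lasso instance stands in for the nuclear-norm/l1 quantum estimation problem, whose quantum-state constraints are omitted; this is a generic sketch, not the paper's solver.

```python
import numpy as np

def fista(grad_f, prox_g, x0, L, n_iter=100):
    """FISTA for min f(x) + g(x): f smooth with L-Lipschitz gradient,
    g handled through its proximal operator prox_g(v, step)."""
    x, y, t = x0, x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)         # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)      # momentum extrapolation
        x, t = x_new, t_new
    return x

# Example: lasso, min 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
L = np.linalg.norm(A, 2) ** 2                              # Lipschitz constant of grad f
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_hat = fista(grad_f, prox_g, np.zeros(50), L)
```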

6.
A new coordinate descent algorithm is proposed for non-smooth loss problems, solving each subproblem analytically via a sorted search. The paper analyzes the algorithm's time complexity and gives three practical techniques for speeding up convergence. Experiments show that the algorithm performs well on regularized hinge-loss problems and achieves the expected results.
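A skeleton of the cyclic loop just described; the one-dimensional minimizer, which the paper obtains analytically via a sorted search over the breakpoints of the piecewise-linear loss, is left as an oracle here.

```python
def coordinate_descent(argmin_1d, x, n_epochs=50):
    """Cyclic coordinate descent: each inner step exactly minimizes the
    objective along coordinate j with the others fixed. argmin_1d(x, j)
    returns that one-dimensional minimizer; for hinge-type losses it can be
    computed in closed form after sorting breakpoints (illustrative sketch)."""
    for _ in range(n_epochs):
        for j in range(len(x)):
            x[j] = argmin_1d(x, j)
    return x
```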

7.
Objective: compressed-sensing signal reconstruction amounts to finding sparse solutions of underdetermined linear systems. The three standard approaches are insufficiently robust: l0-norm minimization is NP-hard, l1-norm minimization can fail to have a solution, and lp-norm minimization is non-convex. This paper proposes a solution method based on smooth, regularized convex optimization. Method: to obtain the global optimum and guarantee robustness, a convex surrogate of the signal's l0 norm over the full space is first designed as the objective function; the n-variable optimization problem is then decomposed into n univariate problems; finally, a fast shrinkage algorithm is used in the solution process, achieving second-order convergence. Results: the algorithm outperforms the other three classes of algorithms on both synthetic and real data sets. In simulations, for signal dimensions above 150, its reconstruction time is about 50% of the other algorithms', showing its speed; on real data, the F-norm of the difference between the reconstructed and original signals is about 70% of the other algorithms', showing good robustness. Conclusion: the proposed algorithm is a convex optimization method with second-order convergence that reliably and quickly reaches the global optimum; it is suited to large data sets and has substantial potential applications in information retrieval, dictionary learning, and image compression.

8.
To address the sensitivity of least-squares support vector regression (LS-SVR) to outliers, this paper bounds the loss that an outlier can incur and proposes a non-convex Ramp loss function. Since this loss makes the resulting optimization problem non-convex, the concave-convex procedure (CCCP) is used to convert the non-convex problem into a convex one. A Newton algorithm is given for the solution and its computational complexity is analyzed. Tests on data sets show that, compared with LS-SVR, the algorithm is markedly more robust to outliers, generalizes better, and also has a clear advantage in running time.
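The CCCP step used here has a simple generic form: write the objective as a convex part plus a concave part (for the Ramp loss, a hinge-type convex term minus another convex term), then repeatedly linearize the concave part and solve the resulting convex problem. The sketch below is that generic template, with the convex subproblem solver (the paper's Newton step) left as an oracle.

```python
import numpy as np

def cccp(solve_convex, grad_concave, w0, max_iter=20, tol=1e-6):
    """Concave-convex procedure for J(w) = J_vex(w) + J_cav(w), J_cav concave.
    solve_convex(s) returns argmin_w J_vex(w) + <s, w> (illustrative sketch)."""
    w = w0
    for _ in range(max_iter):
        s = grad_concave(w)        # linearize the concave part at w
        w_new = solve_convex(s)    # convex subproblem (e.g., Newton's method)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```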

9.
With the rapid development of digital image processing, image restoration is widely used in medicine, the military, public security, and agro-meteorology. Drawing on the TVL1, ROF, STVL1 (Squares TVL1), and SHI models, this paper proposes a non-convex, non-smooth model for impulse-noise removal and solves it with an ADMM based on variable splitting. In general, gradient-based methods are unsuitable for non-smooth optimization; half-quadratic methods and iteratively reweighted least squares (IRLS) cannot be applied to non-smooth functions that are non-differentiable at zero; and graduated non-convexity (GNC) algorithms, which track the non-smooth, non-convex minimum along a sequence of approximate non-smooth energy functions, are computationally expensive. To handle the model's non-convex, non-smooth term, its subproblems are solved with a multi-stage convex relaxation method. Although this yields only a local optimum of the original non-convex problem, that local solution improves on the global solution of the initial convex relaxation; moreover, since each stage is a convex optimization problem, the method is computationally efficient. A genetic algorithm is used to select the model parameters. Extensive experiments on different images and noise types show that the model outperforms the other three in robustness, running time, ISNR, and PSNR, while better preserving local image information and visual quality.

10.
A new robust support vector machine based on a smooth Ramp loss function is proposed; it effectively suppresses the influence of outliers on generalization performance. CCCP is used to convert its non-convex objective into a continuous, twice-differentiable convex optimization problem. On this basis, a Newton-type algorithm for training the robust SVM is given and its convergence properties are analyzed. Experimental results show that the proposed machine is insensitive to outliers and achieves better generalization than the traditional SVMlight and Newton-Primal algorithms across a range of data sets.

11.
陇盛  陶蔚  张泽东  陶卿 《软件学报》2022,33(4):1231-1243
Compared with gradient descent, the adaptive gradient method (AdaGrad) uses the arithmetic mean of past squared gradients to retain geometric information about the data history, obtaining tighter convergence bounds on sparse data. On the other hand, Nesterov's accelerated gradient (NAG) method adds a momentum operation on top of gradient descent and, when solving smooth convex optimization problems, has…

12.
Proximal algorithms are very popular in signal processing, image reconstruction, variational inequalities and convex optimization due to their small iteration costs and applicability to non-smooth optimization problems. Various real-world machine learning problems have been solved within the non-smooth convex loss minimization framework, and a recent trend is to design new accelerated algorithms that solve such problems efficiently. In this paper, we propose a novel viscosity-based accelerated gradient algorithm (VAGA) that utilizes the viscosity approximation method of fixed-point theory for solving learning problems. We discuss the boundedness of the sequence generated by this iterative algorithm and prove its strong convergence under a few specific conditions. To test the practical performance of the algorithm on real-world problems, we apply it to the regularized multitask regression problem with sparsity-inducing regularizers. We present a detailed comparative analysis of our algorithm against several traditional proximal algorithms on three real benchmark multitask regression datasets, and we also apply it to the joint splice-site recognition problem of bioinformatics. The improved results demonstrate the efficacy of our algorithm over state-of-the-art proximal gradient descent algorithms. To the best of our knowledge, this is the first time a viscosity-based iterative algorithm has been applied to real-world regression and recognition problems.
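For context, here is the viscosity approximation template from fixed-point theory that underlies such algorithms: iterate a convex combination of a contraction h and a nonexpansive operator T (for learning problems, typically a proximal-gradient operator), with a weight that vanishes but is not summable. This is a generic sketch, not the exact VAGA update.

```python
def viscosity_iteration(T, h, x0, n_iter=200):
    """Viscosity approximation: x_{k+1} = a_k*h(x_k) + (1 - a_k)*T(x_k),
    with T nonexpansive, h a contraction, a_k -> 0 and sum(a_k) = inf
    (illustrative sketch)."""
    x = x0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / (k + 1)                  # diminishing, non-summable weight
        x = a_k * h(x) + (1.0 - a_k) * T(x)
    return x
```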

13.
许浩锋  凌青 《计算机应用》2015,35(6):1595-1599
To address online learning over data collected by a distributed network, this paper proposes a distributed online learning algorithm based on the alternating direction method of multipliers (ADMM): the distributed online method of multipliers (DOM). First, for the problem that each node must update its local estimate from newly collected data while keeping the estimates of all nodes in the network consistent, a mathematical model is established and the DOM algorithm is designed to solve it. Second, a regret bound is defined for the distributed online learning problem to characterize online estimation performance; DOM is proven to converge when the local instantaneous loss functions are convex, and its convergence rate is given. Finally, numerical simulations show that DOM converges faster than the existing distributed online gradient descent (DOGD) and distributed autonomous online learning (DAOL) algorithms.
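To illustrate the consensus structure, here is a hedged sketch of one round of global-consensus ADMM, in which each node's proximal oracle absorbs its local (online) loss; note that DOM itself is fully decentralized over a communication graph, whereas this sketch uses a central averaging step.

```python
import numpy as np

def consensus_admm_round(prox_losses, x, u, z, rho):
    """One round of global-consensus ADMM. prox_losses[i](v, rho) returns
    argmin_w f_i(w) + (rho/2)*||w - v||^2 for node i's current loss f_i
    (illustrative sketch)."""
    n = len(prox_losses)
    for i in range(n):
        x[i] = prox_losses[i](z - u[i], rho)               # local primal update
    z = np.mean([x[i] + u[i] for i in range(n)], axis=0)   # consensus update
    for i in range(n):
        u[i] = u[i] + x[i] - z                             # scaled dual update
    return x, u, z
```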

14.
The alternating direction method of multipliers (ADMM) has been successfully applied to structured convex optimization problems due to its superior practical performance. The convergence properties of the 2-block ADMM have been studied extensively: it is known to converge globally for any penalty parameter \(\gamma >0\), so the parameter is free in the sense that no restriction on its value is needed to ensure convergence. For the 3-block ADMM, however, Chen et al. (Math Program 155:57–79, 2016) constructed a counter-example showing that it can diverge if no further condition is imposed. Existing sufficient conditions guaranteeing convergence of the 3-block ADMM usually require \(\gamma \) to be smaller than a certain bound that is either difficult to compute or too small to yield a practical algorithm. In this paper, we show that the 3-block ADMM still converges globally for any penalty parameter \(\gamma >0\) if the third function \(f_3\) in the objective is smooth and strongly convex with condition number in [1, 1.0798), along with some other mild conditions. This requirement covers an important class of problems, called regularized least squares decomposition (RLSD) in this paper.
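For orientation, the 3-block iteration in question has the following Gauss-Seidel skeleton; the block minimizers of the augmented Lagrangian are left as oracles and all names are illustrative.

```python
def admm_3block(argmin_x1, argmin_x2, argmin_x3, residual, lam, gamma,
                x1, x2, x3, n_iter=100):
    """3-block ADMM skeleton for min f1(x1)+f2(x2)+f3(x3)
    s.t. A1 x1 + A2 x2 + A3 x3 = b. Each argmin_xi minimizes the augmented
    Lagrangian over its block with the others fixed; residual(x1, x2, x3)
    returns A1 x1 + A2 x2 + A3 x3 - b (illustrative sketch)."""
    for _ in range(n_iter):
        x1 = argmin_x1(x2, x3, lam, gamma)        # first block sweep
        x2 = argmin_x2(x1, x3, lam, gamma)        # second block sweep
        x3 = argmin_x3(x1, x2, lam, gamma)        # third block sweep
        lam = lam - gamma * residual(x1, x2, x3)  # multiplier update
    return x1, x2, x3, lam
```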

15.
This paper studies distributed policy evaluation in multi-agent reinforcement learning. In the cooperative setting, each agent observes only a local reward, while all agents share a common environmental state. To optimize the global return, defined as the sum of the local returns, the agents exchange information with their neighbors through a communication network. The mean squared projected Bellman error minimization problem is reformulated as a constrained convex optimization problem with a consensus constraint, and a distributed alternating direction method of multipliers (ADMM) algorithm is proposed to solve it. Furthermore, an inexact step is used within ADMM to keep the computation at each iteration efficient. The convergence of the proposed algorithm is established.

16.
Multi-block separable convex problems have recently received considerable attention. Optimization problems of this type minimize separable convex objective functions subject to linear constraints. The challenge in algorithmic development is that the classic alternating direction method of multipliers (ADMM) is not necessarily convergent for problems of this type; yet ADMM applied to such problems is observed to numerically outperform many of its variants that do carry convergence guarantees. The goal of this paper is to develop convergent and computationally efficient algorithms for solving multi-block separable convex problems. We first characterize the solutions of these problems via the proximity operators of the convex functions in their objectives. Based on this characterization, we design a class of two-step fixed-point iterative schemes for solving them. We then prove convergence of the schemes and show that they achieve an \(O\left( \frac{1}{k}\right) \) convergence rate in the ergodic sense and in the sense of the partial primal-dual gap, where k denotes the iteration number. Moreover, we derive specific two-step fixed-point proximity algorithms (2SFPPA) from the proposed schemes and establish their global convergence. Numerical experiments on the sparse MRI problem demonstrate the efficiency of the proposed 2SFPPA.

17.
In this paper, we present a multi-step memory gradient method with Goldstein line search for unconstrained optimization problems and prove its global convergence under some mild conditions. We also prove the linear convergence rate of the new method when the objective function is uniformly convex. Numerical results show that the new algorithm is suitable for solving large-scale optimization problems and is more stable than other similar methods in practical computation.
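For concreteness, here is a textbook sketch of a Goldstein line search: accept a step whose decrease lies between two secant bounds, shrinking or growing the step otherwise. This is a standard formulation, not necessarily the paper's exact rule.

```python
import numpy as np

def goldstein_line_search(f, grad, x, d, c=0.25, alpha=1.0, max_iter=50):
    """Find alpha with f(x) + (1-c)*alpha*g'd <= f(x + alpha*d)
    <= f(x) + c*alpha*g'd, 0 < c < 1/2, by interval bracketing
    (illustrative sketch); d must be a descent direction."""
    gd = grad(x) @ d
    assert gd < 0.0, "d must be a descent direction"
    lo, hi, fx = 0.0, np.inf, f(x)
    for _ in range(max_iter):
        fa = f(x + alpha * d)
        if fa > fx + c * alpha * gd:             # too little decrease: shrink
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif fa < fx + (1.0 - c) * alpha * gd:   # step too short: grow
            lo = alpha
            alpha = 2.0 * alpha if hi == np.inf else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha
```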
