Similar Documents
17 similar documents found (search time: 203 ms)
1.
The alternating direction method of multipliers (ADMM) already has practical applications in machine learning. For large-scale data processing and convex optimization with non-smooth losses, this work introduces the mirror descent method into the original batch ADMM algorithm, obtaining an improved algorithm, and on that basis proposes a coordinate optimization algorithm for non-smooth-loss convex problems. The algorithm is simple to implement and computationally efficient. A detailed theoretical analysis proves its convergence, and under general convexity it attains the best convergence rate currently known. Comparisons with related algorithms show experimentally that it converges faster while preserving the sparsity of the solution.
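For orientation, a minimal scaled-form ADMM for a standard l1-regularized least squares problem is sketched below. This is a generic batch ADMM, not the mirror-descent variant the abstract proposes; the problem, the name `admm_lasso`, and all parameter choices are illustrative assumptions.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Scaled-form ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Atb = A.T @ b
    # Factor once: every x-update solves (A^T A + rho*I) x = A^T b + rho*(z - u)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        w = x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold z-update
        u = u + x - z                                        # scaled dual update
    return z
```

The z-update is the proximal operator of the l1 norm, which is what keeps the returned solution sparse.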

2.
Research on a Fast Algorithm for the TV Model in Image Inpainting
Many methods exist for solving the total variation (TV) model in image inpainting. For the convex optimization problem with a non-smooth term in the TV inpainting model, this paper proposes a fast solver based on the alternating direction method of multipliers (ADMM). ADMM usually solves the concrete subproblems in its iteration with the Gauss-Seidel method; by analyzing the properties of the TV inpainting model, the paper modifies ADMM so that the numerical subproblems can be solved with the fast Fourier transform, and proves the convergence of the modified algorithm. Experiments show that, compared with the Gauss-Seidel iteration, the new algorithm restores images both better and faster.
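The FFT trick alluded to above rests on the fact that, under periodic boundary conditions, the discrete Laplacian arising in an ADMM subproblem is diagonalized by the DFT, so the linear solve costs O(N log N). A sketch with a screened-Poisson stand-in for the actual TV subproblem; the function names and the choice of boundary condition are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def solve_screened_poisson(r, rho):
    """Solve (I - rho*Laplacian) x = r with periodic boundaries via FFT."""
    m, n = r.shape
    ky = 2 * np.pi * np.arange(m) / m
    kx = 2 * np.pi * np.arange(n) / n
    # Eigenvalues of the 5-point Laplacian in the DFT basis (all <= 0)
    lam = (2 * np.cos(ky)[:, None] - 2) + (2 * np.cos(kx)[None, :] - 2)
    x_hat = np.fft.fft2(r) / (1.0 - rho * lam)   # denominator >= 1, never zero
    return np.real(np.fft.ifft2(x_hat))

def laplacian(x):
    """5-point Laplacian with periodic wrap-around, for checking the solve."""
    return (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
            np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
```

Applying `laplacian` to the returned `x` and forming `x - rho*laplacian(x)` recovers `r` to machine precision, which is the point of the spectral solve.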

3.
杨旭  王锐  张涛 《控制理论与应用》2020,37(11):2291-2302
Against the background of growing interest in the joint optimization of electricity-gas-heat integrated systems (EGHS), a distributed optimal dispatch framework for such systems is proposed. First, a day-ahead dispatch model minimizing the system's energy supply cost is built that accounts for the dynamic characteristics of the gas and heat networks. Second, because the EGHS contains three subsystems (electricity, gas, and heat), its distributed computation is a 3-block optimization problem for which conventional distributed algorithms have difficulty reaching a convergent solution; an improved ADMM, the forced-equality ADMM (ADMM-FE), is therefore proposed. The framework is a two-layer coordinated convex distributed scheme: the outer layer is a penalty convex-concave procedure (PCCP) that convexifies the nonconvex gas flow equations into successively iterated second-order cone constraints, and the inner ADMM-FE layer solves the convexified model to obtain a convergent solution. Finally, case studies compare the proposed algorithm with conventional ADMM and centralized optimization, verifying the effectiveness of the model and of the optimization framework.

4.
许浩锋  凌青 《计算机应用》2015,35(6):1595-1599
To enable online learning over data collected by a distributed network, a distributed online learning optimization algorithm based on the alternating direction method of multipliers (ADMM) is proposed: the distributed online method of multipliers (DOM). First, a mathematical model is built for the requirement that each node update its local estimate from newly collected data while the estimates of all nodes in the network remain consistent, and DOM is designed to solve it. Second, a regret bound is defined for the distributed online learning problem to characterize online estimation performance; DOM is proven convergent when the local instantaneous loss functions are convex, and its convergence rate is given. Finally, numerical simulations show that DOM converges faster than the existing distributed online gradient descent (DOGD) and distributed autonomous online learning (DAOL) algorithms.
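The consensus structure described above can be illustrated with the standard global-consensus ADMM (in the style of Boyd et al.), here for the toy case where each node i holds a scalar observation a_i and the network must agree on a common estimate. This is a simplification for intuition only, not the paper's DOM algorithm.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    """Global-consensus ADMM for min sum_i 0.5*(x_i - a_i)^2  s.t. x_i = z.

    Each node keeps a local estimate x_i that is driven to agreement
    through the shared consensus variable z."""
    a = np.asarray(a, float)
    x = np.zeros(a.size); u = np.zeros(a.size); z = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local (per-node) x-updates
        z = np.mean(x + u)                      # consensus (averaging) step
        u = u + x - z                           # scaled dual updates
    return z

print(consensus_admm([1.0, 2.0, 6.0]))  # converges to the average, ~3.0
```

For this quadratic objective the consensus point is simply the mean of the observations, which makes the example easy to check.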

5.
Efficient retrieval is one of the core services of a digital library, and ranking is the central problem of efficient information retrieval. Given a collection of book lists, a ranking model generates a ranked list of the target items. When learning-to-rank algorithms are applied to information retrieval, the usual approach optimizes the ranking model by minimizing a pairwise loss. However, existing results show that minimizing pairwise loss does not necessarily yield the best performance for listwise algorithms, and combining online learning-to-rank with listwise algorithms is also difficult. A listwise-based online learning-to-rank algorithm is proposed that preserves the performance advantage of listwise methods while enabling online learning and thus reducing retrieval complexity. First, the combination of online learning-to-rank with listwise methods is worked out; then the ranking model is optimized by minimizing a loss defined over the predicted list and the ground-truth list; finally, an adaptive learning rate based on the online-listwise algorithm is proposed. Experiments show that the proposed algorithm achieves good retrieval performance and retrieval speed.

6.
Adam is an optimization framework widely used in training deep neural networks; it combines adaptive step sizes with the momentum technique, overcoming some inherent shortcomings of SGD. Yet even for convex problems, Adam has so far only been shown, in the online learning framework, to achieve the same regret bound as gradient descent, so the acceleration property of momentum is not reflected. For non-smooth convex optimization, by carefully choosing the momentum and step-size parameters, this work proves that a modified Adam attains the optimal individual convergence rate, showing that Adam enjoys both adaptivity and acceleration. Experiments on the hinge loss problem under an $l_1$-norm constraint verify the theoretical analysis and the algorithm's good sparsity-preserving behavior.
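For reference, the vanilla Adam update (Kingma & Ba) that the abstract builds on combines a first-moment (momentum) estimate m with a second-moment (adaptive step-size) estimate v. The sketch below is the standard method, not the modified variant analyzed in the paper, and the quadratic test problem is an illustrative assumption.

```python
import numpy as np

def adam(grad, x0, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Vanilla Adam: momentum via m, adaptive per-coordinate steps via v."""
    x = np.asarray(x0, float).copy()
    m = np.zeros_like(x); v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g          # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g      # second-moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias corrections
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize the smooth convex quadratic f(x) = ||x - c||^2
c = np.array([1.0, -2.0])
x_star = adam(lambda x: 2 * (x - c), np.zeros(2))
```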

7.
丛爽  丁娇  张坤 《控制理论与应用》2020,37(7):1667-1672
This paper recasts quantum state estimation with sparse interference as an optimization problem that, subject to the quantum-state constraints, solves two subproblems: minimizing the nuclear norm of the density matrix and the l1 norm of the sparse interference. To overcome the slow convergence of the iterative shrinkage-thresholding algorithm (ISTA), an acceleration operator is introduced into the iterative estimates of the two subproblems, compensating further with the difference between the current and previous values to speed up the iteration (FISTA). FISTA is then applied to quantum state estimation with sparse interference. In simulations estimating a 5-qubit state, FISTA is compared with four optimization algorithms: ISTA, the alternating direction method of multipliers (ADMM), the fixed-point ADMM (FP-ADMM), and the inexact ADMM (I-ADMM). The results show that FISTA converges faster and attains a smaller state estimation error.
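The acceleration operator mentioned above is, in the standard FISTA of Beck and Teboulle, an extrapolation that adds a multiple of the difference between successive iterates to the proximal-gradient step. A generic FISTA for an l1-regularized least squares problem, illustrating the mechanism only; the paper applies the same acceleration to the nuclear-norm/l1 quantum state estimation problem, and the name `fista` and test problem here are assumptions.

```python
import numpy as np

def fista(A, b, lam, iters=300):
    """FISTA: ISTA plus a Nesterov-type extrapolation (acceleration) step."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ y - b)               # gradient of the smooth term at y
        w = y - g / L
        x_new = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # prox step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)  # compensate with the
        x, t = x_new, t_new                          # difference of iterates
    return x
```

Dropping the last two lines inside the loop (i.e., setting y = x_new, t = 1) recovers plain ISTA, which converges at O(1/k) versus FISTA's O(1/k^2) on the objective.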

8.
To address the sensitivity of least squares support vector regression (LS-SVR) to outliers, a nonconvex Ramp loss function is proposed by placing an upper bound on the loss an outlier can incur. Since this loss makes the corresponding optimization problem nonconvex, the concave-convex procedure (CCCP) is used to convert the nonconvex problem into a convex one. A Newton algorithm is given for the solution and its computational complexity is analyzed. Tests on data sets show that, compared with LS-SVR, the algorithm is markedly more robust to outliers, generalizes better, and also has a clear advantage in running time.
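The Ramp loss can be written as a difference of two hinge functions, which is exactly the convex-minus-convex split that CCCP exploits (it linearizes the concave part at each outer iteration and solves the resulting convex problem). The sketch below uses the classification form on a margin z; the paper applies an analogous bounded loss to regression residuals, and the clipping level s is an illustrative choice.

```python
import numpy as np

def hinge(z, s=1.0):
    """Shifted hinge H_s(z) = max(0, s - z)."""
    return np.maximum(0.0, s - z)

def ramp(z, s=-1.0):
    """Ramp loss R_s(z) = H_1(z) - H_s(z): the hinge clipped at 1 - s,
    so any single outlier contributes at most a bounded loss."""
    return hinge(z, 1.0) - hinge(z, s)
```

With s = -1 the loss grows like the hinge on [-1, 1] but saturates at 2 for z < -1, so gross outliers cannot dominate the objective.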

9.
A Simulated Annealing Algorithm for Mixed-Integer Nonlinear Programming Problems
By handling the discrete variables appropriately, an efficient simulated annealing global optimization algorithm for unconstrained nonconvex NLP problems is extended to general nonconvex mixed-integer nonlinear programming (MINLP) problems. Numerical results show that this simulated annealing algorithm outperforms other methods in applicability, solution quality, and computational efficiency, making it an effective global optimization algorithm for general nonconvex MINLP problems.
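As a rough illustration of handling discrete variables inside simulated annealing: the discrete coordinate gets its own integer neighborhood move while the continuous one gets a Gaussian perturbation, and both share the usual Metropolis acceptance rule. The neighborhood moves, cooling schedule, and toy objective below are assumptions for illustration, not the paper's method.

```python
import math
import random

def sa_minlp(f, x0, n0, iters=5000, T0=2.0, seed=0):
    """Simulated annealing over one continuous (x) and one integer (n) variable."""
    rng = random.Random(seed)
    x, n = x0, n0
    fx = f(x, n)
    best = (fx, x, n)
    for k in range(iters):
        T = T0 * (1.0 - k / iters) + 1e-6       # linear cooling schedule
        x_new = x + rng.gauss(0.0, 0.5)          # continuous Gaussian move
        n_new = n + rng.choice((-1, 0, 1))       # integer neighborhood move
        f_new = f(x_new, n_new)
        # Accept downhill moves always, uphill moves with prob. exp(-df/T)
        if f_new < fx or rng.random() < math.exp((fx - f_new) / T):
            x, n, fx = x_new, n_new, f_new
            if fx < best[0]:
                best = (fx, x, n)
    return best

# Toy nonconvex MINLP: min (x-1.5)^2 * (1 + sin(3x)^2) + (n-3)^2, n integer
obj = lambda x, n: (x - 1.5) ** 2 * (1 + math.sin(3 * x) ** 2) + (n - 3) ** 2
f_best, x_best, n_best = sa_minlp(obj, 0.0, 0)
```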

10.
A Multimodal Optimization Algorithm Based on Local Abstract Convex Supporting Surfaces
Within the framework of a basic evolutionary algorithm and drawing on abstract convexity theory, a multimodal optimization algorithm based on local abstract convex supporting surfaces is proposed. First, a model transformation converts the original problem into a strictly increasing ray-convex relaxation under a unit-simplex constraint. Second, local abstract convex supporting surfaces are built from the neighborhood information of newly generated individuals, and local lower-bound knowledge is used to identify population modes dynamically, reducing replacement error and avoiding premature convergence. Finally, descent directions of the supporting surfaces are used to perform local enhancement within each mode. Numerical studies show that, on the great majority of the given test problems, the proposed algorithm outperforms the other algorithms considered in both accuracy and reliability.

11.
This paper studies distributed policy evaluation in multi-agent reinforcement learning. Under cooperative settings, each agent obtains only a local reward, while all agents share a common environmental state. To optimize the global return as the sum of the local returns, the agents exchange information with their neighbors through a communication network. The mean squared projected Bellman error minimization problem is reformulated as a constrained convex optimization problem with a consensus constraint; then, a distributed alternating direction method of multipliers (ADMM) algorithm is proposed to solve it. Furthermore, an inexact step for ADMM is used to achieve efficient computation at each iteration. The convergence of the proposed algorithm is established.

12.
The alternating direction method of multipliers (ADMM) has been successfully applied to solve structured convex optimization problems due to its superior practical performance. The convergence properties of the 2-block ADMM have been studied extensively in the literature. Specifically, it has been proven that the 2-block ADMM globally converges for any penalty parameter \(\gamma >0\). In this sense, the 2-block ADMM allows the parameter to be free, i.e., there is no need to restrict the value for the parameter when implementing this algorithm in order to ensure convergence. However, for the 3-block ADMM, Chen et al. (Math Program 155:57–79, 2016) recently constructed a counter-example showing that it can diverge if no further condition is imposed. The existing results on studying further sufficient conditions on guaranteeing the convergence of the 3-block ADMM usually require \(\gamma \) to be smaller than a certain bound, which is usually either difficult to compute or too small to make it a practical algorithm. In this paper, we show that the 3-block ADMM still globally converges with any penalty parameter \(\gamma >0\) if the third function \(f_3\) in the objective is smooth and strongly convex, and its condition number is in [1, 1.0798), besides some other mild conditions. This requirement covers an important class of problems to be called regularized least squares decomposition (RLSD) in this paper.

13.
张星航  郭艳  李宁  孙保明 《计算机科学》2017,44(10):99-102, 133
Applying classical compressed sensing theory to direction-of-arrival (DOA) estimation for antenna array signals suffers from the basis mismatch problem. Grid-less compressive sensing based on the alternating direction method of multipliers (ADMM) can resolve the mismatch, but it converges slowly. To address this, an ADMM with adaptive penalty (AP-ADMM) algorithm is proposed: the initial value of the penalty term is chosen adaptively according to the noise power of the input signal, and the penalty term of the objective function is adjusted adaptively as the iterations proceed. Compared with the conventional algorithm, AP-ADMM converges markedly faster while maintaining the convergence accuracy and the DOA recovery success probability. Simulation results verify the effectiveness of the new algorithm.

14.
Multi-block separable convex problems have recently received considerable attention. Optimization problems of this type minimize separable convex objective functions subject to linear constraints. The challenge in applying the classic alternating direction method of multipliers (ADMM) is that ADMM for problems of this type is not necessarily convergent. However, it is observed that ADMM applied to such problems numerically outperforms many of its variants with guaranteed theoretical convergence. The goal of this paper is to develop convergent and computationally efficient algorithms for solving multi-block separable convex problems. We first characterize the solutions of the optimization problems by proximity operators of the convex functions involved in their objective functions. We then design a class of two-step fixed-point iterative schemes for solving these problems based on the characterization. We further prove convergence of the iterative schemes and show that they have an \(O\left( \frac{1}{k}\right) \) convergence rate in the ergodic sense and in the sense of the partial primal-dual gap, where k denotes the iteration number. Moreover, we derive specific two-step fixed-point proximity algorithms (2SFPPA) from the proposed iterative schemes and establish their global convergence. Numerical experiments on the sparse MRI problem demonstrate the numerical efficiency of the proposed 2SFPPA.

15.
Multiplicative noise and blur removal problems have attracted much attention in recent years. In this paper, we propose an efficient minimization method to recover images from blurred and multiplicatively noisy inputs. The proposed algorithm uses the logarithm to transform the blurring and multiplicative noise problem into an additive image degradation problem, then employs the l1-norm for the data-fitting term and the total variation for the regularization term. The alternating direction method of multipliers (ADMM) is used to solve the corresponding minimization problem. To guarantee the convergence of the ADMM algorithm, we approximate the associated nonconvex domain of the minimization problem by a convex domain. Experimental results demonstrate that the proposed algorithm outperforms other existing methods in terms of speed and peak signal-to-noise ratio.
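The key transform is elementary: for an observation g = f · η with positive image f and noise η, taking logarithms gives log g = log f + log η, so the multiplicative degradation becomes additive and standard additive-noise machinery (l1 data fit plus TV regularization, solved by ADMM) applies. A short numerical check; the gamma noise model below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.uniform(0.5, 2.0, size=(4, 4))               # clean (positive) image
eta = rng.gamma(shape=10.0, scale=0.1, size=(4, 4))  # multiplicative noise, mean 1
g = f * eta                                          # observed degraded image

# In the log domain the degradation is exactly additive
assert np.allclose(np.log(g), np.log(f) + np.log(eta))
```

After solving in the log domain, the restored image is recovered by exponentiating the estimate.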

16.
Hu  Jia  Guo  Tiande  Zhao  Tong 《Applied Intelligence》2022,52(12):14233-14245

Inspired by the fact that certain randomization schemes incorporated into stochastic (proximal) gradient methods allow for a large reduction in computational time, we incorporate such a scheme into the stochastic alternating direction method of multipliers (ADMM), yielding a faster stochastic alternating direction method (FSADM) for solving a class of large-scale convex composite problems. In the numerical experiments, we observe a reduction in computational time compared to previous methods. More importantly, we unify the stochastic ADMM for solving general convex and strongly convex composite problems (i.e., the iterative scheme does not change when the problem goes from strongly convex to general convex). In addition, we establish the convergence rates of FSADM for these two cases.


17.

We consider the problem of minimizing a smooth nonconvex function over a structured convex feasible set, that is, one defined by two sets of constraints that are easy to treat when considered separately. To exploit the structure of the problem, we define an equivalent formulation by duplicating the variables and consider the augmented Lagrangian of this latter formulation. Following the idea of the Alternating Direction Method of Multipliers (ADMM), we propose an algorithm in which a two-block decomposition method is embedded within an augmented Lagrangian framework. The peculiarities of the proposed algorithm are the following: (1) the computation of the exact solution of a possibly nonconvex subproblem is not required; (2) the penalty parameter is iteratively updated once an approximate stationary point of the augmented Lagrangian is determined. Global convergence results are stated under mild assumptions and without requiring convexity of the objective function. Although the primary aim of the paper is theoretical, we perform numerical experiments on a nonconvex problem arising in machine learning, and the obtained results show the practical advantages of the proposed approach with respect to classical ADMM.
