Distributed gradient-based consensus optimization algorithm and convergence analysis

Cite this article: LIANG Shu, PENG Kai-xiang. Distributed gradient-based consensus optimization algorithm and convergence analysis[J]. Chinese Journal of Engineering, 2020, 42(4): 434-440. DOI: 10.13374/j.issn2095-9389.2019.09.05.005

Authors: LIANG Shu, PENG Kai-xiang

Affiliation: Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation, University of Science and Technology Beijing, Beijing 100083, China

Funding: National Natural Science Foundation of China (61903027, 61873024)

Abstract: This paper studies the consensus optimization problem with set constraints in multi-agent networks and proposes a distributed fixed-step-size algorithm based on a primal-dual gradient scheme. Parameters of the algorithm, including the step size, affect its convergence, so convergence must be analyzed first and suitable parameters then set according to the convergence conditions. For general fixed-step-size iteration schemes, this paper first proposes a convergence analysis paradigm based on a Lyapunov function, analogous to the Lyapunov stability analysis of general differential equations. Then, for the distributed gradient algorithm under consideration, a suitable Lyapunov function is constructed and the admissible range of algorithm parameters is obtained from the convergence conditions, avoiding lengthy and intricate arguments. The theory and method proposed here also provide a framework-level, systematic line of reasoning for other types of distributed algorithms.
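To make the setting concrete, the set-constrained consensus problem and the Lyapunov-based analysis paradigm summarized in the abstract can be written out as follows; the symbols $f_i$, $\Omega_i$, $z$, $T$, $V$, and $\phi$ are generic placeholders rather than the paper's own notation.

\[
\min_{x \in \mathbb{R}^m} \; \sum_{i=1}^{n} f_i(x) \quad \text{subject to} \quad x \in \bigcap_{i=1}^{n} \Omega_i ,
\]

where agent $i$ has access only to its local cost $f_i$ and local constraint set $\Omega_i$ and exchanges information with its neighbors. For a fixed-step-size iteration $z^{k+1} = T(z^k)$, the paradigm seeks functions $V \ge 0$ and $\phi \ge 0$, with $\phi$ vanishing only on the fixed points of $T$, such that

\[
V\bigl(z^{k+1}\bigr) - V\bigl(z^{k}\bigr) \le -\,\phi\bigl(z^{k}\bigr) \quad \text{for all } k ,
\]

so that, under standard additional conditions (for example, $V$ radially unbounded and $\phi$ continuous), the iterates remain bounded and converge to the fixed-point set of $T$, in direct analogy with Lyapunov stability analysis for differential equations.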
Keywords: distributed optimization; primal-dual; convergence analysis; Lyapunov function; consensus

Received: 2019-09-05
Distributed gradient-based consensus optimization algorithm and convergence analysis

Affiliation: Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, School of Automation, University of Science and Technology Beijing, Beijing 100083, China

Abstract: In a distributed optimization problem, a network of agents cooperatively solves a common problem; such problems have significant applications in many fields of science and engineering, including metallurgical engineering. Complex industrial processes with multi-level characteristics, varying working conditions, and long process flows give rise to numerous micro- and macro-level optimization and decision-making problems, such as product quality control, production planning, scheduling, and comprehensive energy deployment. The theory and methods of distributed optimization are key to promoting strategic decision-making for the integration of industrialization with the new generation of the industrial revolution. Their development enhances the ability to handle large-scale, complex big-data problems, which has important practical value and economic benefits. In this study, consensus optimization with set constraints in multi-agent networks was explored, and a distributed algorithm with a fixed step size was proposed on the basis of a primal-dual gradient scheme. Parameters such as the step size affect the convergence of the algorithm; therefore, convergence should be analyzed first, and appropriate parameters should then be set in accordance with the convergence conditions. Existing works have constructed different Lyapunov functions by exploiting the specific iteration scheme of each algorithm when analyzing its convergence. In contrast, a convergence analysis paradigm based on a Lyapunov function was proposed in this study for general fixed-step-size iteration schemes; the paradigm is similar to the Lyapunov stability analysis method for general differential equations. A suitable Lyapunov function was constructed for the distributed gradient algorithm, and the admissible parameter range was obtained in accordance with the convergence conditions. The proposed method avoids tedious and complicated analysis of algorithm convergence and parameter assignment. The theory and method presented in this study also provide a framework-level, systematic line of argument for other types of distributed algorithms and point to future directions for distributed optimization.
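As a complement to the abstract, below is a minimal runnable sketch in Python of a fixed-step-size distributed primal-dual projected-gradient iteration of the general kind described above. It is not the paper's algorithm: the quadratic local costs, the common box constraint, the ring communication graph, the Laplacian-based coupling, and the step size alpha are all illustrative assumptions, and the names (grad_f, project, Lap) are invented for this sketch.

import numpy as np

# Minimal sketch (not the paper's algorithm) of a fixed-step-size distributed
# primal-dual projected-gradient iteration for set-constrained consensus
# optimization: minimize sum_i f_i(x) subject to x lying in a common box [lo, hi].
# All problem data below (costs, constraint, graph, step size) are illustrative.

n = 5                                      # number of agents
a = np.array([1.0, 2.0, 0.5, 1.5, 1.0])    # local curvatures: f_i(x) = 0.5*a_i*(x - b_i)**2
b = np.array([1.0, 3.0, -1.0, 2.0, 0.0])   # local targets
lo, hi = 0.0, 2.0                          # common box constraint

# Ring communication graph: agent i exchanges information only with agents i-1 and i+1.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
Lap = np.diag(A.sum(axis=1)) - A           # graph Laplacian

def grad_f(x):
    """Componentwise local gradients a_i * (x_i - b_i)."""
    return a * (x - b)

def project(x):
    """Euclidean projection onto the local constraint sets (here a common box)."""
    return np.clip(x, lo, hi)

alpha = 0.1        # fixed step size; it must satisfy the convergence conditions
x = np.zeros(n)    # primal variable held by each agent
lam = np.zeros(n)  # dual variable enforcing consensus through the Laplacian

for k in range(2000):
    # Each agent uses only its own gradient and its neighbors' x and lam values.
    x_new = project(x - alpha * (grad_f(x) + Lap @ x + Lap @ lam))
    lam = lam + alpha * (Lap @ x)
    x = x_new

print("agent estimates:", np.round(x, 4))         # should be (nearly) identical across agents
print("consensus error:", np.linalg.norm(Lap @ x))

With these illustrative numbers, the agents' estimates settle close to a common value (roughly 1.58, the minimizer of the summed quadratic costs, which here lies inside the box), and the consensus error decays toward zero; changing the graph, the costs, or the constraint sets would require re-checking the step-size condition from the convergence analysis.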
Keywords: distributed optimization; primal-dual; convergence analysis; Lyapunov function; consensus