Similar Documents
20 similar documents found.
1.
Embedding feature selection in nonlinear support vector machines (SVMs) leads to a challenging non-convex minimization problem, which can be prone to suboptimal solutions. This paper develops an effective algorithm to directly solve the embedded feature selection primal problem. We use a trust-region method, which is better suited for non-convex optimization compared to line-search methods, and guarantees convergence to a minimizer. We devise an alternating optimization approach to tackle the problem efficiently, breaking it down into a convex subproblem, corresponding to standard SVM optimization, and a non-convex subproblem for feature selection. Importantly, we show that a straightforward alternating optimization approach can be susceptible to saddle point solutions. We propose a novel technique, which shares an explicit margin variable to overcome saddle point convergence and improve solution quality. Experiment results show our method outperforms the state-of-the-art embedded SVM feature selection method, as well as other leading filter and wrapper approaches.

2.
The efficiency of the classic alternating direction method of multipliers (ADMM) has been exhibited in various applications of large-scale separable optimization, for both convex and nonconvex objective functions. While there is a large body of convergence analysis for the convex case, the nonconvex case remains an open problem and research on it is in its infancy. In this paper, we give a partial answer to this problem. Specifically, under the assumption that the associated function satisfies the Kurdyka–Łojasiewicz inequality, we prove that the iterative sequence generated by the alternating direction method converges to a critical point of the problem, provided that the penalty parameter is greater than 2L, where L is the Lipschitz constant of the gradient of one of the involved functions. Under some further conditions on the problem's data, we also analyse the convergence rate of the algorithm.
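The splitting structure that ADMM exploits can be sketched on a toy problem (illustrative only, not the paper's nonconvex setting): minimize (x − 3)² + |z| subject to x = z, whose minimizer is x* = 2.5.

```python
# Toy ADMM sketch on a scalar split problem:
#   minimize (x - 3)^2 + |z|   subject to  x = z.
# Optimality condition 2(x - 3) + 1 = 0 (for x > 0) gives x* = 2.5.

def soft_threshold(v, k):
    """Proximal operator of k * |.|."""
    if v > k:
        return v - k
    if v < -k:
        return v + k
    return 0.0

def admm(rho=1.0, iters=200):
    x = z = u = 0.0                       # primal, split variable, scaled dual
    for _ in range(iters):
        # x-update: argmin_x (x - 3)^2 + (rho/2)(x - z + u)^2  (closed form)
        x = (2 * 3 + rho * (z - u)) / (2 + rho)
        # z-update: argmin_z |z| + (rho/2)(x + u - z)^2  (soft-thresholding)
        z = soft_threshold(x + u, 1.0 / rho)
        # dual ascent on the consensus residual x - z
        u += x - z
    return x, z

x, z = admm()
print(round(x, 3), round(z, 3))  # both approach 2.5
```

The x-update is the smooth (here quadratic) subproblem and the z-update is the proximal step on the non-smooth term; the penalty parameter rho plays the same role as the one bounded by 2L in the abstract above.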

3.
We propose a polynomial approximation-based approach for solving a specific type of chance-constrained optimization problem that can be equivalently transformed into a convex programme. This type of chance-constrained optimization is in great demand in many applications, yet most solution techniques are problem-specific. Our key contribution is to provide an all-purpose solution approach through Monte Carlo sampling and to establish the linkage between the obtained optimal solution and the true optimal solution. Our approach performs well for two reasons. First, our method controls the approximation errors of both the function value and its gradient (or subgradient) at the same time; this is its primary advantage over the commonly used finite-difference method. Second, the approximation error is well bounded in our method and, with a properly chosen algorithm, the total computational complexity is polynomial. We also address issues associated with Monte Carlo sampling, such as discontinuity and nondifferentiability of the function. Thanks to fast-advancing computer hardware, our method should be increasingly appealing to businesses, including small businesses. We present numerical results showing that our method with Monte Carlo sampling yields high-quality, timely, and stable solutions.

4.
We describe the evolution of projection methods for solving convex feasibility problems to optimization methods when inconsistency arises, finally deriving from them, in a natural way, a general block method for convex constrained optimization. We present convergence results.

5.
We propose a convex optimization approach to the nonparametric regression estimation problem when the underlying regression function is Lipschitz continuous. This approach is based on the minimization of the sum of empirical squared errors, subject to the constraints implied by Lipschitz continuity. The resulting optimization problem has a convex objective function and linear constraints, and as a result is efficiently solvable. The estimated function computed by this technique is proven to converge to the underlying regression function uniformly and almost surely as the sample size grows to infinity, thus providing a very strong form of consistency. We also propose a convex optimization approach to the maximum likelihood estimation of unknown parameters in statistical models, where the parameters depend continuously on some observable input variables. For a number of classical distributional forms, the objective function in the underlying optimization problem is convex and the constraints are linear. These problems are, therefore, also efficiently solvable.
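The constraints implied by Lipschitz continuity are linear in the fitted values: |f(x_{i+1}) − f(x_i)| ≤ L·|x_{i+1} − x_i|. A cheap heuristic sketch on made-up 1-D data (the paper solves the exact QP; here a gradient step alternates with constraint clipping, which is not an exact projection):

```python
# Hypothetical data; a heuristic stand-in for the paper's exact QP.
# Fit values f_i to noisy samples y_i under the Lipschitz difference
# constraints |f_{i+1} - f_i| <= L * (x_{i+1} - x_i) for sorted x.

def lipschitz_fit(xs, ys, L, iters=500, lr=0.1):
    f = list(ys)
    for _ in range(iters):
        # gradient step on the empirical squared error sum (f_i - y_i)^2
        f = [fi - lr * 2 * (fi - yi) for fi, yi in zip(f, ys)]
        # enforce the difference constraints: forward pass, then backward
        for i in range(len(f) - 1):
            b = L * (xs[i + 1] - xs[i])
            f[i + 1] = min(max(f[i + 1], f[i] - b), f[i] + b)
        for i in range(len(f) - 1, 0, -1):
            b = L * (xs[i] - xs[i - 1])
            f[i - 1] = min(max(f[i - 1], f[i] - b), f[i] + b)
    return f

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.0, 1.5, 5.0, 4.0]      # hypothetical noisy samples
f = lipschitz_fit(xs, ys, L=1.0)    # returned fit satisfies the constraints
```

The backward pass, being the last operation of each iteration, guarantees that the returned fit satisfies every consecutive-difference constraint; a QP solver would additionally give the exact constrained least-squares minimizer.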

6.
In this article, we present a distributed resource and power allocation scheme for multiple-resource wireless cellular networks. The global optimization of the multi-cell, multi-link resource allocation problem is known to be NP-hard in the general case. We use Gibbs sampling based algorithms to perform a distributed optimization that would lead to the global optimum of the problem. The objective of this article is to show how to use the Gibbs sampling (GS) algorithm and its variant, the Metropolis-Hastings (MH) algorithm. We also propose an enhanced version of the MH algorithm, based on an a priori known target state distribution, which improves the convergence speed without increasing the complexity. Furthermore, we study different temperature cooling strategies and investigate their impact on network optimization and convergence speed. Simulation results show the effectiveness of the proposed methods.
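The core MH mechanism can be sketched on a toy discrete choice (hypothetical costs, not the article's network model): at low temperature the chain concentrates on the minimum-cost state.

```python
import math
import random

# Metropolis-Hastings random walk over a small discrete state space with a
# fixed low temperature, driving the choice toward the minimum-cost state.

def metropolis(costs, temp=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    state = 0
    visits = [0] * len(costs)
    for _ in range(steps):
        proposal = rng.randrange(len(costs))       # symmetric proposal
        delta = costs[proposal] - costs[state]
        # accept with probability min(1, exp(-delta / temp))
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            state = proposal
        visits[state] += 1
    return visits

costs = [5.0, 1.0, 3.0, 4.0]        # hypothetical per-state allocation costs
visits = metropolis(costs)
# the chain spends almost all of its time at argmin(costs) = state 1
```

A cooling strategy, as studied in the article, would lower `temp` over the iterations rather than holding it fixed.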

7.
In this paper, we consider a distributed convex optimization problem where the objective function is the average of the individual objective functions in a multi‐agent system. We propose a novel Newton Consensus method as a distributed algorithm to address the problem. This method uses efficient finite‐time average consensus as an information fusion tool to construct the exact global Newton direction. Under suitable assumptions, this strategy can be regarded as a distributed implementation of the classical Newton method and ultimately achieves a quadratic convergence rate. Numerical simulations and comparison experiments show the superiority of the algorithm in convergence speed and performance.
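A minimal sketch of the idea, with hypothetical local quadratics f_i(x) = a_i(x − b_i)²: once the averages of the local gradients and Hessians are available (the paper obtains them by finite-time average consensus), every agent takes the same global Newton step, which is exact in one step for quadratics.

```python
# Hypothetical local costs f_i(x) = a_i * (x - b_i)^2. The averaged gradient
# and Hessian define the global Newton direction each agent would follow.

def newton_consensus_step(x, a, b):
    n = len(a)
    avg_grad = sum(2 * ai * (x - bi) for ai, bi in zip(a, b)) / n
    avg_hess = sum(2 * ai for ai in a) / n
    return x - avg_grad / avg_hess     # global Newton step

a = [1.0, 2.0, 3.0]                    # local curvatures
b = [0.0, 3.0, 6.0]                    # local minimizers
x = newton_consensus_step(0.0, a, b)
# exact minimizer of sum_i a_i (x - b_i)^2 is sum(a_i b_i)/sum(a_i) = 4.0
```

For non-quadratic objectives the same step is repeated, and the quadratic convergence rate claimed in the abstract applies near the optimum.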

8.
We present a method, based on a variational problem, for solving a non-smooth unconstrained optimization problem. We assume that the objective function is Lipschitz continuous and regular. In this case the function of our variational problem is semismooth, and a quasi-Newton method may be used to solve the variational problem. A convergence theorem for our algorithm and its discrete version is also proved. Preliminary computational results show that the method performs quite well and can compete with other methods.

9.
In this paper, a recurrent neural network for both convex and nonconvex equality-constrained optimization problems is proposed, which makes use of a cost gradient projection onto the tangent space of the constraints. The proposed neural network constructs a generically nonfeasible trajectory, satisfying the constraints only as t → ∞. Local convergence results are given that do not assume convexity of the optimization problem to be solved. Global convergence results are established for convex optimization problems. An exponential convergence rate is shown to hold in both the convex and the nonconvex case. Numerical results indicate that the proposed method is efficient and accurate.
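The projection idea can be sketched on a made-up problem (a discretized version of the continuous-time dynamics, not the paper's network): minimize f(x, y) = (x − 2)² + (y − 2)² subject to x + y = 1, by Euler-integrating the gradient flow projected onto the tangent space of the constraint.

```python
# Hypothetical problem: min (x-2)^2 + (y-2)^2  s.t.  x + y = 1.
# The constraint normal is a = (1, 1); projecting the gradient onto the
# tangent space removes its component along a.

def projected_flow(x, y, step=0.05, iters=400):
    for _ in range(iters):
        gx, gy = 2 * (x - 2), 2 * (y - 2)     # gradient of f
        mean = (gx + gy) / 2                  # component along (1, 1)/sqrt(2)
        px, py = gx - mean, gy - mean         # tangential component
        x, y = x - step * px, y - step * py   # Euler step of the flow
    return x, y

x, y = projected_flow(0.0, 1.0)               # feasible start on x + y = 1
# the trajectory converges to the constrained minimizer (0.5, 0.5)
```

Starting from a feasible point the tangential step keeps x + y = 1 (up to rounding); the paper's network adds a correction so that generically infeasible trajectories also converge to the constraint set.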

10.
The alternating direction method of multipliers (ADMM) has seen practical applications in machine learning. For large-scale data and convex optimization problems with non-smooth losses, we introduce mirror descent into the original batch ADMM algorithm to obtain a new, improved algorithm, and on this basis propose a coordinate optimization algorithm for solving non-smooth-loss convex optimization problems. The algorithm is simple to implement and computationally efficient. Through a detailed theoretical analysis, we prove the convergence of the new algorithm; under general convexity assumptions it attains the currently best-known convergence rate. Finally, comparisons with related algorithms show that the proposed algorithm achieves faster convergence while preserving the sparsity of the solution.
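The mirror descent ingredient can be sketched in isolation (a toy linear objective on the simplex, not the paper's ADMM variant): with the negative-entropy mirror map the update is multiplicative ("exponentiated gradient"), which keeps iterates feasible without projection.

```python
import math

# Entropic mirror descent on the probability simplex for a hypothetical
# linear cost <c, w>; the multiplicative update preserves feasibility.

def mirror_descent(cost, steps=300, eta=0.5):
    n = len(cost)
    w = [1.0 / n] * n                      # start at the simplex center
    for _ in range(steps):
        # (sub)gradient of <cost, w> is just cost; exponentiated update
        w = [wi * math.exp(-eta * ci) for wi, ci in zip(w, cost)]
        s = sum(w)
        w = [wi / s for wi in w]           # renormalize onto the simplex
    return w

w = mirror_descent([3.0, 1.0, 2.0])
# the mass concentrates on the minimum-cost coordinate (index 1)
```

This geometry-adapted step is what replaces the Euclidean proximal step when mirror descent is folded into ADMM, as the abstract above describes.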

11.
A high-precision, fast trajectory optimization algorithm for the rocket return-and-landing problem   (Total citations: 2; self-citations: 0; citations by others: 2)
For the return-and-landing problem of the first stage of a vertical-takeoff, vertical-landing reusable launch vehicle, a high-precision, fast trajectory optimization algorithm is proposed. The algorithm organically combines convexification techniques with pseudospectral discretization, transforming the non-convex, nonlinear optimization problem into a convex one, and thereby fully exploits the fast solvability and guaranteed convergence of convex optimization together with the high discretization accuracy of pseudospectral methods. On the accuracy side, a high-fidelity optimization model is established and the influence of the engine ignition and terminal time design on trajectory optimality is analyzed; the continuous optimal control problem is discretized with the flip-Radau pseudospectral method, and, exploiting the pseudospectral method's distinctive discrete time-domain mapping, the ignition and terminal times are designed as special control variables, improving the accuracy and optimality of the results. On the speed side, to apply convex optimization to the non-convex problem, an improved sequential convexification algorithm based on a new trust-region update strategy is proposed, reducing the number of iterations and improving convergence. Numerical experiments verify the effectiveness of the algorithm. The high-accuracy optimization results and high computational speed give the algorithm the potential to develop into an online optimal guidance method.

12.
A convex optimization model predicts an output from an input by solving a convex optimization problem. The class of convex optimization models is large, and includes as special cases many well-known models like linear and logistic regression. We propose a heuristic for learning the parameters in a convex optimization model given a dataset of input-output pairs, using recently developed methods for differentiating the solution of a convex optimization problem with respect to its parameters. We describe three general classes of convex optimization models, maximum a posteriori (MAP) models, utility maximization models, and agent models, and present a numerical experiment for each.

13.
In this article, the denoising of smooth (H^1-regular) images is considered. To reach this objective, we introduce a simple and highly efficient over-relaxation technique for solving the convex, non-smooth optimization problems resulting from the denoising formulation. We describe the algorithm, discuss its convergence and present the results of numerical experiments, which validate the methods under consideration with respect to both efficiency and denoising capability. Several issues concerning the convergence of an Uzawa algorithm for the solution of the same problem are also discussed.

14.
The goal of distributed convex optimization is to minimize the sum of the local agents' cost functions in a distributed manner, yet the step sizes of existing distributed algorithms depend on global information such as the number of agents or the adjacency matrix, which contradicts the distributed spirit of such algorithms. To address this problem, a fully distributed convex optimization algorithm (FDCOA) over unbalanced directed networks is proposed. Based on multi-agent consensus theory and gradient tracking, a non-negative surplus iteration strategy is designed so that the admissible step-size range of FDCOA depends only on each agent's local information, enabling fully distributed step-size selection. The convergence of FDCOA is further analyzed for both fixed strongly connected and time-varying strongly connected networks. Simulation results show that the proposed distributed step-size selection method is effective for FDCOA on distributed convex optimization problems over unbalanced directed networks.

15.
1 Introduction  Optimization problems arise in a broad variety of scientific and engineering applications. For many practical engineering applications, real-time solutions of optimization problems are required. One possible and very pr…

16.
We investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time. We consider minimization of an expected value cost subject to probabilistic constraints. We study the convexity of a finite-horizon optimization problem in the case where the control policies are affine functions of the disturbance input. We propose an expectation-based method for the convex approximation of probabilistic constraints with polytopic constraint function, and a Linear Matrix Inequality (LMI) method for the convex approximation of probabilistic constraints with ellipsoidal constraint function. Finally, we introduce a class of convex expectation-type constraints that provide tractable approximations of the so-called integrated chance constraints. Performance of these methods and of existing convex approximation methods for probabilistic constraints is compared on a numerical example.

17.
In this paper, a method is proposed to transform a discrete, dynamic, nonconvex, large-scale optimization problem into a convex one by constructing penalty terms from part of the constraints. A hierarchical optimization algorithm is studied. The convergence of the algorithm and its application to real-world problems are also discussed.

18.
For a class of temporally coupled, discrete, dynamic, nonconvex, large-scale optimization problems, this paper proposes a method that treats part of the constraints as penalty terms, thereby transforming the nonconvex optimization problem into a convex one. A hierarchical optimization algorithm is studied, and the convergence of the algorithm and its practical applications are discussed.

19.
We propose a new technique for the minimization of convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered "non-serious", a cutting plane is added and a new search direction is computed. This procedure is repeated until a "serious" step is obtained; when this happens, the search direction is a feasible descent direction of the equivalent constrained problem. To compute the search directions we employ the same formulation as in FDIPA, the Feasible Directions Interior Point Algorithm for constrained optimization. We prove global convergence of the present method. A set of numerical tests is described. The technique was also successfully applied to the topology optimization of robust trusses. Our results are comparable to those obtained with other well-established methods.
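The cutting-plane machinery can be sketched in its classic Kelley form on a made-up 1-D problem (a simplified relative of the paper's method, without the FDIPA direction computation): each cut is a linearization of f at the current point, and the next iterate minimizes the piecewise-linear model, here over a fine grid as a cheap stand-in for the linear program.

```python
# Kelley-style cutting planes on the hypothetical problem
#   minimize f(x) = |x - 1|  over [-4, 6].

def f(x):
    return abs(x - 1.0)

def subgrad(x):
    return 1.0 if x >= 1.0 else -1.0

def kelley(lo=-4.0, hi=6.0, iters=12):
    cuts = []                               # cuts (x_i, f(x_i), g_i)
    x = lo
    for _ in range(iters):
        cuts.append((x, f(x), subgrad(x)))  # add linearization at x
        grid = [lo + (hi - lo) * k / 2000 for k in range(2001)]
        # next iterate: minimizer of the model max_i f(x_i) + g_i (t - x_i)
        x = min(grid, key=lambda t: max(fx + g * (t - xi)
                                        for xi, fx, g in cuts))
    return x

x = kelley()
# the iterates reach the true minimizer x* = 1
```

In the paper, the step-length test ("serious" vs "non-serious") decides whether to move or merely add a cut, and the model is solved as an LP rather than by grid search.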

20.
This paper studies the problem of minimizing a sum of convex functions that all share a common global variable, where each function is known only by one specific agent in the network. The underlying network topology is modeled as a time‐varying sequence of directed graphs, each of which is endowed with a non‐doubly stochastic matrix. We present a distributed method that employs gradient‐free oracles and push‐sum algorithms to solve this optimization problem. We establish convergence by showing that the method converges to an approximate solution at an expected sublinear rate in the iteration counter T. A numerical example is also given to illustrate the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.
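The push-sum ingredient can be sketched on a hypothetical 3-node directed ring (the averaging core only, not the paper's gradient-free method): each node keeps a value x_i and a weight y_i and pushes shares of both along its out-edges using a column-stochastic matrix; the ratios x_i / y_i all converge to the average of the initial values even though the matrix is not doubly stochastic.

```python
# Push-sum averaging on a directed ring: node i sends half of its (x, y)
# mass to itself and half to node (i+1) % n. Columns sum to 1, so the
# matrix is column-stochastic but not doubly stochastic.

def push_sum(x0, steps=100):
    n = len(x0)
    x, y = list(x0), [1.0] * n
    for _ in range(steps):
        nx, ny = [0.0] * n, [0.0] * n
        for i in range(n):
            for j in (i, (i + 1) % n):
                nx[j] += x[i] / 2
                ny[j] += y[i] / 2
        x, y = nx, ny
    return [xi / yi for xi, yi in zip(x, y)]

ratios = push_sum([3.0, 6.0, 9.0])
# every ratio approaches the network average (3 + 6 + 9) / 3 = 6.0
```

In the paper this averaging step is interleaved with gradient-free (two-point) descent updates, which is what yields the expected sublinear rate in T.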


Copyright©北京勤云科技发展有限公司  京ICP备09084417号