Similar Documents
20 similar documents found
1.
We investigate constrained optimal control problems for linear stochastic dynamical systems evolving in discrete time. We consider minimization of an expected value cost subject to probabilistic constraints. We study the convexity of a finite-horizon optimization problem in the case where the control policies are affine functions of the disturbance input. We propose an expectation-based method for the convex approximation of probabilistic constraints with polytopic constraint function, and a Linear Matrix Inequality (LMI) method for the convex approximation of probabilistic constraints with ellipsoidal constraint function. Finally, we introduce a class of convex expectation-type constraints that provide tractable approximations of the so-called integrated chance constraints. Performance of these methods and of existing convex approximation methods for probabilistic constraints is compared on a numerical example.
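By way of illustration, a sampled expectation-type surrogate of the kind described (bounding E[(a(w)^T x - b)_+] rather than the violation probability itself) can be written in a few lines of CVXPY. This is a hedged toy sketch, not the paper's formulation: the dimensions, disturbance samples, placeholder cost, and budget gamma below are all assumptions.

```python
# Minimal sketch: a sampled expectation-type surrogate for a
# probabilistic constraint, in the spirit of integrated chance
# constraints. All problem data here are illustrative assumptions.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 5, 200                      # decision dimension, disturbance samples
A = rng.normal(size=(m, n))        # sampled constraint rows a(w_i)^T
b = 1.0
gamma = 0.05                       # expectation budget

x = cp.Variable(n)
violation = cp.pos(A @ x - b)      # elementwise (a_i^T x - b)_+
objective = cp.Minimize(cp.sum_squares(x - np.ones(n)))  # placeholder cost
constraints = [cp.sum(violation) / m <= gamma]
cp.Problem(objective, constraints).solve()
print(x.value)
```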

2.
We describe general heuristics to approximately solve a wide variety of problems with convex objective and decision variables from a non-convex set. The heuristics, which employ convex relaxations, convex restrictions, local neighbour search methods, and the alternating direction method of multipliers, require the solution of a modest number of convex problems, and are meant to apply to general problems, without much tuning. We describe an implementation of these methods in a package called NCVX, as an extension of CVXPY, a Python package for formulating and solving convex optimization problems. We study several examples of well known non-convex problems, and show that our general purpose heuristics are effective in finding approximate solutions to a wide variety of problems.
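The core relax-and-round pattern such heuristics build on can be sketched directly in CVXPY; note this is not the NCVX API, just an illustrative Boolean least-squares toy problem with assumed data: solve the convex relaxation, then project the solution back onto the non-convex set.

```python
# Generic relax-and-round heuristic (NOT the NCVX API): relax
# x in {0,1}^n to the box [0,1]^n, solve, then round.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
m, n = 30, 10
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

x = cp.Variable(n)
relaxed = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                     [x >= 0, x <= 1])      # convex relaxation of x in {0,1}^n
relaxed.solve()
x_bool = np.round(x.value)                   # restriction: project onto {0,1}^n
print("relaxation:", relaxed.value,
      "rounded:", np.sum((A @ x_bool - b) ** 2))
```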

3.
Control applications of nonlinear convex programming
Since 1984 there has been a concentrated effort to develop efficient interior-point methods for linear programming (LP). In the last few years researchers have begun to appreciate a very important property of these interior-point methods (beyond their efficiency for LP): they extend gracefully to nonlinear convex optimization problems. New interior-point algorithms for problem classes such as semidefinite programming (SDP) or second-order cone programming (SOCP) are now approaching the extreme efficiency of modern linear programming codes. In this paper we discuss three examples of areas of control where our ability to efficiently solve nonlinear convex optimization problems opens up new applications. In the first example we show how SOCP can be used to solve robust open-loop optimal control problems. In the second example, we show how SOCP can be used to simultaneously design the set-point and feedback gains for a controller, and compare this method with the more standard approach. Our final application concerns analysis and synthesis via linear matrix inequalities and SDP.
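As a minimal illustration of the first example's ingredient, a linear constraint made robust against ellipsoidal uncertainty in its coefficient vector tightens to a second-order cone constraint. The sketch below uses CVXPY with assumed toy data.

```python
# Robust linear constraint: a^T x <= b for all a = a_bar + rho*u,
# ||u||_2 <= 1, tightens to a_bar^T x + rho*||x||_2 <= b (an SOC
# constraint). Data are illustrative.
import numpy as np
import cvxpy as cp

n = 4
a_bar = np.ones(n)
rho, b = 0.3, 2.0
c = -np.ones(n)                    # maximize 1^T x

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(c @ x),
                  [a_bar @ x + rho * cp.norm(x, 2) <= b, x >= 0])
prob.solve()
print(x.value)
```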

4.
Minimum achievable output variance (MAOV) is a common benchmark for control performance assessment. Finding the MAOV of proportional–integral–derivative (PID) control systems is computationally expensive due to the inherent non-convexity of the associated optimization problem. We present in this paper a new framework for computing the MAOV of PID control systems. The problem of estimating the MAOV of a PID control system is newly formulated as a convex program with an additional non-convex constraint. The non-convex constraint is linearized and handled by a penalty approach. Based on this, a low-complexity algorithm, which relies on iterative convex programming, is developed to solve the MAOV problem. The new algorithm is proved to be convergent. We show via numerical examples that the new approach yields close-to-optimal solutions that are better than (or as good as) the solutions generated by existing methods.
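The linearize-and-penalize loop such algorithms are built on can be sketched on a toy problem: a convex objective with the non-convex constraint ||x||^2 = 1, linearized at the current iterate and with the slack penalized. This is an illustrative sketch, not the paper's algorithm; the penalty weight and data are assumptions.

```python
# Iterative convex programming for one non-convex equality constraint:
# linearize ||x||^2 = 1 at x_k, penalize the slack, re-solve.
import numpy as np
import cvxpy as cp

x0 = np.array([2.0, 1.0])
xk = np.array([1.0, 0.0])          # initial guess on the constraint
tau = 5.0                          # penalty weight (assumed)

for _ in range(20):
    x = cp.Variable(2)
    s = cp.Variable()              # slack on the linearized constraint
    lin = xk @ xk + 2 * xk @ (x - xk)   # linearization of ||x||^2 at xk
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - x0) + tau * cp.abs(s)),
                      [lin - 1 == s])
    prob.solve()
    xk = x.value
print(xk, np.linalg.norm(xk))      # approaches x0/||x0|| on the unit circle
```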

5.
We propose a new family of subgradient- and gradient-based methods that converges with optimal complexity for convex optimization problems whose feasible region is simple enough. This includes cases where the objective function is non-smooth, smooth, has composite/saddle structure, or is given by an inexact oracle model. We unify the construction of the subproblems that must be solved at each iteration of these methods, which permits us to analyse their convergence in a unified way, in contrast to previous results that required a different approach for each method/algorithm. Our contribution relies on two well-known methods in non-smooth convex optimization: the mirror-descent method (MDM) of Nemirovski and Yudin and the dual-averaging method of Nesterov. Our family of methods therefore includes these and many other methods as particular cases. For instance, the proposed family of classical gradient methods and its accelerations generalize Devolder et al.'s and Nesterov's primal/dual gradient methods, and Tseng's accelerated proximal gradient methods. Parts of our family also arise as special cases of other universal methods. As an additional contribution, the novel extended MDM removes the compactness assumption on the feasible region and the need to fix the total number of iterations in advance, both of which the original MDM requires in order to attain the optimal complexity.
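One concrete member of this family is classical mirror descent with the entropy mirror map on the probability simplex. The NumPy sketch below is a minimal illustration with an assumed quadratic objective and step size, not the paper's general scheme.

```python
# Mirror descent with the entropy mirror map on the simplex
# (exponentiated-gradient update). Test function and eta are assumed.
import numpy as np

def mirror_descent(grad, x0, steps=200, eta=0.1):
    x = x0.copy()
    for _ in range(steps):
        x = x * np.exp(-eta * grad(x))   # multiplicative (entropy) update
        x /= x.sum()                     # re-normalize onto the simplex
    return x

A = np.array([[2.0, 0.5], [0.5, 1.0]])
f_grad = lambda x: A @ x                 # gradient of (1/2) x^T A x
x_star = mirror_descent(f_grad, np.ones(2) / 2)
print(x_star)                            # approaches [0.25, 0.75]
```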

6.
Building on the maximum-margin linear classifier and the idea of closed convex hull shrinking, we use a closed-convex-hull-shrinking technique to transform linearly non-separable binary classification problems into linearly separable ones. Extending this idea to multi-class classification, we propose a family of multi-class algorithms based on closed convex hull shrinking. The method has a clear geometric interpretation, overcomes to some extent the overly complex objective functions of earlier multi-class methods, and is generalized to nonlinear classification problems via the kernel trick.
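The nearest-point problem between two shrunken convex hulls can be written as a small convex program; the sketch below does this in CVXPY for toy 2-D data, with the shrink factor mu as an assumed parameter. The resulting difference vector serves as the normal of a separating hyperplane.

```python
# Nearest points of two shrunken (reduced) convex hulls: capping the
# hull weights at mu shrinks each hull. Toy data and mu are assumed.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
Xp = rng.normal(loc=+1.0, size=(20, 2))   # class +1 points
Xn = rng.normal(loc=-1.0, size=(20, 2))   # class -1 points
mu = 0.3                                  # hull shrink factor

u = cp.Variable(20, nonneg=True)
v = cp.Variable(20, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.norm(Xp.T @ u - Xn.T @ v)),
                  [cp.sum(u) == 1, cp.sum(v) == 1, u <= mu, v <= mu])
prob.solve()
w = Xp.T @ u.value - Xn.T @ v.value       # separating hyperplane normal
print(w)
```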

7.
We show that the problem of designing a quantum information processing error correcting procedure can be cast as a bi-convex optimization problem, iterating between encoding and recovery, each being a semidefinite program. For a given encoding operator the problem is convex in the recovery operator. For a given method of recovery, the problem is convex in the encoding scheme. This allows us to derive new codes that are locally optimal. We present examples of such codes that can handle errors which are too strong for codes derived by analogy to classical error correction techniques.

8.
We propose a new technique for the minimization of convex functions that are not necessarily smooth. Our approach employs an equivalent constrained optimization problem and approximate linear programs obtained with cutting planes. At each iteration a search direction and a step length are computed. If the step length is considered “non-serious”, a cutting plane is added and a new search direction is computed. This procedure is repeated until a “serious” step is obtained, at which point the search direction is a feasible descent direction of the equivalent constrained problem. To compute the search directions we employ the same formulation as FDIPA, the Feasible Directions Interior Point Algorithm for constrained optimization. We prove global convergence of the present method and describe a set of numerical tests. The technique was also successfully applied to the topology optimization of robust trusses; our results are comparable to those obtained with other well-established methods.
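For orientation, the bare cutting-plane machinery (without the feasible-direction search of the method above) reduces to Kelley's method: minimize an epigraph variable over the accumulated cuts. The one-dimensional sketch below uses CVXPY with an assumed nonsmooth test function.

```python
# Kelley's cutting-plane method for a nonsmooth convex function on a
# box: each iteration adds the cut t >= f(x_k) + g_k (x - x_k).
import numpy as np
import cvxpy as cp

f = lambda x: abs(x - 0.3) + 0.5 * abs(x + 0.2)          # nonsmooth convex
g = lambda x: np.sign(x - 0.3) + 0.5 * np.sign(x + 0.2)  # a subgradient

cuts, xk = [], 1.0
for _ in range(30):
    cuts.append((f(xk), g(xk), xk))
    x, t = cp.Variable(), cp.Variable()
    model = [t >= fx + gx * (x - xc) for fx, gx, xc in cuts]
    cp.Problem(cp.Minimize(t), model + [x >= -2, x <= 2]).solve()
    xk = float(x.value)
print(xk, f(xk))                     # minimizer is x = 0.3, f = 0.25
```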

9.
The aim of this paper is to design an efficient multigrid method for constrained convex optimization problems arising from the discretization of underlying infinite-dimensional problems. Because this approach is problem dependent, we consider only bound constraints with (possibly) a single equality constraint. As we target large-scale problems, we want to avoid computing second derivatives of the objective function, which excludes Newton-like methods. We propose a smoothing operator that uses only first-order information and study the computational efficiency of the resulting method.

10.
Discretization of continuous input functions into piecewise constant or piecewise linear approximations is needed in many mathematical modeling problems. It has been shown that choosing the length of the piecewise segments adaptively based on data samples improves the accuracy of subsequent processing such as classification. Traditional approaches are often tied to a particular classification model, which results in local greedy optimization of a criterion function. This paper proposes a technique for learning the discretization parameters along with the parameters of a decision function via convex optimization of the true objective. The general formulation is applicable to a wide range of learning problems. Empirical evaluation demonstrates that the proposed convex algorithms yield models with fewer parameters and comparable or better accuracy than existing methods.

11.
We propose a general recurrent neural-network (RNN) model for nonlinear optimization over a nonempty compact convex subset, which includes bound sets and spheroids as special cases. It is shown that the compact convex subset is a positively invariant and attractive set of the RNN system and that all network trajectories starting from the compact convex subset converge to the equilibrium set of the RNN system. This equilibrium set coincides with the optimum set of the minimization problem over the compact convex subset when the objective function is convex. The analysis of these qualitative properties of the RNN model is conducted by employing the properties of the projection operator of Euclidean space onto a general nonempty closed convex subset. A numerical simulation example is also given to illustrate the qualitative properties of the proposed general RNN model for solving optimization problems over various compact convex subsets.
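A discrete-time caricature of such projection dynamics is the iteration x_{k+1} = P_C(x_k - eta * grad f(x_k)); the NumPy sketch below takes C to be a Euclidean ball, one of the compact convex sets mentioned, with assumed step size and objective.

```python
# Projected gradient dynamics onto a Euclidean ball: the trajectory
# stays in the ball and converges to the constrained minimizer.
import numpy as np

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

grad = lambda x: 2 * (x - np.array([2.0, 0.0]))   # f(x) = ||x - (2,0)||^2
x = np.array([0.0, 0.5])
for _ in range(100):
    x = project_ball(x - 0.1 * grad(x))
print(x)   # converges to the boundary point (1, 0)
```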

12.
The orthogonal packing of rectangular items in an arbitrary convex region is considered in this work. The packing problem is modeled as the problem of deciding the feasibility or infeasibility of a set of nonlinear equality and inequality constraints. A procedure based on nonlinear programming is introduced, and numerical experiments showing that the new procedure is reliable are presented.

Scope and purpose

We address the problem of packing orthogonal rectangles within an arbitrary convex region. We aim to show that smooth nonlinear programming models are a reliable alternative for packing problems and that well-known general-purpose methods based on continuous optimization can be used to solve the models. Numerical experiments illustrate the capabilities and limitations of the approach.

13.
Multivariate adaptive regression splines (MARS) provide a flexible statistical modeling method that employs forward and backward search algorithms to identify the combination of basis functions that best fits the data while simultaneously conducting variable selection. In optimization, MARS has been used successfully to estimate the unknown functions in stochastic dynamic programming (SDP), stochastic programming, and Markov decision processes, and MARS could be useful in many real-world optimization problems where objective (or other) functions need to be estimated from data, such as surrogate optimization. Many optimization methods depend on convexity, but a MARS approximation can be inherently non-convex because interaction terms are products of univariate terms. In this paper a convex MARS modeling algorithm is described. To ensure MARS convexity, two major modifications are made: (1) coefficients are constrained so that pairs of basis functions are guaranteed to jointly form convex functions, and (2) the form of interaction terms is altered to eliminate the inherent non-convexity. Overall convexity then follows from the fact that a sum of convex functions is convex. Convex-MARS is applied to inventory forecasting SDP problems with four and nine dimensions and to an air-quality ground-level ozone problem.
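The convexity device in modification (1) can be seen in one dimension: a sum of hinge functions max(0, x - t) with nonnegative coefficients, plus an affine term, is convex. The CVXPY sketch below fits such a model to toy data; it illustrates the constraint idea only, not the full MARS search.

```python
# Convex fit with nonnegative hinge coefficients: each max(0, x - t)
# is convex, and a nonnegative combination of convex functions plus an
# affine term stays convex. Data and knots are assumed.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 80)
y = x ** 2 + 0.1 * rng.normal(size=80)            # convex ground truth
knots = np.linspace(-2, 2, 9)
B = np.maximum(0.0, x[:, None] - knots[None, :])  # hinge basis matrix

D = np.column_stack([np.ones_like(x), x, B])      # affine + hinge columns
c = cp.Variable(D.shape[1])
constraints = [c[2:] >= 0]                        # nonneg hinge weights
cp.Problem(cp.Minimize(cp.sum_squares(D @ c - y)), constraints).solve()
print(c.value)
```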

14.
Recently, it has been shown that the regret of the Follow the Regularized Leader (FTRL) algorithm for online linear optimization can be bounded by the total variation of the cost vectors rather than the number of rounds. In this paper, we extend this result to general online convex optimization, resolving an open problem posed in a number of recent papers. We first analyze the limitations of the FTRL algorithm as proposed by Hazan and Kale (Machine Learning 80(2–3), 165–188, 2010) when applied to online convex optimization, and extend the definition of variation to a gradual variation, which is shown to be a lower bound on the total variation. We then present two novel algorithms that bound the regret by the gradual variation of cost functions. Unlike previous approaches that maintain a single sequence of solutions, the proposed algorithms maintain two sequences of solutions, which makes it possible to achieve a variation-based regret bound for online convex optimization. To establish the main results, we discuss a lower bound for FTRL that maintains only one sequence of solutions, and a necessary smoothness condition on the cost functions for obtaining a gradual variation bound. We extend the main results three-fold: (i) we present a general method to obtain a gradual variation bound measured by a general norm; (ii) we extend the algorithms to a class of online non-smooth optimization with a gradual variation bound; and (iii) we develop a deterministic algorithm for online bandit optimization in the multi-point bandit setting.
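For reference, the FTRL baseline the paper starts from, with a quadratic regularizer on online linear losses over the unit ball, looks as follows in NumPy; losses, step size, and horizon are assumed toy choices, and this is not the paper's variation-bounded algorithm.

```python
# FTRL with quadratic regularizer on online linear losses over the
# unit ball: x_t = argmin <G, x> + ||x||^2/(2 eta), then project.
import numpy as np

rng = np.random.default_rng(4)
eta, T, d = 0.1, 100, 3
G = np.zeros(d)                              # cumulative loss gradients
for t in range(T):
    x = -eta * G                             # unconstrained FTRL minimizer
    nrm = np.linalg.norm(x)
    if nrm > 1.0:
        x /= nrm                             # project onto unit-ball domain
    g = rng.normal(size=d)                   # linear loss g_t revealed
    G += g
print(x)
```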

15.
In this paper we propose a parallel coordinate descent algorithm for solving smooth convex optimization problems with separable constraints that may arise, e.g., in distributed model predictive control (MPC) for linear network systems. Our algorithm performs block coordinate descent updates in parallel and has a very simple iteration. We prove a (sub)linear rate of convergence for the new algorithm under standard assumptions for smooth convex optimization. Further, our algorithm uses local information and is thus suitable for distributed implementations. Moreover, it has low iteration complexity, which makes it appropriate for embedded control. An MPC scheme based on this new parallel algorithm is derived, in which every subsystem in the network can compute feasible and stabilizing control inputs using distributed and cheap computations. To ensure stability of the MPC scheme, we use a terminal cost formulation derived from a distributed synthesis. Preliminary numerical tests show better performance for our optimization algorithm than for other existing methods.
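A minimal sketch of parallel block updates on a box-constrained convex quadratic is given below; all blocks take a projected gradient step from the same previous iterate, as they would in a parallel implementation. Data, blocks, and the step size 1/L are illustrative assumptions, not the paper's MPC setup.

```python
# Parallel block-coordinate updates for min 0.5 x^T Q x - b^T x over a
# box: every block steps from the old iterate simultaneously.
import numpy as np

rng = np.random.default_rng(5)
n, blocks = 6, [(0, 2), (2, 4), (4, 6)]
M = rng.normal(size=(n, n))
Q = M.T @ M + np.eye(n)                       # positive definite
b = rng.normal(size=n)
L = np.linalg.norm(Q, 2)                      # Lipschitz constant of grad

x = np.zeros(n)
for _ in range(300):
    g = Q @ x - b                             # gradient at the old iterate
    x_new = x.copy()
    for lo, hi in blocks:                     # in practice: in parallel
        x_new[lo:hi] = np.clip(x[lo:hi] - g[lo:hi] / L, -1.0, 1.0)
    x = x_new
print(x)
```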

16.
We offer an efficient approach based on difference-of-convex-functions (DC) optimization for self-organizing maps (SOM). We cast SOM as an optimization problem with a nonsmooth, nonconvex energy function and investigate DC programming and the DC algorithm (DCA), an innovative approach in the nonconvex optimization framework, to solve this problem effectively. Furthermore, an appropriate training version of the algorithm is proposed. Numerical results on many real-world datasets show the efficiency of the proposed DCA-based algorithms in both solution quality and the resulting topographic maps.
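The DCA iteration itself is compact: split f = g - h with g and h convex, then repeatedly minimize g minus the linearization of h. The sketch below applies it to the one-dimensional toy f(x) = x^4 - 2x^2, with an assumed DC split and a bisection solve of the convex subproblem; it illustrates DCA generally, not the SOM energy of the paper.

```python
# DCA on f(x) = x^4 - 2x^2 with the DC split g = x^4 + x^2, h = 3x^2
# (one valid split among many): x_{k+1} = argmin g(x) - h'(x_k) x.
import numpy as np

def argmin_g_minus_lin(c, lo=-5.0, hi=5.0):
    # solve g'(x) = 4x^3 + 2x = c by bisection (g' is increasing)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if 4 * mid**3 + 2 * mid < c else (lo, mid)
    return 0.5 * (lo + hi)

xk = 2.0
for _ in range(50):
    c = 6.0 * xk                  # h'(xk) with h(x) = 3 x^2
    xk = argmin_g_minus_lin(c)
print(xk)                         # converges to the stationary point x = 1
```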

17.
In this paper, various methods based on convex approximation schemes that have demonstrated strong potential for the efficient solution of structural optimization problems are discussed. First, the convex linearization method (CONLIN) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both CONLIN and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semi-empirical rules. Attention is then directed toward methods that use diagonal second derivatives to provide a sound basis for building high-quality explicit approximations of the behaviour constraints. In particular, it is shown how second-order information can be used effectively without incurring a prohibitive computational cost. Various first- and second-order approaches are compared by applying them to simple problems that have a closed-form solution.
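The CONLIN approximation can be stated in a few lines: at the expansion point, variables with nonnegative partial derivatives enter linearly and the others enter through reciprocals 1/x_i (for x > 0), making the approximation convex. The NumPy sketch below uses an assumed test function.

```python
# CONLIN-style convex approximation at x0: direct linearization for
# positive partials, reciprocal linearization for negative partials.
import numpy as np

def conlin_approx(f, grad, x0):
    g0, f0 = grad(x0), f(x0)
    def approx(x):
        val = f0
        for i, gi in enumerate(g0):
            if gi >= 0:
                val += gi * (x[i] - x0[i])                         # direct
            else:
                val += gi * x0[i] ** 2 * (1 / x0[i] - 1 / x[i])    # reciprocal
        return val
    return approx

f = lambda x: x[0] ** 2 + 1.0 / x[1]
grad = lambda x: np.array([2 * x[0], -1.0 / x[1] ** 2])
x0 = np.array([1.0, 1.0])
fa = conlin_approx(f, grad, x0)
print(fa(x0), f(x0))              # approximation matches f at x0
```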

18.
Determining the minimum distance between convex objects is a problem that has been solved using many different approaches, but computing the minimum distance between combinations of convex and concave objects is known to be more complicated. Most methods propose partitioning the concave object into convex subobjects and then solving the convex problem between all possible subobject combinations, which can add a large computational expense to the solution of the minimum distance problem. In this paper, an optimization-based approach is used to solve the concave problem without partitioning concave objects into convex pieces. Since the optimization problem is no longer unimodal (i.e., it has more than one local minimum), global optimization techniques are used: Simulated Annealing (SA) and Genetic Algorithms (GAs) are applied to the concave minimum-distance problem. To reduce the computational expense, we propose replacing each object's geometry by a set of points on its surface. This reduces the problem to an unconstrained combinatorial optimization problem, where the combination of points (one on the surface of each body) that minimizes the distance is the solution. Additionally, if the surface points are set as the nodes of a surface mesh, it is possible to accelerate the convergence of the global optimization algorithm by using a hill-climbing local optimization algorithm. Examples using these novel approaches are presented.
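The surface-point reduction plus simulated annealing can be sketched compactly; the example below samples two circles, so the true minimum distance (1.5) is known. The move size and cooling schedule are assumed, not the paper's settings.

```python
# Minimum distance between two bodies, each replaced by surface sample
# points; simulated annealing searches over index pairs.
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
A = np.c_[np.cos(t), np.sin(t)]                  # circle, radius 1 at origin
B = np.c_[3 + 0.5 * np.cos(t), 0.5 * np.sin(t)]  # circle, radius 0.5 at (3,0)

def dist(i, j):
    return np.linalg.norm(A[i] - B[j])

i, j = 0, 0
cur = dist(i, j)
best, temp = cur, 1.0
for _ in range(2000):
    i2 = (i + rng.integers(-5, 6)) % 100         # local move on each surface
    j2 = (j + rng.integers(-5, 6)) % 100
    d = dist(i2, j2)
    if d < cur or rng.random() < np.exp((cur - d) / temp):
        i, j, cur = i2, j2, d
        best = min(best, cur)
    temp *= 0.995                                # cooling schedule
print(best)                                      # true minimum distance: 1.5
```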

19.
In geometric modeling, surface parameterization plays an important role in converting triangle meshes to spline surfaces. Parameterization inevitably introduces distortion. Conventional parameterization methods emphasize angle preservation, which can induce large area distortions, cause large spline-fitting errors, and trigger numerical instabilities. To overcome this difficulty, this work proposes a novel area-preserving parameterization method based on optimal mass transport theory and convex geometry. An optimal mass transport map is measure-preserving and minimizes the transportation cost. According to Brenier's theorem, for quadratic transportation costs the optimal mass transport map is the gradient of a convex function. The graph of this convex function is a convex polyhedron with prescribed normals and areas, whose existence and uniqueness are guaranteed by the Minkowski-Alexandrov theorem in convex geometry. This work gives an explicit method to construct such a polyhedron based on the variational principle, and formulates the solution of the optimal transport map as the unique optimum of a convex energy. In practice, the energy optimization can be carried out using Newton's method, with each iteration constructing a power Voronoi diagram dynamically. We tested the proposed algorithms on 3D surfaces scanned from real life. Experimental results demonstrate the efficiency and efficacy of the proposed variational approach for computing optimal transport maps.
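The paper's semi-discrete solver (Newton iterations over power diagrams) does not fit in a short sketch, but the transport objective it optimizes can be illustrated on a fully discrete toy instance as a linear program with fixed marginals, using SciPy:

```python
# Discrete optimal transport as an LP: minimize sum_ij c_ij p_ij with
# prescribed row/column marginals. Masses and locations are assumed.
import numpy as np
from scipy.optimize import linprog

src = np.array([0.5, 0.5])                    # source masses
dst = np.array([0.25, 0.75])                  # target masses
xs, ys = np.array([0.0, 1.0]), np.array([0.0, 2.0])
C = (xs[:, None] - ys[None, :]) ** 2          # quadratic cost c_ij

m, n = C.shape
A_eq, b_eq = [], []
for i in range(m):                            # row sums equal src
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
    A_eq.append(row); b_eq.append(src[i])
for j in range(n):                            # column sums equal dst
    col = np.zeros(m * n); col[j::n] = 1
    A_eq.append(col); b_eq.append(dst[j])

res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq))
print(res.x.reshape(m, n))                    # optimal transport plan
```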

20.
This paper presents the FOM MATLAB toolbox for solving convex optimization problems using first-order methods. The diverse features of the eight solvers included in the package are illustrated through a collection of examples of different nature.
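FOM itself is MATLAB; purely as an illustration of the kind of first-order method such toolboxes provide, here is a minimal NumPy sketch of FISTA for an l1-regularized least-squares problem with assumed data.

```python
# FISTA for min 0.5||Ax - b||^2 + lam*||x||_1: proximal gradient step
# plus Nesterov momentum. Problem data are assumed.
import numpy as np

rng = np.random.default_rng(7)
m, n, lam = 40, 100, 0.5
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = y = np.zeros(n)
tk = 1.0
for _ in range(300):
    x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)   # proximal step
    t_new = 0.5 * (1 + np.sqrt(1 + 4 * tk ** 2))
    y = x_new + ((tk - 1) / t_new) * (x_new - x)       # momentum step
    x, tk = x_new, t_new
print(np.count_nonzero(np.abs(x) > 1e-6))              # sparsity of solution
```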
