Similar Literature
20 similar documents were retrieved (search time: 15 ms).
1.
2.
3.
A nonlinear convex programming problem is solved by interval-arithmetic methods that take both input errors and round-off errors into account. The problem is reduced to the solution of a nonlinear, parameter-dependent system of equations. Moreover, error estimates are developed for special problems with uniformly convex cost functions.
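The entry above describes the approach only in general terms. As a minimal illustration of the underlying idea (not the paper's algorithm), the sketch below applies an interval Newton step to the optimality condition f'(x) = 0 of a one-dimensional uniformly convex objective; the Interval class, the example function and the tolerance are illustrative assumptions, and outward rounding (needed for rigorous round-off control) is omitted.

```python
# Minimal interval-Newton sketch for minimizing a uniformly convex 1-D function f
# by enclosing the root of f'.  Rounding control is omitted; a rigorous
# implementation would round interval endpoints outward.
import math

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __truediv__(self, other):                    # assumes 0 is not in `other`
        cands = [self.lo / other.lo, self.lo / other.hi,
                 self.hi / other.lo, self.hi / other.hi]
        return Interval(min(cands), max(cands))

    def intersect(self, other):
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

    @property
    def width(self):
        return self.hi - self.lo


def interval_newton(df, ddf_enclosure, x, tol=1e-12):
    """Contract the interval x around the root of df (the minimizer of f)."""
    while x.width > tol:
        m = 0.5 * (x.lo + x.hi)                                   # midpoint
        newton = Interval(m, m) - Interval(df(m), df(m)) / ddf_enclosure(x)
        x = x.intersect(newton)                                   # x := x ∩ N(x)
    return x


# Example: f(x) = (x - 3)^2 + exp(x) is uniformly convex on [0, 4].
df = lambda x: 2.0 * (x - 3.0) + math.exp(x)                      # f'
ddf = lambda X: Interval(2.0 + math.exp(X.lo), 2.0 + math.exp(X.hi))  # enclosure of f'' on X

root = interval_newton(df, ddf, Interval(0.0, 4.0))
print(root.lo, root.hi)                           # tight enclosure of the minimizer
```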

4.
5.
6.
7.
8.
9.
10.
In this paper we propose a long-step logarithmic barrier function method for convex quadratic programming with linear equality constraints. After each reduction of the barrier parameter, a series of long steps along projected Newton directions is taken until the iterate is in the vicinity of the center associated with the current value of the barrier parameter. We prove that the total number of iterations is O(√n L) or O(nL), depending on how the barrier parameter is updated.
On leave from Eötvös University, Budapest, and partially supported by OTKA 2116.
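As a rough illustration of this class of methods (a sketch under stated assumptions, not the algorithm analyzed in the paper), the code below applies a logarithmic barrier to a small convex QP with equality constraints and nonnegative variables: Newton directions for the barrier subproblem are obtained from a KKT system so that the equality constraints are preserved, damped "long" steps are taken until the iterate is close to the central point, and the barrier parameter is then cut by a fixed factor. The problem data, step rule and tolerances are illustrative choices.

```python
# Sketch of a long-step logarithmic barrier method for
#     min 0.5 x'Qx + c'x   s.t.   A x = b,  x >= 0,
# with Q positive semidefinite.  Illustrative toy data and parameters only.
import numpy as np

def barrier_qp(Q, c, A, b, x0, mu=1.0, sigma=0.1, mu_tol=1e-8, inner_tol=1e-6):
    x = x0.copy()                                   # must satisfy A x = b, x > 0
    m, n = A.shape
    while mu > mu_tol:
        phi = lambda z: 0.5 * z @ Q @ z + c @ z - mu * np.sum(np.log(z))
        while True:                                 # center for the current mu
            g = Q @ x + c - mu / x                  # gradient of the barrier function
            H = Q + np.diag(mu / x**2)              # Hessian of the barrier function
            K = np.block([[H, A.T], [A, np.zeros((m, m))]])
            dx = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
            if np.linalg.norm(dx) < inner_tol:      # close enough to the center
                break
            neg = dx < 0                            # largest step keeping x > 0
            alpha = 1.0 if not neg.any() else min(1.0, 0.95 * np.min(-x[neg] / dx[neg]))
            while alpha > 1e-12 and phi(x + alpha * dx) > phi(x):
                alpha *= 0.5                        # crude backtracking line search
            x = x + alpha * dx
        mu *= sigma                                 # reduce the barrier parameter
    return x

# Toy instance:  min 0.5||x||^2 - x1   s.t.  x1 + x2 + x3 = 1,  x >= 0
Q = np.eye(3); c = np.array([-1.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]]); b = np.array([1.0])
print(barrier_qp(Q, c, A, b, x0=np.array([1/3, 1/3, 1/3])))   # ≈ (1, 0, 0)
```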

11.
12.
13.
The quadratic knapsack problem (QKP) plays a central role in integer and combinatorial optimization, yet efficient algorithms for general QKPs are currently very limited. We present an approximate dynamic programming (ADP) approach for solving convex QKPs in which variables may take any integer value and all coefficients are real numbers. We approximate the value function using (a) the continuous quadratic programming relaxation (CQPR) and (b) the integral parts of the solutions to the CQPR. We propose a new heuristic that adaptively fixes the variables according to the solution of the CQPR. We report computational results for QKPs with up to 200 integer variables. Our numerical results illustrate that the new heuristic produces high-quality solutions to large-scale QKPs quickly and robustly.
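The abstract describes the fixing heuristic only at a high level, so the sketch below shows one plausible reading under explicit assumptions: the problem is min 0.5 x'Qx + c'x over a single knapsack constraint w'x <= C with bounded nonnegative integer variables, the continuous relaxation (CQPR) is solved with SciPy's SLSQP, and each round fixes the free variable whose relaxed value is closest to an integer. The problem form and the fixing rule are illustrative guesses, not the rule from the paper.

```python
# Sketch of a "solve the relaxation, then fix variables" heuristic for a convex
# quadratic knapsack problem:
#     min 0.5 x'Qx + c'x   s.t.   w'x <= C,   0 <= x <= u,   x integer.
# Each round solves the continuous relaxation (CQPR) with already-fixed variables
# pinned, then fixes the free variable whose relaxed value is closest to an integer.
import numpy as np
from scipy.optimize import minimize

def solve_cqpr(Q, c, w, C, u, fixed):
    n = len(c)
    bounds = [(fixed[i], fixed[i]) if i in fixed else (0.0, u[i]) for i in range(n)]
    cons = [{'type': 'ineq', 'fun': lambda x: C - w @ x}]        # w'x <= C
    x0 = np.array([b[0] for b in bounds], dtype=float)           # feasible start
    res = minimize(lambda x: 0.5 * x @ Q @ x + c @ x, x0,
                   jac=lambda x: Q @ x + c,
                   bounds=bounds, constraints=cons, method='SLSQP')
    return res.x

def adaptive_fixing(Q, c, w, C, u):
    n, fixed = len(c), {}
    while len(fixed) < n:
        x = solve_cqpr(Q, c, w, C, u, fixed)
        free = [i for i in range(n) if i not in fixed]
        i = min(free, key=lambda j: abs(x[j] - np.rint(x[j])))    # most nearly integral
        xi = int(np.clip(np.rint(x[i]), 0, u[i]))
        cap_left = C - sum(w[j] * v for j, v in fixed.items())    # assumes w >= 0
        if w[i] > 0:
            xi = min(xi, int(cap_left // w[i]))                   # keep knapsack feasible
        fixed[i] = max(xi, 0)
    return np.array([fixed[i] for i in range(n)])

# Tiny example: 5 variables in {0,...,4}, unit weights, capacity 6.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5)); Q = M @ M.T + np.eye(5)              # convex (PD) Hessian
c = 5.0 * rng.normal(size=5)
x = adaptive_fixing(Q, c, np.ones(5), 6.0, np.full(5, 4.0))
print(x, 0.5 * x @ Q @ x + c @ x)
```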

14.
Feng Jiqiang, Qin Sitian, Shi Fengli, Zhao Xiaoyue. Neural Computing & Applications, 2018, 30(11): 3399–3408.

In this paper, a recurrent neural network with a new tunable activation function is proposed to solve a class of convex quadratic bilevel programming problems. It is proved that the equilibrium point of the proposed neural network is stable in the sense of Lyapunov and that the state of the network converges to an equilibrium point in finite time. In contrast to existing related neurodynamic approaches, the proposed neural network is capable of solving the convex quadratic bilevel programming problem in finite time, and the finite convergence time can be estimated quantitatively. Finally, two numerical examples are presented to show the effectiveness of the proposed recurrent neural network.

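The dynamics of the network above are not reproduced here; instead, the sketch below illustrates the general neurodynamic idea on a single-level box-constrained convex QP using a standard projection neural network, dx/dt = -x + P(x - a(Qx + c)), integrated by forward Euler. The bilevel structure, the tunable activation function and the finite-time analysis of the paper are not modeled; the dynamics, step sizes and test problem are illustrative assumptions.

```python
# Sketch of a projection neural network for  min 0.5 x'Qx + c'x  s.t.  l <= x <= u.
# State dynamics (a standard neurodynamic model, not the model from the paper):
#     dx/dt = -x + P_box(x - a * (Q x + c)),
# whose equilibrium points are exactly the constrained minimizers.
import numpy as np

def project_box(z, lo, hi):
    return np.minimum(np.maximum(z, lo), hi)

def projection_network(Q, c, lo, hi, a=0.1, dt=0.05, steps=5000):
    x = project_box(np.zeros_like(c), lo, hi)          # initial state
    for _ in range(steps):
        dx = -x + project_box(x - a * (Q @ x + c), lo, hi)
        x = x + dt * dx                                 # forward Euler integration
        if np.linalg.norm(dx) < 1e-10:                  # reached an equilibrium point
            break
    return x

# Toy problem: project t = (2, 0.4, -1) onto the box [0, 1]^3,
# i.e. minimize 0.5||x - t||^2, so Q = I and c = -t.
Q = np.eye(3)
c = -np.array([2.0, 0.4, -1.0])
print(projection_network(Q, c, np.zeros(3), np.ones(3)))   # ≈ [1.0, 0.4, 0.0]
```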

15.
A method for the simultaneous solution of the primal and dual problems is described. The component functions are assumed to be nondifferentiable, and the function being minimized is not necessarily defined outside the feasible region.
Translated from Kibernetika, No. 2, pp. 91–97, March–April, 1990.
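No algorithmic details are given in the abstract, so the code below is only a generic sketch in the same spirit: a primal-dual (saddle-point) subgradient scheme that updates the primal variables and the Lagrange multiplier simultaneously for a convex program with nondifferentiable pieces. The update rule, step sizes and example problem are assumptions, not the method of the paper.

```python
# Generic primal-dual subgradient sketch for   min f(x)  s.t.  g(x) <= 0,
# with f and g convex but possibly nondifferentiable.  Primal and dual variables
# are updated simultaneously from the Lagrangian L(x, lam) = f(x) + lam * g(x);
# the running average of the primal iterates is returned.
import numpy as np

def primal_dual_subgradient(sub_f, g, sub_g, x0, iters=20000, a=0.5):
    x, lam = x0.astype(float), 0.0
    x_avg = np.zeros_like(x)
    for k in range(iters):
        t = a / np.sqrt(k + 1)                          # diminishing step size
        gx = g(x)
        x = x - t * (sub_f(x) + lam * sub_g(x))          # primal subgradient step
        lam = max(0.0, lam + t * gx)                     # dual ascent step, kept >= 0
        x_avg += (x - x_avg) / (k + 1)                   # running (ergodic) average
    return x_avg, lam

# Example:  min |x1| + 2|x2|   s.t.  x1 + x2 >= 1   (optimum x* = (1, 0), value 1).
sub_f = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])   # a subgradient of f
g     = lambda x: 1.0 - x[0] - x[1]                                # constraint g(x) <= 0
sub_g = lambda x: np.array([-1.0, -1.0])

x_bar, lam = primal_dual_subgradient(sub_f, g, sub_g, np.zeros(2))
print(x_bar, lam)    # x_bar ≈ (1, 0), lam approximates the optimal multiplier (≈ 1)
```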

16.
An implementable method is proposed for the simultaneous solution of the primal and dual problems of convex programming.
Translated from Kibernetika, No. 2, pp. 54–64, March–April, 1989.

17.
Conclusion. Let us return to the claim made at the beginning: given the current level of computing technology, computational mathematics must not ignore new opportunities for obtaining results that were impossible until very recently. In our view, the proposed mixed method is consistent with this technological progress: all of the problems known to us were solved in acceptable time, and in no case did the method fail to produce a solution. Although claims of this kind can always be disputed, our computations nevertheless convince us that the algorithms proposed in this article can be used to solve a fairly broad class of problems, in particular semidefinite programming problems [7, 17].
Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 121–134, July–August, 1998.

18.
19.
20.
In this paper, a feedback neural network model is proposed for solving a class of convex quadratic bilevel programming problems, based on the idea of successive approximation. Unlike existing neural network models, the proposed network has the smallest number of state variables and a simple structure. Using Lyapunov theory, we prove that, under certain conditions, the sequence of equilibrium points of the feedback neural network converges approximately to an optimal solution of the convex quadratic bilevel problem, and the corresponding sequence of function values converges approximately to the optimal value. Simulation experiments on three numerical examples and a portfolio selection problem are provided to show the efficiency and performance of the proposed neural network approach.
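The feedback network itself is not specified in the abstract, so the sketch below only illustrates the successive-approximation idea on a toy convex quadratic bilevel problem: the (unconstrained) lower-level QP is solved exactly at the current upper-level point, and the upper-level variable is then updated by a gradient step on the resulting reduced objective. The problem data, the gradient-type outer update and the stopping rule are illustrative assumptions, not the paper's model.

```python
# Successive-approximation sketch for a toy convex quadratic bilevel problem:
#   upper level:  min_x  F(x, y*(x)) = 0.5||x||^2 + 0.5||y*(x) - d||^2
#   lower level:  y*(x) = argmin_y  0.5 y'Qy - (Bx)'y        (Q positive definite)
# Each iteration solves the lower level exactly (y = Q^{-1} B x) and then takes
# one gradient step on the reduced single-level objective in x.
import numpy as np

def bilevel_successive_approx(Q, B, d, x0, eta=0.1, iters=2000, tol=1e-10):
    S = np.linalg.solve(Q, B)            # lower-level solution map: y*(x) = S x
    x = x0.copy()
    for _ in range(iters):
        y = S @ x                        # solve the lower-level QP at the current x
        grad = x + S.T @ (y - d)         # chain-rule gradient of F(x, y*(x))
        x_new = x - eta * grad           # successive-approximation (gradient) step
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x, S @ x

# Toy data
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.eye(2)
d = np.array([1.0, -1.0])
x_opt, y_opt = bilevel_successive_approx(Q, B, d, np.zeros(2))
print(x_opt, y_opt)
```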
