Similar Documents
20 similar documents found (search time: 15 ms)
1.
A new numerical technique is presented for solving nonlinear ordinary boundary value problems. Theoretical and numerical results are presented. A comparison with Syam's results [1] is given. These results indicate that our technique works more effectively and efficiently than his technique.

2.
3.
In this paper, a novel neural-network-based iterative adaptive dynamic programming (ADP) algorithm is proposed. It aims at solving the optimal control problem of a class of nonlinear discrete-time systems with control constraints. By introducing a generalized nonquadratic functional, the iterative ADP algorithm through the globalized dual heuristic programming technique is developed to design an optimal controller with convergence analysis. Three neural networks are constructed as parametric structures to facilitate the implementation of the iterative algorithm. They are used for approximating, at each iteration, the cost function, the optimal control law, and the controlled nonlinear discrete-time system, respectively. A simulation example is also provided to verify the effectiveness of the control scheme in solving the constrained optimal control problem.
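The three networks are only described abstractly above, so the sketch below is a heavily simplified stand-in: a value-iteration-style ADP loop on a scalar discrete-time system with a bounded control, where plain polynomial least-squares fits play the role of the cost and control-law approximators. The system, cost, grids, and polynomial degrees are illustrative assumptions, not the paper's GDHP algorithm or its nonquadratic functional.

```python
# Hedged ADP sketch: iterate V_{i+1}(x) = min_u [U(x,u) + V_i(f(x,u))] with |u| <= u_max,
# approximating the cost function and the control law by polynomial fits.
import numpy as np

f = lambda x, u: 0.8 * x + 0.5 * u                  # assumed system model x_{k+1} = f(x_k, u_k)
U = lambda x, u: x ** 2 + u ** 2                    # assumed stage cost
u_max = 0.5                                         # control constraint |u| <= u_max

xs = np.linspace(-2.0, 2.0, 41)                     # state grid used for fitting
us = np.linspace(-u_max, u_max, 21)                 # admissible (constrained) controls
v_coeff = np.zeros(5)                               # "critic": polynomial cost approximation

for _ in range(60):                                 # iterative ADP / value iteration
    targets, best_u = [], []
    for x in xs:
        q = [U(x, u) + np.polyval(v_coeff, f(x, u)) for u in us]
        targets.append(min(q))
        best_u.append(us[int(np.argmin(q))])
    v_coeff = np.polyfit(xs, targets, 4)            # refit the cost approximation

a_coeff = np.polyfit(xs, best_u, 3)                 # "action" approximator: fitted control law
print("approximate optimal control at x = 1.5:", np.polyval(a_coeff, 1.5))
```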

4.
A numerical technique for solving nonlinear optimal control problems is introduced. The state and control variables are expanded in Chebyshev series, and an algorithm is provided for approximating the system dynamics, boundary conditions, and performance index. Application of this method results in the transformation of differential and integral expressions into systems of algebraic or transcendental expressions in the Chebyshev coefficients. The optimum condition is obtained by applying the method of constrained extremum. For linear-quadratic optimal control problems, the state and control variables are determined by solving a set of linear equations in the Chebyshev coefficients. Applicability is illustrated with the minimum-time and maximum-radius orbit transfer problems.
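As a rough illustration of how such a Chebyshev expansion turns an optimal control problem into algebraic conditions, the sketch below collocates a tiny linear-quadratic problem at Chebyshev-Gauss-Lobatto nodes. The scalar dynamics, the trapezoidal quadrature (in place of Clenshaw-Curtis), and the use of a generic NLP solver are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: x' = a*x + b*u on [-1, 1], minimize the integral of x^2 + u^2,
# discretized at Chebyshev-Gauss-Lobatto points with the standard Chebyshev
# differentiation matrix (Trefethen, "Spectral Methods in MATLAB").
import numpy as np
from scipy.optimize import minimize

N = 16                                   # polynomial degree
j = np.arange(N + 1)
t = np.cos(np.pi * j / N)                # CGL nodes, ordered from t = 1 down to t = -1

c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** j
T = np.tile(t, (N + 1, 1)).T
dT = T - T.T + np.eye(N + 1)
D = np.outer(c, 1.0 / c) / dT            # Chebyshev differentiation matrix
D -= np.diag(D.sum(axis=1))

a, b, x0 = -1.0, 1.0, 2.0                # assumed dynamics and initial state x(-1) = x0

def unpack(z):
    return z[: N + 1], z[N + 1 :]

def cost(z):                             # J ~ integral of x^2 + u^2 (trapezoid quadrature)
    x, u = unpack(z)
    return np.trapz((x ** 2 + u ** 2)[::-1], t[::-1])

def dynamics_residual(z):                # collocate x' = a x + b u at every node
    x, u = unpack(z)
    return D @ x - a * x - b * u

def initial_condition(z):
    x, _ = unpack(z)
    return x[-1] - x0                    # node t[N] = -1 is the initial time

res = minimize(cost, np.zeros(2 * (N + 1)), method="SLSQP",
               constraints=[{"type": "eq", "fun": dynamics_residual},
                            {"type": "eq", "fun": initial_condition}])
x_opt, u_opt = unpack(res.x)
print("approximate optimal cost:", res.fun)
```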

5.
In this paper, we propose a parallel iterative method for calculating the extreme eigenpair (the largest or smallest eigenvalue and corresponding eigenvector) of a large symmetric tridiagonal matrix. It is based upon a divide and repeated rank-one modification technique. The rank-one modification with a parameter changes only one diagonal element of each submatrix. We present a basic theory for subdividing the extremal eigenpair problem and then prove several convergence theorems that show the convergence of the iteration scheme for any positive initial modification parameter and its asymptotically quadratic convergence rate. Some numerical experiments are given, which show the efficiency of the parallel algorithm.
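The snippet below is a small numerical check of the splitting idea described above: removing one off-diagonal entry of a symmetric tridiagonal matrix is a rank-one modification, parameterized by a positive scalar, that touches only one diagonal element of each submatrix. The matrix sizes and the parameter value are illustrative; the paper's actual iteration for the extreme eigenpair is not reproduced.

```python
# Hedged sketch: verify T = blockdiag(T1, T2) + beta * v v^T with the theta-parameterized
# modification, and compare extreme eigenvalues of T and its submatrices.
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 4                                # split after row k
d = rng.normal(size=n)                     # diagonal entries
e = rng.uniform(0.5, 1.5, size=n - 1)      # positive off-diagonal entries (illustrative)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

beta = e[k - 1]                            # coupling entry to remove
theta = 0.7                                # any positive modification parameter

T1 = T[:k, :k].copy()
T2 = T[k:, k:].copy()
T1[-1, -1] -= theta * beta                 # one diagonal element of T1 changes
T2[0, 0] -= beta / theta                   # one diagonal element of T2 changes

v = np.zeros(n)
v[k - 1] = np.sqrt(theta)
v[k] = 1.0 / np.sqrt(theta)

T_rebuilt = np.block([[T1, np.zeros((k, n - k))],
                      [np.zeros((n - k, k)), T2]]) + beta * np.outer(v, v)
print("splitting exact:", np.allclose(T, T_rebuilt))

# With beta > 0 the rank-one term is positive semidefinite, so the submatrix
# maxima give a lower bound on lambda_max(T).
print("lambda_max(T)       =", np.linalg.eigvalsh(T)[-1])
print("max over T1, T2     =", max(np.linalg.eigvalsh(T1)[-1], np.linalg.eigvalsh(T2)[-1]))
```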

6.
The iterative multistep (IMS) method introduced by Hyman (1978) for solving initial value problems in ordinary differential equations has the advantage of being able to offer a higher degree of accuracy than the Runge-Kutta formulas by continuing the iteration process. In this article, another IMS formula is developed based on the geometric mean predictor-corrector formulas introduced by Sanugi and Evans (1989). A numerical example is provided that shows that this formula can be used as a competitive alternative to Hyman's IMS formula.
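As a hedged sketch of the geometric-mean idea referred to above, the code below applies an Euler predictor followed by a repeatedly applied geometric-mean trapezoidal corrector, y_{n+1} = y_n + h*sqrt(f(t_n, y_n) * f(t_{n+1}, y_{n+1})). This is only a Sanugi-Evans-style GM corrector iterated to convergence, not Hyman's IMS formula nor the exact formula developed in the article; the helper name gm_ims_step and the step counts are illustrative.

```python
# GM trapezoidal corrector iterated a few times per step; assumes the two
# f-values have the same sign (the sign is propagated via copysign).
import math

def gm_ims_step(f, t, y, h, sweeps=4):
    fn = f(t, y)
    y_next = y + h * fn                       # Euler predictor
    for _ in range(sweeps):                   # iterate the GM corrector
        y_next = y + h * math.copysign(
            math.sqrt(abs(fn * f(t + h, y_next))), fn)
    return y_next

# Test problem y' = y, y(0) = 1, exact solution e^t
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = gm_ims_step(f, t, y, h)
    t += h
print(y, math.exp(1.0), abs(y - math.exp(1.0)))
```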

7.
A doubly iterative procedure for computing optimal controls in linear systems with convex cost functionals is presented. The procedure is based on an algorithm due to Gilbert [3] for minimizing a quadratic form on a convex set. Each step of the procedure makes use of an algorithm due to Neustadt and Paiewonsky [1] to solve a strictly linear optimal control problem.

8.
This paper presents a numerical method for solving a class of fractional optimal control problems (FOCPs). The fractional derivative in these problems is in the Caputo sense. The method is based upon the Legendre orthonormal polynomial basis. The operational matrices of fractional Riemann-Liouville integration and multiplication, along with the Lagrange multiplier method for the constrained extremum, are considered. By this method, the given optimization problem reduces to the problem of solving a system of algebraic equations. By solving this system, we achieve the solution of the FOCP. Illustrative examples are included to demonstrate the validity and applicability of the new technique.
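To illustrate what an "operational matrix" is, the sketch below builds the classical integer-order operational matrix of integration for the Legendre basis on [-1, 1] and checks it on exp(t). This is a deliberately simplified stand-in: the fractional Riemann-Liouville operational matrix and the shifted basis used in the paper are not reproduced, and the degree N is an illustrative choice.

```python
# Hedged sketch: column i of P holds the Legendre coefficients of the integral
# of the i-th Legendre polynomial from -1 to t.
import numpy as np
from numpy.polynomial import legendre as L

N = 10                                        # highest Legendre degree used
P = np.zeros((N + 2, N + 1))
for i in range(N + 1):
    ei = np.zeros(N + 1)
    ei[i] = 1.0
    ci = L.legint(ei, lbnd=-1)                # integral of L_i, again in Legendre form
    P[: len(ci), i] = ci

# Check: integrate exp(t) through the matrix and compare with exp(t) - exp(-1)
t = np.linspace(-1, 1, 200)
a = L.legfit(t, np.exp(t), N)                 # Legendre coefficients of exp(t)
approx = L.legval(t, P @ a)
print("max error:", np.max(np.abs(approx - (np.exp(t) - np.exp(-1)))))
```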

9.
10.
In this work a complete framework is presented for solving nonlinear constrained optimization problems, based on the line-up differential evolution (LUDE) algorithm, which is proposed for solving unconstrained problems. Linear and/or nonlinear constraints are handled by embodying them in an augmented Lagrangian function, where the penalty parameters and multipliers are adapted as the execution of the algorithm proceeds. The LUDE algorithm maintains a population of solutions, which is continuously improved from generation to generation. In each generation the solutions are lined up according to their objective function values. The positions in the line are very important, since they determine to what extent the crossover and mutation operators are applied to each particular solution. The efficiency of the proposed methodology is illustrated by solving numerous unconstrained and constrained optimization problems and comparing it with other optimization techniques that can be found in the literature.
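The sketch below conveys the line-up idea in a loose form: a basic differential evolution loop in which the population is sorted ("lined up") every generation and each individual's mutation and crossover strength depends on its position in the line. Constraints are folded in with a fixed quadratic penalty rather than the paper's adaptive augmented Lagrangian, and all constants, names, and the test problem are illustrative, not the published LUDE settings.

```python
# Hedged line-up DE sketch: worse-ranked individuals receive stronger mutation/crossover.
import numpy as np

def penalized(f, g_list, x, rho=1e3):
    # inequality constraints g(x) <= 0 folded into one unconstrained objective
    return f(x) + rho * sum(max(0.0, g(x)) ** 2 for g in g_list)

def lineup_de(f, g_list, bounds, pop_size=30, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([penalized(f, g_list, x) for x in pop])
    for _ in range(generations):
        order = np.argsort(fit)                      # line up: best first
        pop, fit = pop[order], fit[order]
        for i in range(pop_size):
            rank = i / (pop_size - 1)                # 0 = best, 1 = worst
            F, CR = 0.4 + 0.5 * rank, 0.3 + 0.6 * rank
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = penalized(f, g_list, trial)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Example: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2
x_best, f_best = lineup_de(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2,
                           [lambda x: x[0] + x[1] - 2],
                           bounds=[(-5, 5), (-5, 5)])
print(x_best, f_best)
```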

11.
Based on a new idea of successive approximation, this paper proposes a high-performance feedback neural network model for solving convex nonlinear programming problems. Differing from existing neural network optimization models, no dual variables, penalty parameters, or Lagrange multipliers are involved in the proposed network. It has the least number of state variables and is very simple in structure. In particular, the proposed network has better asymptotic stability. For an arbitrarily given initial point, the trajectory of the network converges to an optimal solution of the convex nonlinear programming problem under no more than the standard assumptions. In addition, the network can also solve linear programming and convex quadratic programming problems, and the new idea of a feedback network may be used to solve other optimization problems. Feasibility and efficiency are also substantiated by simulation examples.
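Since the network above is described only abstractly, the sketch below uses a generic projection-type gradient flow, dx/dt = P_Omega(x - grad f(x)) - x, integrated with forward Euler, as a stand-in "feedback network" for a box-constrained convex program. It also uses only the primal state variables (no duals or penalty parameters), but it is not claimed to be the model proposed in the paper; the objective, box, and step size are assumptions.

```python
# Hedged sketch: projection gradient-flow dynamics converging to the constrained minimizer.
import numpy as np

def grad_f(x):                      # f(x) = (x0 - 3)^2 + (x1 + 1)^2, convex
    return np.array([2 * (x[0] - 3.0), 2 * (x[1] + 1.0)])

lo, hi = np.array([0.0, 0.0]), np.array([2.0, 2.0])   # feasible box Omega
project = lambda z: np.clip(z, lo, hi)

x = np.array([5.0, -4.0])           # arbitrary (even infeasible) initial point
dt = 0.05
for _ in range(2000):               # Euler integration of the network dynamics
    x = x + dt * (project(x - grad_f(x)) - x)
print(x)                            # approaches the constrained minimizer (2, 0)
```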

12.
In this paper, we present a new family of two-step iterative methods for solving nonlinear equations. The order of convergence of the new family without memory is four, requiring three functional evaluations, which implies that this family is optimal according to Kung and Traub's conjecture (Kung and Traub, J Appl Comput Math 21:643–651, 1974). Further accelerations of convergence speed are obtained by varying a free parameter per full iteration. This self-accelerating parameter is calculated using information available from the current and previous iterations. The corresponding R-order of convergence is increased from 4 to $\frac{5+\sqrt{17}}{2}\approx 4.5616$, $\frac{5+\sqrt{21}}{2}\approx 4.7913$, and 5. The increase of convergence order is attained without any additional calculations, so the family of methods with memory possesses a very high computational efficiency. Another advantage of the new methods is that they remove the severe condition $f^{\prime}(x)\neq 0$ in a neighborhood of the required root imposed on Newton's method. Numerical comparisons on illustrative examples are made to show the performance of our methods.
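For context, the sketch below implements one classical member of the optimal two-step, fourth-order class discussed above: Ostrowski's method, which uses three function evaluations per step (f(x), f'(x), f(y)). It is not the new family with memory proposed in the paper, and the test equation and tolerances are illustrative.

```python
# Hedged sketch of Ostrowski's two-step fourth-order method.
def ostrowski(f, df, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                                  # Newton predictor
        fy = f(y)                                         # third (and last) evaluation
        x_new = y - fy * fx / (dfx * (fx - 2.0 * fy))     # Ostrowski corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^3 - 2x - 5, starting from x0 = 2
root = ostrowski(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root, root**3 - 2*root - 5)
```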

13.
A new multipoint iterative method of sixth-order convergence is put forward for approximating solutions of nonlinear systems of equations. It requires the evaluation of two vector functions and two Jacobian matrices per iteration. Furthermore, we use it as a predictor to derive a general multipoint method. Convergence analysis, estimates of the computational complexity, numerical implementation, and comparisons are given to verify the applicability and validity of the proposed methods.

14.
International Journal of Computer Mathematics, 2012, 89(16): 3483–3495
In the paper [S.P. Rui and C.X. Xu, A smoothing inexact Newton method for nonlinear complementarity problems, J. Comput. Appl. Math. 233 (2010), pp. 2332–2338], the authors proposed an inexact smoothing Newton method for nonlinear complementarity problems (NCP) under the assumption that F is a uniform P function. In this paper, we present a non-monotone inexact regularized smoothing Newton method for solving the NCP which is based on the Fischer–Burmeister smoothing function. We show that the proposed algorithm is globally convergent and has a locally superlinear convergence rate under the weaker condition that F is a $P_0$ function and the solution set of the NCP is non-empty and bounded. Numerical results are also reported for the test problems, which show the effectiveness of the proposed algorithm.
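The sketch below shows only the basic reformulation behind such methods: the NCP is written as Phi_mu(x) = 0 using the smoothed Fischer-Burmeister function phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2), and a plain Newton iteration with a finite-difference Jacobian is applied while mu is driven to zero. The test map F, the mu schedule, and the absence of the paper's non-monotone, inexact, and regularization safeguards are all simplifying assumptions.

```python
# Hedged smoothing-Newton sketch for NCP(F): x >= 0, F(x) >= 0, x.F(x) = 0.
import numpy as np

def F(x):                                           # a small monotone test map (assumed)
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([-3.0, 1.0])
    return A @ x + b

def phi_mu(a, b, mu):                               # smoothed Fischer-Burmeister function
    return a + b - np.sqrt(a**2 + b**2 + 2.0 * mu**2)

def Phi(x, mu):
    return phi_mu(x, F(x), mu)

def jac(fun, x, eps=1e-7):                          # forward-difference Jacobian
    n = x.size
    J = np.zeros((n, n))
    f0 = fun(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (fun(xp) - f0) / eps
    return J

x, mu = np.ones(2), 1.0
for _ in range(50):
    r = Phi(x, mu)
    if np.linalg.norm(r) < 1e-10 and mu < 1e-10:
        break
    x = x - np.linalg.solve(jac(lambda z: Phi(z, mu), x), r)   # Newton step
    mu *= 0.2                                       # drive the smoothing parameter to zero
print("x =", x, "  F(x) =", F(x), "  complementarity:", x @ F(x))
```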

15.
This paper discusses the basis for an efficient technique which transforms a general constrained nonlinear programming problem into a single unconstrained problem. The theoretical considerations are first presented. This is followed by a development of the algorithm. Lastly, an illustrative example is given to demonstrate the methodology.
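The transformation itself is not spelled out in the abstract, so the sketch below only illustrates the general idea with a standard quadratic-penalty construction: fold the constraints into a single unconstrained objective and hand it to an unconstrained solver while the penalty weight is increased. The objective, constraints, and penalty schedule are illustrative assumptions, not the paper's technique.

```python
# Hedged quadratic-penalty sketch: constrained NLP -> sequence of unconstrained problems.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2       # objective
g = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0                  # inequality g(x) <= 0
h = lambda x: x[0] - 2.0 * x[1]                            # equality  h(x) = 0

def unconstrained(x, rho):
    return f(x) + rho * (max(0.0, g(x)) ** 2 + h(x) ** 2)

x = np.zeros(2)
for rho in [1.0, 10.0, 100.0, 1000.0]:                     # gradually tighten the penalty
    x = minimize(lambda z: unconstrained(z, rho), x, method="BFGS").x
print(x, "g =", g(x), "h =", h(x))
```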

16.
Analogue or hybrid computer methods for solving trajectory optimization problems usually require the solution of a two-point boundary value problem. A method of solving this problem is presented which does not require a good initial approximation to ensure convergence of the iteration. Analogue computer results are given with the method applied to a double integrator plant. Two different forms of performance index are considered.

17.
18.
19.
This paper presents a new method for solving two-dimensional wave problems in infinite domains. The method yields a solution that satisfies Sommerfeld's radiation condition, as required for the correct solution of infinite domains excited only locally. The solution is obtained by iteration. The infinite domain is first truncated by introducing an artificial finite boundary (β), on which some boundary conditions are imposed. The finite computational domain in each iteration is subjected to the actual boundary conditions and to different (Dirichlet or Neumann) fictitious boundary conditions on β.

20.
This paper proposes a new method for finding the global optimum of nonlinearly constrained optimization problems. It is based on using nonlinear complementarity functions and repeatedly solving the system of nonlinear equations formed by the Kuhn-Tucker conditions while new constraints are successively added. Because the Kuhn-Tucker conditions are only necessary conditions for constrained nonlinear optimization, a solution obtained in this way is not necessarily the global optimum. For this reason, the paper presents, for the first time, a method that exploits prior knowledge of the optimization problem and successively adds constraints to narrow down the region containing the global optimum. Several simulation examples show that the proposed method and theory are effective and feasible.
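The sketch below illustrates only the first ingredient described above: writing the Kuhn-Tucker conditions of a small constrained problem as a square nonlinear system, with the complementarity conditions replaced by the Fischer-Burmeister NCP function, and handing the system to a root finder. The specific problem and the choice of the Fischer-Burmeister function are illustrative assumptions, and the paper's strategy of repeatedly adding constraints from prior knowledge to exclude non-global KKT points is not reproduced.

```python
# Hedged sketch: KKT conditions as a nonlinear system via an NCP function.
import numpy as np
from scipy.optimize import fsolve

# minimize f(x) = (x0 - 2)^2 + (x1 - 2)^2  subject to  g(x) = x0 + x1 - 2 <= 0
grad_f = lambda x: np.array([2 * (x[0] - 2.0), 2 * (x[1] - 2.0)])
g = lambda x: x[0] + x[1] - 2.0
grad_g = lambda x: np.array([1.0, 1.0])

def fb(a, b):                                   # Fischer-Burmeister: fb = 0 <=> a >= 0, b >= 0, a*b = 0
    return a + b - np.sqrt(a * a + b * b)

def kkt_system(z):
    x, lam = z[:2], z[2]
    stationarity = grad_f(x) + lam * grad_g(x)  # gradient of the Lagrangian = 0
    complementarity = fb(lam, -g(x))            # lam >= 0, -g(x) >= 0, lam * g(x) = 0
    return np.append(stationarity, complementarity)

z = fsolve(kkt_system, np.array([0.0, 0.0, 1.0]))
print("x* =", z[:2], "  lambda* =", z[2])       # expect x* = (1, 1), lambda* = 2
```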
