Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
§1. Introduction. Many large-scale scientific and engineering computing problems reduce to the solution of large sparse systems of linear equations. With the rapid development of high-performance parallel computers, it is therefore particularly important to study efficient parallel algorithms for large sparse linear systems aimed at parallel computing environments. Consider the large sparse linear system Ax = b, (1) …
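As a point of reference for the kind of solver this abstract concerns, below is a minimal serial conjugate gradient sketch for a sparse symmetric positive definite system Ax = b. The parallel algorithms the paper studies are not reproduced here, and the 1-D Poisson test matrix is an illustrative assumption.

```python
# Minimal serial CG for a sparse SPD system Ax = b (illustrative sketch only;
# the paper's parallel algorithms are not shown).
import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                      # initial residual
    p = r.copy()                       # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # exact step for the quadratic model
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # standard linear-CG beta update
        rs_old = rs_new
    return x

# Example (assumed test problem): 1-D Poisson matrix, tridiagonal and SPD.
n = 100
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = conjugate_gradient(A, b)
```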

2.
In this paper, two modified spectral conjugate gradient methods that satisfy the sufficient descent property are developed for unconstrained optimization problems. For uniformly convex problems, the first modified spectral conjugate gradient algorithm is proposed under the Wolfe line search rule; its search direction is sufficiently descent for uniformly convex functions. Furthermore, based on the Dai–Liao conjugate condition, the second spectral conjugate gradient algorithm generates a sufficiently descent direction at each iteration for general functions, and may therefore be regarded as a modified version of the Dai–Liao algorithm. Under suitable conditions, the proposed algorithms are globally convergent for uniformly convex functions and for general functions. Numerical results show that the approaches presented in this paper are feasible and efficient.
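To make the structure concrete, here is a generic spectral conjugate gradient skeleton under a Wolfe line search, with a sufficient descent safeguard. The specific spectral and conjugate parameters of the two methods in the paper are not stated in the abstract, so the theta and beta choices below are placeholder assumptions.

```python
# Generic spectral CG skeleton (sketch). The theta_k (spectral) and beta_k
# (conjugate) formulas below are placeholders, not the paper's parameters.
import numpy as np
from scipy.optimize import line_search

def spectral_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        alpha = line_search(f, grad, x, d)[0]    # Wolfe-conditions line search
        if alpha is None:                        # line search failed: fall back
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        y = g_new - g
        beta = (g_new @ y) / (d @ y)             # Hestenes-Stiefel-type (assumed)
        theta = 1.0                              # spectral parameter (assumed)
        d = -theta * g_new + beta * d
        if g_new @ d > -1e-10 * (g_new @ g_new): # enforce sufficient descent
            d = -g_new                           # restart along steepest descent
        x, g = x_new, g_new
    return x

# Example: Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(spectral_cg(f, grad, np.array([-1.2, 1.0])))
```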

3.
International Journal of Computer Mathematics, 2012, 89(10): 1924–1942
ABSTRACT

A new subspace minimization conjugate gradient method based on a tensor model is proposed and analysed. If the objective function is close to a quadratic, we construct a quadratic approximation model in a two-dimensional subspace to generate the search direction; otherwise, we construct a tensor model. It is remarkable that the search direction satisfies the sufficient descent property. We prove the global convergence of the proposed method under mild assumptions. Numerical comparisons with the well-known CGOPT and CG_DESCENT codes show that the proposed algorithm is very promising.

4.
This paper proposes a nonmonotone scaled conjugate gradient algorithm for solving large-scale unconstrained optimization problems, which combines the idea of the scaled memoryless Broyden–Fletcher–Goldfarb–Shanno preconditioned conjugate gradient method with a nonmonotone technique. An attractive property of the proposed method is that the search direction always provides a sufficient descent step at each iteration, independently of the line search used. Under appropriate assumptions, the method is proven to possess global convergence for nonconvex smooth functions and R-linear convergence for strongly convex functions. Preliminary numerical results and related comparisons show the efficiency of the proposed method in practical computation.

5.
The conjugate gradient method is an effective method for large-scale unconstrained optimization problems. Recent research has proposed conjugate gradient methods based on secant conditions to establish fast convergence. However, these methods do not always generate a descent search direction. In contrast, Y. Narushima, H. Yabe, and J.A. Ford [A three-term conjugate gradient method with sufficient descent property for unconstrained optimization, SIAM J. Optim. 21 (2011), pp. 212–230] proposed a three-term conjugate gradient method that always satisfies the sufficient descent condition. This paper makes use of both ideas to propose descent three-term conjugate gradient methods based on particular secant conditions, and then shows their global convergence properties. Finally, numerical results are given.
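For orientation, a generic three-term direction of the kind discussed above can be written as follows; this is a sketch of the common template, and the particular coefficients and third term that the paper derives from its secant conditions are not given in the abstract:

\[
d_k = -g_k + \beta_k d_{k-1} + \gamma_k y_{k-1},
\qquad
g_k^{\top} d_k \le -c\,\|g_k\|^2,\quad c > 0,
\]

where \(y_{k-1} = g_k - g_{k-1}\); the inequality is the sufficient descent condition that such methods enforce independently of the line search.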

6.
We present a new approach to the construction of Domain Decomposition (DD) preconditioners for the conjugate gradient method applied to the solution of symmetric and positive definite finite element equations. The DD technique is based on a non-overlapping decomposition of the domain Ω into p subdomains, which are later mapped onto the p processors of a MIMD computer. The DD preconditioner derived contains three block matrices which must be specified for the specific problem considered. One of the matrices is used for the transformation of the nodal finite element basis into the approximate discrete harmonic basis. The other two matrices are block preconditioners for the Dirichlet problems arising on the subdomains and for a modified Schur complement defined over all nodes on the coupling boundaries between the subdomains. The relative spectral condition number is estimated. Relations to the additive Schwarz method are discussed. In the second part of this paper, we will apply these results to two-dimensional, symmetric, second-order, elliptic boundary value problems and present numerical results obtained on a transputer network.

7.
We have recently developed a new conjugate gradient type method, the Generalized Polak-Ribière (GPR) method, for unconstrained minimization. It is based on search directions that are parallel to the Newton direction of the restriction of the objective function f on the two-dimensional subspace span{−g, p}, with p a suitable direction in span{−g, s⁻}, where g and s⁻ are the current gradient and the previous search direction, respectively. The new approach proved to be considerably more efficient than the original Polak-Ribière method.

In this paper, various implementations of the GPR method are compared with a currently available standard NAG software routine and also with the limited-memory algorithms of Nocedal, Buckley–LeNir, and Shanno. The results demonstrate the general effectiveness of the new algorithm. We also give a very brief discussion of extensions of the GPR method that generate search directions parallel to Newton directions in subspaces of dimension greater than two.

8.
Objective: Reconstruction algorithms are one of the key problems in compressed sensing theory. To reduce the reconstruction time of directional pursuit algorithms in compressed sensing while maintaining relatively high reconstruction accuracy, a nonmonotone memory gradient pursuit (MGP) algorithm for signal reconstruction is proposed. Methods: Built on the directional pursuit framework, the algorithm adopts a regularized orthogonal matching strategy for fast and effective atom selection, determines the step size over the selected atom set by a nonmonotone line search criterion, and computes the update direction with a memory gradient algorithm, thereby obtaining an estimate of the sparse signal. Results: The algorithm fully exploits the fast and stable global convergence of the memory gradient method under the Armijo line search to avoid converging to local optima and to improve convergence efficiency. The proposed MGP algorithm runs about 30% faster than approximate conjugate gradient pursuit and can accurately reconstruct one-dimensional signals and two-dimensional images. Conclusion: Experimental results show that the algorithm balances efficiency and reconstruction accuracy, effectively improves signal reconstruction performance, and outperforms comparable reconstruction algorithms under the same test conditions.

9.
Conjugate gradient methods are a class of important methods for unconstrained optimization problems, especially when the dimension is large. In this paper, we study a class of modified conjugate gradient methods based on the famous LS conjugate gradient method, which produces a sufficient descent direction at each iteration and converges globally provided that the line search satisfies the strong Wolfe condition. At the same time, a new specific nonlinear conjugate gradient method is constructed. Our numerical results show that the new method is very efficient on the given test problems in comparison with the famous LS method, the PRP method, and the CG-DESCENT method.

10.
The conjugate gradient (CG) method is one of the most popular methods for solving large-scale unconstrained optimization problems. In this paper, a new modified version of the CG formula introduced by Polak, Ribière, and Polyak is proposed for problems that are bounded below and have a Lipschitz-continuous gradient. The new parameter provides global convergence properties when the strong Wolfe-Powell (SWP) line search or the weak Wolfe-Powell (WWP) line search is employed. A proof of a sufficient descent condition is provided for the SWP line search. Numerical comparisons between the proposed parameter and other recent CG modifications are made on a set of standard unconstrained optimization problems. The numerical results demonstrate the efficiency of the proposed CG parameter compared with the other CG parameters.
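As background, the classical Polak–Ribière–Polyak parameter that the paper modifies is

\[
\beta_k^{\mathrm{PRP}} = \frac{g_k^{\top}(g_k - g_{k-1})}{\|g_{k-1}\|^{2}},
\qquad
d_k = -g_k + \beta_k^{\mathrm{PRP}}\, d_{k-1};
\]

the modified parameter itself is not reproduced in this abstract, and the sufficient descent condition proved for the SWP line search takes the usual form \(g_k^{\top} d_k \le -c\,\|g_k\|^2\) for some \(c > 0\).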

11.
International Journal of Computer Mathematics, 2012, 89(9): 1787–1798
ABSTRACT

Applying the Powell symmetrical technique to the Liu–Storey conjugate gradient method, a partially symmetrical Liu–Storey conjugate gradient method is proposed and extended to solve nonlinear monotone equations with convex constraints; the method satisfies the sufficient descent condition without any line search. Using suitable line searches, global convergence is proved under the sole assumption that the equations are Lipschitz continuous. Moreover, we prove the R-linear convergence rate of the proposed method under an additional assumption. Finally, the performance of the proposed method is shown by numerical experiments on the given test problems, in comparison with one existing method.

12.
We present a Domain Decomposition non-iterative solver for the Poisson equation in a 3-D rectangular box. The solution domain is divided into mostly parallelepiped subdomains. In each subdomain a particular solution of the non-homogeneous equation is first computed by a fast spectral method. This method is based on the application of the discrete Fourier transform accompanied by a subtraction technique. For high accuracy the subdomain boundary conditions must be compatible with the specified inhomogeneous right-hand side at the edges of all the interfaces. In the following steps the partial solutions are hierarchically matched. At each step pairs of adjacent subdomains are merged into larger units. In this paper we present the matching algorithm for two boxes, which is a basis of the domain decomposition scheme. The hierarchical approach is convenient for parallelization and minimizes the global communication. The algorithm requires O(N³ log N) operations, where N is the number of grid points in each direction.
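As a single-box illustration of the spectral step this method builds on, the following sketch computes a solution of the Poisson equation with the discrete Fourier transform. Periodic boundary conditions are assumed here for simplicity; the paper's subtraction technique and hierarchical matching of subdomain solutions are not shown.

```python
# FFT-based Poisson solve on one periodic box (illustrative sketch; the paper
# treats non-periodic parallelepiped subdomains with a subtraction technique).
import numpy as np

def poisson_fft_periodic(f, L=1.0):
    """Solve -lap(u) = f on a periodic cube of side L, for zero-mean f."""
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    f_hat = np.fft.fftn(f)
    u_hat = np.zeros_like(f_hat)
    nz = k2 != 0                  # skip the zero mode (u is defined up to a constant)
    u_hat[nz] = f_hat[nz] / k2[nz]
    return np.real(np.fft.ifftn(u_hat))

# Example: random zero-mean right-hand side on a 32^3 grid.
f = np.random.default_rng(1).standard_normal((32, 32, 32))
f -= f.mean()
u = poisson_fft_periodic(f)
```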

13.
14.
X.-Q. Jin, Y.-M. Wei, H.-S. Tam, Calcolo, 2005, 42(2): 105–113
Abstract Linear systems with M-matrices occur in a wide variety of areas, including numerical partial differential equations, input-output production and growth models in economics, linear complementarity problems in operations research, and Markov chains in stochastic analysis. In this paper, we propose a new preconditioner for solving a system with a symmetric positive definite M-matrix by the preconditioned conjugate gradient (PCG) method. We show that our preconditioner increases the convergence rate of the PCG method and reduces the operation cost. Numerical results are given.
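For context, a generic PCG loop is sketched below, showing where a preconditioner enters the iteration. The paper's new M-matrix preconditioner is not reproduced; a simple Jacobi (diagonal) preconditioner and test matrix stand in as assumptions.

```python
# Generic PCG loop (sketch). M_inv applies the preconditioner M^{-1} ~ A^{-1};
# the Jacobi choice below is an assumed stand-in, not the paper's preconditioner.
import numpy as np
import scipy.sparse as sp

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)                   # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # preconditioned CG update
        rz = rz_new
    return x

# Example: SPD M-matrix (diagonally dominant tridiagonal) with Jacobi M_inv.
n = 200
A = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csr")
d = A.diagonal()
x = pcg(A, np.ones(n), lambda r: r / d)
```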

15.
In this paper we develop a new procedure for constructing a conjugate gradient direction equation. The new equation is a linear combination of two orthogonal vectors, one of which is the negative gradient. This procedure reduces to the method of Polak-Ribière whenever the line search is perfectly accurate. Otherwise, as reflected by our computational results, the method is more effective than any other conjugate gradient algorithm we have tested.

16.
We study block conjugate gradient methods in the context of continuation methods for bifurcation problems. By exploiting symmetry in certain semilinear elliptic differential equations, we can decompose the problems into small ones and reduce computational cost. On the other hand, the associated centered difference discretization matrices on the subdomains are nonsymmetric. We symmetrize them by using simple similarity transformations and discuss some basic properties concerning the discretization matrices. These properties allow the discrete pure-mode solution paths branching from a multiple bifurcation point (0, λ_{m,n}) of the centered difference analogue of the original problem to be represented by the solution path branching from the first simple bifurcation point (0, μ_{1,1}) of the counterpart of the reduced problem. Thus, the structure of a multiple bifurcation is preserved in discretization, while its treatment is reduced to those for simple bifurcations of problems on subdomains. In particular, we can adapt the continuation-Lanczos algorithm proposed in [1] to trace simple solution paths. Sample numerical results are reported.

17.
In adaptive beamforming, the conjugate gradient method is a common approach to solving the underlying optimization problem, while the steepest descent method seeks the optimal weight vector recursively without requiring matrix inversion. This paper combines the steepest descent method with the conjugate gradient method to construct a hybrid optimization algorithm. At each update iteration, the method uses the negative-gradient descent search direction with an optimal adaptive step size; this both improves the convergence speed of the conjugate gradient algorithm and overcomes the slow convergence of steepest descent as the eigenvalue spread of the correlation matrix increases, giving fast convergence at low computational cost. Computer simulations of a digital beamforming system with a five-element uniform linear array compare the method with the traditional LMS algorithm in terms of beam pattern, error convergence, and optimal weights; the results demonstrate the feasibility and effectiveness of the method.
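As a baseline for the comparison mentioned above, here is a minimal LMS beamformer for a five-element uniform linear array. The hybrid steepest descent/conjugate gradient scheme itself is not reproduced, and the signal model, noise level, and step size are illustrative assumptions.

```python
# Baseline complex LMS beamformer for a 5-element ULA (illustrative sketch;
# signal model, noise level, and step size mu are assumptions).
import numpy as np

rng = np.random.default_rng(0)
M, n_snap, mu = 5, 2000, 0.01          # elements, snapshots, LMS step size
theta_s = np.deg2rad(10)               # desired signal direction
a = np.exp(-1j * np.pi * np.arange(M) * np.sin(theta_s))  # steering vector, lambda/2 spacing

s = np.sign(rng.standard_normal(n_snap))             # BPSK reference signal
X = np.outer(a, s) + 0.1 * (rng.standard_normal((M, n_snap))
                            + 1j * rng.standard_normal((M, n_snap)))

w = np.zeros(M, dtype=complex)
for k in range(n_snap):
    y = w.conj() @ X[:, k]             # array output, y = w^H x
    e = s[k] - y                       # error against the reference
    w += mu * X[:, k] * np.conj(e)     # LMS weight update
```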

18.
In this paper, we propose an efficiently preconditioned Newton method for the computation of the leftmost eigenpairs of large sparse symmetric positive definite matrices. A sequence of preconditioners based on the BFGS update formula is proposed for the preconditioned conjugate gradient solution of the linearized Newton system for Au = q(u)u, where q(u) is the Rayleigh quotient. We give theoretical evidence that the sequence of preconditioned Jacobians remains close to the identity matrix if the initial preconditioned Jacobian is so. Numerical results on matrices arising from various realistic problems with sizes up to one million unknowns demonstrate the efficiency of the proposed algorithm, which proves competitive with the Jacobi–Davidson method on all the test problems.

19.
A.E. Kostopoulos, T.N. Grapsa, Neurocomputing, 2009, 72(13-15): 3000
This article presents some efficient training algorithms based on conjugate gradient optimization methods. In addition to the existing conjugate gradient training algorithms, we introduce Perry's conjugate gradient method as a training algorithm [A. Perry, A modified conjugate gradient algorithm, Operations Research 26 (1978) 26–43]. Perry's method has been proven to be a very efficient method in the context of unconstrained optimization, but it has never been used in MLP training. Furthermore, a new class of conjugate gradient (CG) methods is proposed, called self-scaled CG methods, which are derived from the principles of the Hestenes–Stiefel, Fletcher–Reeves, Polak–Ribière, and Perry methods. This class is based on the spectral scaling parameter introduced in [J. Barzilai, J.M. Borwein, Two point step size gradient methods, IMA Journal of Numerical Analysis 8 (1988) 141–148]. The spectral scaling parameter contains second-order information without estimating the Hessian matrix. Furthermore, we incorporate into the CG training algorithms an efficient line search technique based on the Wolfe conditions and on safeguarded cubic interpolation [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87–94]. In addition, the initial learning rate parameter, fed to the line search technique, is automatically adapted at each iteration by a closed formula proposed in [D.F. Shanno, K.H. Phua, Minimization of unconstrained multivariate functions, ACM Transactions on Mathematical Software 2 (1976) 87–94; D.G. Sotiropoulos, A.E. Kostopoulos, T.N. Grapsa, A spectral version of Perry's conjugate gradient method for neural network training, in: D.T. Tsahalis (Ed.), Fourth GRACM Congress on Computational Mechanics, vol. 1, 2002, pp. 172–179]. Finally, an efficient restarting procedure is employed in order to further improve the effectiveness of the CG training algorithms. Experimental results show that, in general, the new class of methods performs better, with a much lower computational cost and a higher success rate.
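For reference, the spectral scaling parameter of Barzilai and Borwein cited above has the standard form

\[
\theta_k^{\mathrm{BB}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
s_{k-1} = x_k - x_{k-1}, \quad y_{k-1} = g_k - g_{k-1},
\]

which captures second-order (curvature) information along the last step without forming the Hessian; exactly how it is combined with each CG direction in the proposed self-scaled class is detailed in the paper, not in this abstract.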

20.
The non-overlapping Domain Decomposition (DD) method is a powerful tool for coupling Finite Element (FE) and Boundary Element (BE) methods. Moreover, it provides a natural basis for constructing efficient parallel solvers. However, both the efficiency and the robustness of DD solvers depend heavily on the underlying decomposition of the domain of interest into subdomains. In this paper, we introduce the Adaptive Domain Decomposition Preprocessor ADDPre, which realizes an automatic decomposition of the computational domain into p subdomains, where p is the number of processors to be used. We discuss the codes involved, the algorithms on which they are based, and the data formats used for describing the decomposition of the problem. Numerical examples demonstrating the performance of the preprocessor are presented. Received: 20 January 1999 / Accepted: 12 May 1999
