Similar Literature
 20 similar articles were found.
1.
This paper presents two new iterative methods for solving systems of nonlinear equations and proves that both are fourth-order convergent. Numerical examples comparing several existing iterative methods with the two new ones demonstrate the effectiveness of the proposed methods.
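For context, the sketch below shows the classical Newton iteration for nonlinear systems, the standard baseline that higher-order methods of this kind are usually compared against; the paper's two fourth-order schemes are not reproduced here, and the test system is a made-up example.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Hypothetical test system: x^2 + y^2 = 4, e^x + y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, np.exp(v[0]) + v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [np.exp(v[0]), 1.0]])
print(newton_system(F, J, [1.0, -1.0]))
```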

2.
This paper investigates how to solve non-Hermitian positive definite linear systems efficiently and proposes an extrapolated generalized Hermitian and skew-Hermitian splitting (EGHSS) iteration method. First, a new nonsymmetric two-step iteration scheme is constructed from the generalized Hermitian and skew-Hermitian splitting of the coefficient matrix. The convergence of the new method is then analyzed theoretically, and a necessary and sufficient condition for convergence is given. Numerical results show that, for certain problems, the EGHSS iteration is more effective than the GHSS and EHSS iterations.
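For reference, here is a minimal sketch of the classical (unextrapolated) HSS iteration of Bai, Golub and Ng, on which GHSS-type methods build; the EGHSS variant described above is not reproduced, and the test matrix and shift parameter alpha are illustrative assumptions.

```python
import numpy as np

def hss_iteration(A, b, alpha=1.0, x0=None, tol=1e-10, max_iter=500):
    """Classical HSS iteration for Ax = b with A non-Hermitian positive definite.

    Splitting: A = H + S, H = (A + A^H)/2 (Hermitian), S = (A - A^H)/2 (skew-Hermitian).
    Half-steps:
        (alpha*I + H) x_{k+1/2} = (alpha*I - S) x_k     + b
        (alpha*I + S) x_{k+1}   = (alpha*I - H) x_{k+1/2} + b
    """
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
            break
    return x

# Small illustrative non-Hermitian positive definite system
A = np.array([[4.0, 1.0], [-1.0, 3.0]])
b = np.array([1.0, 2.0])
print(hss_iteration(A, b, alpha=2.0))
```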

3.
Waveform relaxation (WR) methods are numerical methods for computing approximate solutions of ordinary differential equations. Research on them has focused mostly on convergence, with very few reports on stability, yet an unstable numerical method is of no practical use. Borrowing the idea of absolute stability from numerical methods for ODEs, a definition of absolute stability for WR methods is proposed. The stability of the continuous basic WR method and of the discrete basic WR method based on the Θ-method is analyzed; absolute stability conditions for the continuous and discrete WR methods and a contractivity condition for the discrete WR method are given. For WR methods, the choice of the splitting function and of the numerical method used to discretize the continuous WR iteration are two fundamental issues, and the results partially reveal how the stability of WR methods depends on these two choices.
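A minimal sketch of one discrete waveform relaxation sweep for a linear test system x' = Ax with a matrix splitting A = M + N, discretized by backward Euler (the Θ = 1 case); the splitting, step size and test matrix are illustrative assumptions rather than choices made in the paper.

```python
import numpy as np

def wr_sweep(M, N, x_prev, x0, h):
    """One WR sweep: solve x' = M x_new + N x_old with backward Euler on a fixed grid.

    x_prev: previous iterate of the whole waveform, shape (steps+1, n).
    """
    steps = x_prev.shape[0] - 1
    n = len(x0)
    x_new = np.empty_like(x_prev)
    x_new[0] = x0
    I = np.eye(n)
    for k in range(steps):
        # (I - h M) x_new[k+1] = x_new[k] + h N x_prev[k+1]
        rhs = x_new[k] + h * N @ x_prev[k + 1]
        x_new[k + 1] = np.linalg.solve(I - h * M, rhs)
    return x_new

# Illustrative stiff 2x2 system, Jacobi-type splitting (M = diagonal, N = off-diagonal part)
A = np.array([[-10.0, 2.0], [1.0, -5.0]])
M, N = np.diag(np.diag(A)), A - np.diag(np.diag(A))
h, steps, x0 = 0.01, 200, np.array([1.0, 0.0])
wave = np.tile(x0, (steps + 1, 1))          # initial guess: constant waveform
for _ in range(10):                          # WR iterations (sweeps)
    wave = wr_sweep(M, N, wave, x0, h)
print(wave[-1])                              # approximation of x(2.0)
```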

4.
Motivated by using a modified HS method to improve efficiency and the DY method to guarantee global convergence, two new hybrid conjugate gradient methods for large-scale unconstrained optimization are proposed under different sets of conditions. Global convergence of both algorithms is proved under the general Wolfe line search without imposing a descent condition. Numerical experiments show the effectiveness of the proposed algorithms, which perform particularly well on some large-scale unconstrained optimization problems.
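A minimal sketch of a hybrid HS/DY conjugate gradient iteration using the well-known truncation β = max(0, min(β_HS, β_DY)); the paper's particular modified-HS rules and Wolfe line search are not reproduced, and the backtracking Armijo search, the safeguards and the Rosenbrock test function below are simplifying assumptions.

```python
import numpy as np

def hybrid_cg(f, grad, x0, max_iter=2000, tol=1e-8):
    """Hybrid HS/DY conjugate gradient with a simple Armijo backtracking line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo search (the paper assumes a general Wolfe search instead)
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        if abs(denom) < 1e-16:
            beta = 0.0
        else:
            beta = max(0.0, min((g_new @ y) / denom, (g_new @ g_new) / denom))
        d = -g_new + beta * d
        if g_new @ d >= 0:          # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative test problem: the Rosenbrock function
f = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
grad = lambda v: np.array([-2 * (1 - v[0]) - 400 * v[0] * (v[1] - v[0] ** 2),
                           200 * (v[1] - v[0] ** 2)])
print(hybrid_cg(f, grad, [-1.2, 1.0]))
```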

5.
何鹏, 陶建华. 《自动化学报》 (Acta Automatica Sinica), 2009, 35(12): 1568-1573
A new method for describing small-scale patterns in natural images is proposed. The central idea is to recast the modeling of small-scale information as a sequence of energy minimization problems in a Sobolev space over the image domain, and further as a sequence of eigenvalue problems. The multi-level structure of the small-scale patterns and the convergence of the model are analyzed mathematically. A new adaptive multi-level image representation is also proposed, with applications to image synthesis, visual perception and related tasks. Numerically, the small-scale patterns at different levels can be obtained conveniently by eigendecomposition of sparse symmetric matrices.
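Since the abstract reduces the model to eigenvalue problems for sparse symmetric matrices, the sketch below extracts the smallest eigenpairs of such a matrix with SciPy; the 1-D discrete Laplacian used here is a generic stand-in, not the operator defined in the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Illustrative sparse symmetric matrix: 1-D discrete Laplacian (not the paper's operator)
n = 1000
L = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")

# Smallest-magnitude eigenpairs via shift-invert; these would play the role of
# the lowest "energy levels" in a sequence of eigenvalue problems.
vals, vecs = eigsh(L, k=6, sigma=0, which="LM")
print(np.sort(vals))
```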

6.
This paper studies the weak convergence and weak stability of the implicit Euler method for a class of stochastic fractional differential equations. An implicit Euler scheme for the numerical solution of stochastic fractional differential equations is first constructed; it is then proved that the method is weakly stable and weakly convergent of order 1. Numerical examples at the end of the paper confirm the theoretical results.
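For orientation only, a minimal sketch of a drift-implicit Euler step for a scalar stochastic differential equation; the fractional derivative and the weak-order analysis of the paper are not reproduced, and the linear test equation below is an assumption.

```python
import numpy as np

def drift_implicit_euler(a, b, x0, T=1.0, steps=1000, paths=20000, seed=0):
    """Drift-implicit Euler for dX = a*X dt + b*X dW (linear test equation).

    Implicit in the drift: X_{n+1} = X_n + a*X_{n+1}*h + b*X_n*dW_n
                       =>  X_{n+1} = (X_n + b*X_n*dW_n) / (1 - a*h)
    """
    rng = np.random.default_rng(seed)
    h = T / steps
    X = np.full(paths, x0, dtype=float)
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(h), size=paths)
        X = (X + b * X * dW) / (1.0 - a * h)
    return X

X_T = drift_implicit_euler(a=-1.0, b=0.5, x0=1.0)
print(X_T.mean(), np.exp(-1.0))   # weak-error check: E[X_T] = exp(a*T) for this equation
```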

7.
A new class of parallel Runge–Kutta formulas is constructed for multiprocessor systems; their stability and convergence are proved and their stability region is given. Numerical examples show that the formulas can solve initial value problems for ordinary differential equations effectively.
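As a baseline, a minimal sketch of the classical fourth-order Runge–Kutta step for an initial value problem; the paper's parallel formulas for multiprocessor systems are not reproduced, and the test problem is an assumption.

```python
import numpy as np

def rk4(f, t0, y0, h, steps):
    """Classical fourth-order Runge-Kutta method for y' = f(t, y)."""
    t, y = t0, np.asarray(y0, dtype=float)
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# Illustrative IVP: y' = -2*t*y, y(0) = 1, exact solution exp(-t^2)
f = lambda t, y: -2.0 * t * y
print(rk4(f, 0.0, [1.0], 0.01, 100), np.exp(-1.0))
```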

8.
For the Riesz tempered fractional advection-diffusion equation, a new numerical method is constructed by discretizing the first-order time derivative with the implicit midpoint method, approximating the Riesz tempered space-fractional derivative with a modified second-order tempered Lubich difference operator, and discretizing the advection term with central differences. The stability and convergence of the method are established, and it achieves second-order accuracy in both space and time. Numerical experiments verify the effectiveness of the method.

9.
Based on a corrected DFP method, this paper proposes a new three-term gradient descent algorithm. The algorithm guarantees sufficient descent at every iteration and is globally convergent for general functions under the strong Wolfe line search. Numerical experiments show that it is very effective and stable on the given test problems.

10.
This paper studies the weak convergence and weak stability of a numerical method for a class of stochastic fractional differential equations with multiplicative noise. A numerical method for such equations is first constructed based on the Itô formula and the Riemann-Liouville fractional derivative; it is then proved that when the fractional order α satisfies 0 < α < 1, the method is weakly convergent of order 1 − α and weakly stable. Numerical experiments at the end of the paper confirm the theoretical results.

11.
In this paper a general class of fast learning algorithms for feedforward neural networks is introduced and described. The approach exploits the separability of each layer into linear and nonlinear blocks and consists of two steps. The first step is the descent of the error functional in the space of the outputs of the linear blocks (descent in the neuron space), which can be performed using any preferred optimization strategy. In the second step, each linear block is optimized separately using a least squares (LS) criterion. To demonstrate the effectiveness of the new approach, a detailed treatment of gradient descent in the neuron space is given. The main properties of this approach are a higher speed of convergence than methods that employ ordinary gradient descent in the weight space (backpropagation, BP), better numerical conditioning, and a lower computational cost than techniques based on the Hessian matrix. Numerical stability is assured by the use of robust LS linear system solvers operating directly on the input data of each layer. Experimental results obtained on three problems are described, confirming the effectiveness of the new method.
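A minimal sketch of the per-layer least-squares idea for the final linear block of a one-hidden-layer network: given the outputs of the nonlinear block, the linear block is fitted by ordinary LS. This is only the LS step of the approach, not the full descent in neuron space, and the toy data and layer sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (assumed, not from the paper)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# One hidden layer with fixed random weights; its outputs feed the final linear block
W_hidden = rng.normal(size=(2, 20))
b_hidden = rng.normal(size=20)
H = np.tanh(X @ W_hidden + b_hidden)               # nonlinear block outputs

# Least-squares optimization of the final linear block, as in the layer-wise LS step
H_aug = np.hstack([H, np.ones((H.shape[0], 1))])   # append bias column
w_out, *_ = np.linalg.lstsq(H_aug, y, rcond=None)

pred = H_aug @ w_out
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```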

12.
This article presents some efficient training algorithms, based on first-order, second-order, and conjugate gradient optimization methods, for a class of convolutional neural networks (CoNNs) known as shunting inhibitory convolutional neural networks. Furthermore, a new hybrid method is proposed, derived from the principles of Quickprop, Rprop, SuperSAB, and least squares (LS). Experimental results show that the new hybrid method can perform as well as the Levenberg-Marquardt (LM) algorithm, but at a much lower computational cost and with less memory storage. For the sake of comparison, the visual pattern recognition task of face/nonface discrimination is chosen as the classification problem for evaluating the performance of the training algorithms. Sixteen training algorithms are implemented for the three variants of the proposed CoNN architecture: binary-, Toeplitz- and fully connected architectures. All implemented algorithms can train the three network architectures successfully, but their convergence speeds vary markedly. In particular, the combination of LS with the new hybrid method and of LS with the LM method achieves the best convergence rates in terms of the number of training epochs. In addition, the classification accuracies of all three architectures are assessed using ten-fold cross validation. The results show that the binary- and Toeplitz-connected architectures slightly outperform the fully connected architecture: the lowest error rates across all training algorithms are 1.95% for the Toeplitz-connected, 2.10% for the binary-connected, and 2.20% for the fully connected network. In general, the modified Broyden-Fletcher-Goldfarb-Shanno (BFGS) methods, the three variants of the LM algorithm, and the new hybrid/LS method perform consistently well, achieving error rates of less than 3% averaged across all three architectures.
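A minimal sketch of a simplified Rprop-style update (sign-based, per-parameter step adaptation), one of the ingredients named above; the hybrid Quickprop/Rprop/SuperSAB/LS method itself is not reproduced, and the quadratic objective below is a toy assumption rather than a convolutional network.

```python
import numpy as np

def rprop(grad, x0, n_iter=100, d0=0.1, d_min=1e-6, d_max=1.0):
    """Simplified Rprop-style update: per-parameter step sizes adapted from gradient signs."""
    x = np.asarray(x0, dtype=float)
    step = np.full_like(x, d0)
    g_prev = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad(x)
        same_sign = g * g_prev
        step = np.where(same_sign > 0, np.minimum(step * 1.2, d_max),
               np.where(same_sign < 0, np.maximum(step * 0.5, d_min), step))
        x = x - np.sign(g) * step
        g_prev = g
    return x

# Illustrative quadratic "training" objective (not a CoNN)
H = np.array([[3.0, 0.5], [0.5, 1.0]])
grad = lambda w: H @ w - np.array([1.0, 2.0])
print(rprop(grad, [0.0, 0.0]))    # should approach the minimizer H^{-1} [1, 2]
```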

13.
An ordered-subsets least-squares (OS-LS) iterative image reconstruction algorithm
To derive a new fast iterative image reconstruction method, the ordered subsets (OS) technique is first applied to the iterative least-squares (LS) reconstruction algorithm. Reconstructions are then carried out on simulated phantom data and on real clinical positron emission tomography (PET) data, the results under different subset partitions are studied, and the influence of the choice of subsets on the quality of the OS-LS reconstructed images and on the convergence speed is analyzed and compared. The results show that the ordered-subsets least-squares iterative reconstruction algorithm (OS-LS) yields high image quality with short computation time: compared with reconstruction by the conventional LS algorithm, OS-LS accelerates convergence by a factor of about L (where L is the number of subsets), and its reconstructed image quality is also better than that of the conventional filtered back-projection (FBP) method. It can be applied to PET image reconstruction.
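A minimal sketch of the ordered-subsets structure around a least-squares (Landweber-type gradient) update: each sub-iteration uses only one block of rows of the system matrix, which is the source of the roughly L-fold acceleration mentioned above. The system matrix, step sizes and subset partition below are illustrative assumptions, not a PET model.

```python
import numpy as np

def os_ls_reconstruct(A, b, n_subsets=5, n_sweeps=500, x0=None):
    """Ordered-subsets least-squares reconstruction (Landweber-type sub-updates).

    Each sub-iteration updates x using only the rows of A belonging to one
    ordered subset, which is where the speed-up over plain LS iterations comes from.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]   # interleaved rows
    steps = [1.0 / np.linalg.norm(A[rows], 2) ** 2 for rows in subsets]
    for _ in range(n_sweeps):
        for rows, step in zip(subsets, steps):
            As, bs = A[rows], b[rows]
            x = x + step * As.T @ (bs - As @ x)     # gradient step on this subset only
    return x

# Illustrative toy "projection" problem (not real PET data)
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30))
x_true = rng.random(30)
b = A @ x_true                                      # consistent, noise-free data
x_rec = os_ls_reconstruct(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```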

14.
A family of new iteration methods that do not employ derivatives is proposed in this paper. We prove that these new methods are quadratically convergent. Their efficiency is demonstrated by numerical experiments, which show that our algorithms are comparable to the well-known methods of Newton and Steffensen. Furthermore, by combining the new method with the bisection method we construct another new high-order iteration method with nice asymptotic convergence properties of the bracket diameters (b_n − a_n).
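The proposed family itself is not reproduced here; the sketch below shows the classical derivative-free Steffensen iteration that the authors compare against, applied to an assumed test equation.

```python
def steffensen(f, x0, tol=1e-12, max_iter=100):
    """Classical Steffensen iteration: derivative-free, quadratically convergent.

    x_{k+1} = x_k - f(x_k)^2 / (f(x_k + f(x_k)) - f(x_k))
    """
    x = float(x0)
    for _ in range(max_iter):
        fx = f(x)
        denom = f(x + fx) - fx
        if abs(denom) < 1e-300:
            break
        x_new = x - fx * fx / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative equation: x^3 - 2x - 5 = 0 (root near 2.0946)
print(steffensen(lambda x: x**3 - 2*x - 5, 2.0))
```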

15.
A new method for computing numerical integrals is proposed, based on a variable-step neural network with power basis functions; the convergence of the algorithm is proved, together with a theorem and corollaries for the numerical integration. Computer simulations on typical numerical integration examples show that, compared with traditional numerical integration methods, the proposed algorithm achieves higher accuracy, faster convergence and better stability.

16.
For linear systems Ax = b whose coefficient matrix A is an H-matrix, the preconditioner I + S_α^β is introduced and, by applying elementary row transformations to the coefficient matrix, a new preconditioned Gauss-Seidel method is proposed. It is first proved that if A is an H-matrix, then (I + S_α^β)A is still an H-matrix. Sufficient conditions for the convergence of the new preconditioned Gauss-Seidel method, i.e. the conditions the parameters must satisfy to guarantee convergence, are then given in the form of theorems. It is further proved theoretically that the new preconditioned Gauss-Seidel iteration converges faster than the classical Gauss-Seidel iteration; the method generalizes the preconditioning techniques proposed in [1-2]. Finally, numerical examples illustrate the effectiveness of the new preconditioned Gauss-Seidel iteration.
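A minimal sketch of the classical Gauss-Seidel iteration on a strictly diagonally dominant (hence H-matrix) system; the preconditioner I + S_α^β from the abstract is not reproduced, so a comment merely marks where the left-preconditioning step would go, and the test system is an assumption.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Classical Gauss-Seidel iteration for Ax = b (forward sweep, updated in place)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# Illustrative strictly diagonally dominant system (an H-matrix)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

# In the paper, A and b would first be left-multiplied by the preconditioner I + S_a^b;
# here the original system is solved directly for illustration.
x, iters = gauss_seidel(A, b)
print(x, "in", iters, "iterations")
```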

17.
In conventional least squares (LS) regression for nonlinear problems, it is not easy to obtain analytical derivatives with respect to the target parameters that make up the set of normal equations. Even if the derivatives can be obtained analytically or numerically, one must take care to choose correct initial values for the iterative procedure of solving the equations, because some undesired, locally optimal solutions may also satisfy the normal equations. When genetic algorithms (GAs) are applied to nonlinear LS, normal equations are not needed, and a GA is also capable of avoiding local optima. However, the convergence of the population and the reliability of the solutions depend on the initial parameter domain, just as the method based on the normal equations depends on the choice of initial values. To overcome this disadvantage of applying GAs to nonlinear LS, we propose an adaptive domain method (ADM) in which the parameter domain can change dynamically by using several real-coded GAs with short lifetimes. Through an example problem, we demonstrate improvements in both convergence and reliability by ADM. A further merit of the proposed method is that it does not require any specialized knowledge about GAs or their tuning; therefore, nonlinear LS by ADM with GAs is accessible to general scientists for various applications in many fields.

18.
To address the limited localization accuracy in wireless sensor networks (WSNs), a least-squares/quasi-Newton localization algorithm based on differential RSSI correction is proposed. For RSSI ranging, an error correction coefficient is first obtained from the self-calibrating localization of the beacon nodes and is then applied when estimating the distances from an unknown node to the beacon nodes. For the position computation, the algorithm combines the simplicity of least-squares estimation with the fast convergence of the quasi-Newton method: the initial value obtained by least squares is refined iteratively with the quasi-Newton method to obtain the coordinates of the unknown node. Simulation experiments show that the proposed algorithm achieves high localization accuracy, improving on the traditional least-squares method by about 36%.
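A minimal sketch of the two-stage idea: a linearized least-squares trilateration fix followed by quasi-Newton (BFGS) refinement of the range residuals. The RSSI path-loss model and the differential error correction from the paper are omitted; the anchor positions and measured distances below are made up.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed beacon (anchor) positions and noisy measured distances to the unknown node
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])
rng = np.random.default_rng(0)
d_meas = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.2, len(anchors))

# Stage 1: linearized least-squares trilateration (subtract the first range equation)
A = 2 * (anchors[1:] - anchors[0])
c = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - d_meas[1:] ** 2 + d_meas[0] ** 2)
p_ls, *_ = np.linalg.lstsq(A, c, rcond=None)

# Stage 2: quasi-Newton (BFGS) refinement of the nonlinear range residuals
cost = lambda p: np.sum((np.linalg.norm(anchors - p, axis=1) - d_meas) ** 2)
p_qn = minimize(cost, p_ls, method="BFGS").x

print("LS initial estimate :", p_ls)
print("quasi-Newton refined:", p_qn)
```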

19.
Considering that the least-squares (LS) method for system identification has poor robustness and that the least absolute deviation (LAD) algorithm is hard to construct, an approximate least absolute deviation (ALAD) algorithm is proposed in this paper. The objective function of ALAD is constructed by introducing a deterministic function that approximates the absolute value function. Based on this function, the recursive equations for parameter identification are derived using a Gauss-Newton iterative algorithm without any simplification. The algorithm is simple to compute, easy to implement, and has a second-order convergence rate. Compared with the LS method, the new algorithm is more robust when disordered and peak noises exist in the measured data. Simulation results show the efficiency of the proposed method.
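The paper's exact approximating function and recursive equations are not reproduced; the sketch below uses sqrt(e^2 + delta) as one common smooth surrogate for |e| and minimizes the resulting objective by iteratively reweighted least squares for a linear-in-parameters model, with made-up identification data containing peak noise.

```python
import numpy as np

def alad_irls(Phi, y, delta=1e-6, n_iter=50):
    """Approximate least-absolute-deviation fit for y = Phi @ theta + noise.

    |e| is replaced by the smooth surrogate sqrt(e^2 + delta), and the resulting
    objective is minimized by iteratively reweighted least squares (IRLS).
    """
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)        # LS starting point
    for _ in range(n_iter):
        e = y - Phi @ theta
        w = 1.0 / np.sqrt(e ** 2 + delta)                   # weights from the surrogate
        W = np.diag(w)
        theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
    return theta

# Illustrative identification data with sparse large outliers (peak noise)
rng = np.random.default_rng(2)
Phi = rng.normal(size=(200, 3))
theta_true = np.array([1.5, -0.7, 2.0])
y = Phi @ theta_true + 0.05 * rng.normal(size=200)
y[::25] += 5.0

print("LS   :", np.linalg.lstsq(Phi, y, rcond=None)[0])
print("ALAD :", alad_irls(Phi, y))
```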

20.
R. Vulanović. Computing, 1989, 41(1-2): 97-106
A numerical method for singularly perturbed quasilinear boundary value problems without turning points is proposed: the continuous problem is transformed by introducing a special new independent variable, and finite-difference schemes are then applied. First-order convergence, uniform in the perturbation parameter, is proved in the discrete L1-norm. The numerical results also show pointwise convergence.
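The special variable transformation of the paper is not reproduced; below is a minimal first-order upwind finite-difference sketch for a model linear singularly perturbed problem on a uniform mesh (which, unlike the method of the paper, is not parameter-uniform). The equation, mesh and data are assumptions.

```python
import numpy as np

def upwind_solve(eps, n=200, b=lambda x: 1.0, f=lambda x: 1.0):
    """First-order upwind scheme for -eps*u'' + b(x)*u' = f(x), u(0) = u(1) = 0, b > 0."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.array([f(xi) for xi in x[1:-1]])
    for i in range(n - 1):
        bi = b(x[i + 1])
        A[i, i] = 2 * eps / h**2 + bi / h          # coefficient of u_i
        if i > 0:
            A[i, i - 1] = -eps / h**2 - bi / h     # coefficient of u_{i-1} (upwind)
        if i < n - 2:
            A[i, i + 1] = -eps / h**2              # coefficient of u_{i+1}
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

x, u = upwind_solve(eps=1e-3)
print(u[-6:])   # values in the boundary layer near x = 1
```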
