Similar Documents
20 similar documents found.
1.
In this paper, a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for the linear least squares (LS), total least squares (TLS), and data least squares (DLS) problems can be considered modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulations.
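A minimal NumPy sketch of the row-action projection (Kaczmarz) iteration the abstract refers to, applied to an ordinary linear system; the analog-circuit realization and the TLS/DLS variants are not reproduced here, and the relaxation factor `lam` is an illustrative choice.

```python
import numpy as np

def kaczmarz(A, b, sweeps=200, lam=1.0):
    """Row-action projection (Kaczmarz) iteration for A x ~= b.

    lam is a relaxation factor in (0, 2); lam = 1 projects the
    current estimate exactly onto each row's hyperplane in turn.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            a_i = A[i]
            # Project x onto {x : a_i @ x = b[i]}.
            x += lam * (b[i] - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Consistent toy system: the iterates converge to the solution.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
print(kaczmarz(A, A @ x_true))  # ~ [1.0, -2.0]
```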

2.
A least squares based training algorithm for feedforward neural networks is presented. By decomposing each neuron of the network into a linear part and a nonlinear part, the learning error can be minimized at each neuron by applying the least squares method to solve the linear part. In all the problems investigated, the proposed algorithm achieves the required error level in a single training iteration. Compared with the conventional backpropagation algorithm and other fast training algorithms, the proposed algorithm offers a major speed-up of the training process.
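The abstract does not give the algorithm's details; the sketch below illustrates the core idea for a single output neuron with an assumed tanh activation: invert the nonlinear part on the targets, then solve the remaining linear part by least squares.

```python
import numpy as np

def solve_output_layer(H, y, eps=1e-6):
    """Fit output weights by least squares after inverting the output
    nonlinearity (tanh assumed here): solve the *linear part*
    H @ w ~= arctanh(y) instead of tanh(H @ w) ~= y."""
    t = np.arctanh(np.clip(y, -1 + eps, 1 - eps))  # invert the nonlinear part
    w, *_ = np.linalg.lstsq(H, t, rcond=None)
    return w

# Toy usage: random hidden activations H, realizable targets in (-1, 1).
rng = np.random.default_rng(0)
H = np.tanh(rng.normal(size=(200, 10)))   # hidden-layer outputs
w_true = rng.normal(size=10)
y = np.tanh(H @ w_true)
w = solve_output_layer(H, y)
print(np.max(np.abs(w - w_true)))          # ~ 0 in one pass
```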

3.
Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Although the standard RLS algorithm has an implicit weight decay term in its energy function, the decay effect diminishes linearly as the number of learning epochs increases, so it fades as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function in which the weight decay effect remains constant regardless of the number of learning epochs. The second, the input perturbation RLS (IPRLS) algorithm, is derived by requiring the prediction performance to be robust to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
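For reference, a sketch of the standard RLS recursion that both TWDRLS and IPRLS modify; the weight-decay and input-perturbation modifications themselves are not shown, and the forgetting factor `lam` is illustrative.

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.99):
    """One recursive least squares update for a linear model w @ x ~= d.

    lam < 1 is the forgetting factor; dividing P by lam discounts old
    data.  (This is only the standard recursion, not TWDRLS/IPRLS.)
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P

rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.5, 2.0])
w, P = np.zeros(3), 100.0 * np.eye(3)
for _ in range(500):
    x = rng.normal(size=3)
    d = w_true @ x + 0.01 * rng.normal()
    w, P = rls_step(w, P, x, d)
print(w)  # ~ w_true
```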

4.
A new least squares solution for obtaining asymptotically unbiased and consistent estimates of unknown parameters in noisy linear systems is presented. The proposed algorithms are in many ways more advantageous than the generalized least squares algorithm, and extensions to online and multivariable problems are easily implemented. Examples illustrate the performance of the new algorithms.

5.
This note presents some new time update formulas for certain types of lattice algorithms used in autoregressive modeling of stationary time series. The new formulas enable the computation of the autoregressive coefficients in a number of operations per time step proportional to the model order.
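The note's lattice time-update formulas are not given in the abstract. As related background only, the classic Levinson-Durbin recursion below computes AR coefficients from autocovariances; it is a standard textbook algorithm, not the note's method.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion: AR coefficients from autocovariances
    r[0..order].  Returns (a, e) with x[t] ~= sum_k a[k] * x[t-1-k]
    and e the final prediction error power."""
    a = np.zeros(order)
    e = r[0]
    for m in range(order):
        k = (r[m + 1] - a[:m] @ r[m:0:-1]) / e  # reflection coefficient
        a_new = a.copy()
        a_new[m] = k
        a_new[:m] = a[:m] - k * a[:m][::-1]
        a = a_new
        e *= (1.0 - k * k)
    return a, e

rho = 0.8
r = rho ** np.arange(4)            # autocovariances of an AR(1) process
print(levinson_durbin(r, 1)[0])    # ~ [0.8]
```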

6.
J. L.M. Digital Signal Processing, 2006, 16(6): 735-745
The weighted least squares (WLS) algorithm has proven useful for modern positron emission tomography (PET) scanners, where reconstructions must handle non-Poisson precorrected measurement data. In this paper, we propose a new time-recursive sequential WLS algorithm whose derivation exploits the time-varying nature of PET data acquisition. It is closely related to time-varying Kalman filtering and can be extended to an iterative form when proper a priori initializations are unavailable. The performance of sequential WLS is evaluated experimentally. The results show that it converges faster than both the multiplicative and the coordinate-based iterative WLS methods. It also produces relatively uniform estimate variances, which makes it more suitable for routine applications.
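A sketch of the generic sequential WLS (Kalman-style) measurement update that the proposed algorithm builds on; the PET-specific system model and the iterative extension are not reproduced.

```python
import numpy as np

def sequential_wls_step(x, P, H, y, R):
    """Fold one measurement block y ~= H @ x (noise covariance R)
    into a running weighted least squares estimate, Kalman style."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # gain
    x = x + K @ (y - H @ x)
    P = P - K @ H @ P
    return x, P

rng = np.random.default_rng(2)
x_true = np.array([3.0, -1.0])
x, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(200):
    H = rng.normal(size=(1, 2))
    y = H @ x_true + 0.1 * rng.normal(size=1)
    x, P = sequential_wls_step(x, P, H, y, 0.01 * np.eye(1))
print(x)  # ~ [3.0, -1.0]
```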

7.
An algorithm for determining the optimal initial weights of feedforward neural networks based on linear algebraic methods is presented. With the optimal initial weights, the initial network error is dramatically smaller. In one of the examples presented in this letter, the accuracy achieved at initialization is already sufficient for direct application. If a smaller network error is required, the network can then be trained using the backpropagation algorithm.
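The letter's exact construction is not given in the abstract; the sketch below shows one plausible linear-algebraic initialization in the same spirit, with assumed tanh hidden units: input weights scaled into the activation's active region, output weights solved by least squares.

```python
import numpy as np

def init_weights(X, y, n_hidden, rng):
    """Linear-algebra-based initialization sketch: random input weights
    scaled so pre-activations stay in tanh's active region, then output
    weights chosen by least squares."""
    W1 = rng.normal(size=(X.shape[1], n_hidden))
    W1 /= np.max(np.abs(X @ W1))          # keep |pre-activation| <= 1
    H = np.tanh(X @ W1)
    W2, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W1, W2

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = np.sin(X @ np.array([1.0, -0.5, 0.3, 0.8]))
W1, W2 = init_weights(X, y, 20, rng)
print(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))  # small initial error
```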

8.
Computational Visual Media - Recent years have witnessed the emergence of image decomposition techniques which effectively separate an image into a piecewise smooth base layer and several residual...

9.
This paper focuses on the identification problem of multivariable controlled autoregressive autoregressive (CARAR-like) systems. The corresponding identification model contains a parameter vector and a parameter matrix, and thus the conventional least squares methods cannot be applied to directly estimate the parameters of the systems. By using the hierarchical identification principle, this paper presents a hierarchical generalized least squares algorithm and a filtering based hierarchical least squares algorithm for the multivariable CARAR-like systems. The simulation results show that the two hierarchical least squares algorithms are effective.

10.
The quaternionic least squares (QLS) problem is a method of solving overdetermined sets of quaternion linear equations AX ≈ B that is appropriate when there is error in the matrix B. In this paper, by means of the complex representation of a quaternion matrix, we introduce a notion of norm for quaternion matrices, discuss singular values and generalized inverses of a quaternion matrix, study the QLS problem, and derive two algebraic methods for finding solutions of the QLS problem in quaternionic quantum theory.
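A sketch of the complex-representation device the paper builds on: a quaternion matrix Q = Q1 + Q2·j (Q1, Q2 complex) maps to a 2m × 2n complex matrix that respects quaternion multiplication, so the quaternion LS problem reduces to an ordinary complex one. The paper's two specific algebraic methods are not reproduced.

```python
import numpy as np

def chi(Q1, Q2):
    """Complex representation of the quaternion matrix Q = Q1 + Q2*j."""
    return np.block([[Q1, Q2], [-Q2.conj(), Q1.conj()]])

def qls(A1, A2, B1, B2):
    """Least squares solution of the quaternion system A X ~= B via the
    complex representation: solve chi(A) @ [X1; -conj(X2)] ~= [B1; -conj(B2)]."""
    rhs = np.vstack([B1, -B2.conj()])
    col, *_ = np.linalg.lstsq(chi(A1, A2), rhs, rcond=None)
    n = A1.shape[1]
    return col[:n], -col[n:].conj()   # X1, X2

rng = np.random.default_rng(4)
A1 = rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))
A2 = rng.normal(size=(6, 3)) + 1j * rng.normal(size=(6, 3))
X1, X2 = rng.normal(size=(3, 2)) + 0j, rng.normal(size=(3, 2)) + 0j
C = chi(A1, A2) @ chi(X1, X2)         # complex representation of B = A X
B1, B2 = C[:6, :2], C[:6, 2:]
print(np.allclose(qls(A1, A2, B1, B2)[0], X1))  # True
```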

11.
C. Corradi, L. Stefanini. Calcolo, 1978, 15(3): 317-330
Nonlinear least squares problems frequently arise in which the fitting function can be written as a linear combination of functions involving further parameters in a nonlinear manner. This paper outlines an efficient implementation of an iterative procedure, originally developed by Golub and Pereyra and subsequently modified by various authors, which takes advantage of this linear-nonlinear structure, and investigates its performance on various test problems in comparison with the standard Gauss-Newton and Gauss-Newton-Marquardt schemes. A preliminary version of this note was presented at the CNR-GNIM meeting held in Florence, September 1976.
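A sketch of the variable projection idea for a separable exponential model: the linear coefficients are eliminated by an inner least squares solve and only the nonlinear parameters are optimized. Unlike Golub-Pereyra, this toy version lets SciPy difference the projected residual numerically instead of using the analytic Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares

# Separable model: y ~= c1*exp(-a1*t) + c2*exp(-a2*t); the linear
# coefficients c are eliminated by an inner least squares solve.
def phi(alpha, t):
    return np.exp(-np.outer(t, alpha))            # columns exp(-a_k t)

def varpro_residual(alpha, t, y):
    P = phi(alpha, t)
    c, *_ = np.linalg.lstsq(P, y, rcond=None)     # inner linear solve
    return P @ c - y                              # projected residual

t = np.linspace(0, 4, 80)
y = 2.0 * np.exp(-0.5 * t) - 1.0 * np.exp(-2.0 * t)
res = least_squares(varpro_residual, x0=[0.3, 1.0], args=(t, y))
print(res.x)   # ~ {0.5, 2.0} (possibly permuted)
```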

12.
John B. Moore. Automatica, 1978, 14(5): 505-509
In this paper, almost sure convergence results are derived for least squares identification algorithms. The convergence conditions, expressed in terms of the measurable signal-model states and derived for asymptotically stable signal models and possibly nonstationary processes, are in essence the same as those previously given, but are derived more directly. Strong consistency results are derived for signal models with unstable modes, and exponential rates of convergence to the unstable modes are demonstrated. These convergence results are stronger than earlier ones, in which only weak consistency conditions were given, and they place fewer restrictions on the noise disturbances than earlier theories. The derivations appeal to martingale convergence theorems and the Toeplitz lemma.

13.
14.
Orthogonal least squares learning algorithm for radial basis function networks
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
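A compact sketch of the greedy orthogonal least squares selection described above: candidate regressors are progressively Gram-Schmidt-deflated, and each step keeps the one with the largest error-reduction ratio. The Gaussian width and the choice of every data point as a candidate centre in the usage example are illustrative.

```python
import numpy as np

def ols_select_centers(Phi, d, n_select):
    """Greedy orthogonal least squares selection of RBF centres: each
    step picks the candidate column of Phi whose part orthogonal to the
    already-chosen columns explains most of the desired output d."""
    r = d.astype(float).copy()
    W = Phi.astype(float).copy()   # candidate regressors, deflated in place
    selected = []
    for _ in range(n_select):
        err = (W.T @ r) ** 2 / (np.sum(W * W, axis=0) * (d @ d) + 1e-12)
        err[selected] = -np.inf    # never re-pick a chosen centre
        j = int(np.argmax(err))
        selected.append(j)
        q = W[:, j] / np.linalg.norm(W[:, j])
        r -= (q @ r) * q           # remove the explained part of the target
        W -= np.outer(q, q @ W)    # Gram-Schmidt deflate all candidates
    return selected

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(200, 1))
d = np.sin(X[:, 0])
centres = X[:, 0]                                  # every point is a candidate
Phi = np.exp(-(X - centres[None, :]) ** 2)         # Gaussian responses, width 1
idx = ols_select_centers(Phi, d, n_select=8)
w, *_ = np.linalg.lstsq(Phi[:, idx], d, rcond=None)
print(np.mean((Phi[:, idx] @ w - d) ** 2))         # small fit error
```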

15.
This paper discusses the problem of designing a classifier when prior probabilities are unknown or are not representative of the underlying data distribution. Traditional learning approaches based on the assumption that class priors are stationary lead to sub-optimal solutions if there is a mismatch between training and future (real) priors. To protect against this uncertainty, a minimax approach may be desirable. We address the problem of designing a neural-based minimax classifier and propose two different algorithms: a learning-rate scaling algorithm and a gradient-based algorithm. Experimental results show that both succeed in finding the minimax solution; the differences between common approaches to coping with this uncertainty in priors and the minimax classifier are also pointed out.

16.
In training the weights of a feedforward neural network, it is well known that the global extended Kalman filter (GEKF) algorithm has much better performance than the popular gradient descent with error backpropagation in terms of convergence and quality of solution. However, the GEKF is very computationally intensive, which has led to the development of efficient algorithms such as the multiple extended Kalman algorithm (MEKA) and the decoupled extended Kalman filter algorithm (DEKF), which are based on dimensional reduction and/or partitioning of the global problem. In this paper we present a new training algorithm, called local linearized least squares (LLLS), based on viewing the local system-identification subproblems at the neuron level as recursive linearized least squares problems. The objective function of the least squares problem for each neuron is the sum of the squares of the linearized backpropagated error signals. The new algorithm gives better convergence results than MEKA on three benchmark problems, and better results than DEKF for highly coupled applications. The performance of the LLLS algorithm approaches that of the GEKF in the experiments.

17.
Extended least squares based algorithm for training feedforward networks.
An extended least squares-based algorithm for feedforward networks is proposed. The weights connecting the last hidden layer and the output layer are first evaluated by a least squares algorithm. The weights between the input and hidden layers are then evaluated using modified gradient descent algorithms. This arrangement eliminates the stalling problem experienced by pure least squares type algorithms while maintaining their fast convergence. In the investigated problems, the total number of FLOPs required for the networks to converge using the proposed training algorithm is only 0.221%-16.0% of that using the Levenberg-Marquardt algorithm, and the number of floating point operations per iteration is only 1.517-3.521 times that of the standard backpropagation algorithm.
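A sketch of the hybrid scheme for a single-hidden-layer network with an assumed tanh activation: the last-layer weights are refit exactly by least squares every epoch while the hidden weights take a backpropagated gradient step; the hyperparameters are illustrative.

```python
import numpy as np

def train_hybrid(X, y, n_hidden, epochs=200, lr=0.05, rng=None):
    """Hybrid training sketch: hidden->output weights are refit exactly
    by least squares each epoch; input->hidden weights take a gradient step."""
    if rng is None:
        rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        W2, *_ = np.linalg.lstsq(H, y, rcond=None)  # exact LS for last layer
        e = H @ W2 - y                              # residual
        # backpropagated gradient of the squared error w.r.t. W1
        G = X.T @ (np.outer(e, W2) * (1.0 - H ** 2))
        W1 -= lr * G / len(X)
    return W1, W2

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 2))
y = np.sin(X[:, 0]) * np.cos(X[:, 1])
W1, W2 = train_hybrid(X, y, n_hidden=15, rng=rng)
print(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))    # training MSE
```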

18.
In this paper, we consider the problem of noncausal identification of nonstationary, linear stochastic systems, i.e., identification based on prerecorded input/output data. We show how several competing weighted (windowed) least squares parameter smoothers, differing in memory settings, can be combined together to yield a better and more reliable smoothing algorithm. The resulting parallel estimation scheme automatically adjusts its smoothing bandwidth to the unknown, and possibly time-varying, rate of nonstationarity of the identified system. We optimize the window shape for a certain class of parameter variations and we derive computationally attractive recursive smoothing algorithms for such an optimized case.

19.
To address the low measurement accuracy of individual sensors for a given state quantity in multi-sensor data fusion, a weighted multi-sensor data fusion algorithm based on the least squares principle is proposed. The method uses the least squares principle together with forgetting-factor information about the variances: after comparing mean square errors, it computes a weight for each sensor and then performs weighted fusion. The algorithm accounts both for the contribution of historical information and for the influence of environmental noise and new sample values, which enhances its sensitivity for environment monitoring. Compared with similar fusion methods, it achieves higher accuracy, and the simulation results intuitively demonstrate its effectiveness.
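A minimal sketch of the weighting rule implied by the abstract: weights inversely proportional to each sensor's estimated mean square error, with a forgetting factor so stale accuracy information is discounted. The function names and the factor value are illustrative.

```python
import numpy as np

def fuse(estimates, mse):
    """Weighted fusion sketch: each sensor's weight is inversely
    proportional to its estimated mean square error, so the more
    accurate sensors dominate the fused value."""
    w = 1.0 / np.asarray(mse)
    w /= w.sum()
    return w @ np.asarray(estimates), w

def update_mse(mse, innovation, lam=0.95):
    """Recursive MSE estimate with a forgetting factor, so stale
    accuracy information is gradually discounted."""
    return lam * mse + (1.0 - lam) * innovation ** 2

x, w = fuse([10.2, 9.7, 10.05], [0.40, 0.10, 0.05])
print(x, w)   # fused value dominated by the two low-MSE sensors
```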

20.
International Journal of Computer Mathematics, 2012, 89(11): 2552-2567
This paper is concerned with the minimal-norm least squares solution of general linear matrix equations, which include the well-known Lyapunov and Sylvester matrix equations as special cases. Two iterative algorithms are proposed to solve this problem. The first is based on the gradient-search principle for solving optimization problems, and the second can be regarded as its dual form. For both algorithms, necessary and sufficient conditions guaranteeing convergence are presented. The optimal step sizes that maximize the convergence rates are established in terms of the singular values of a coefficient matrix. The proposed methods are expected to be useful in many analysis and design problems in systems theory.
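A sketch of a gradient iteration of the kind the paper analyzes, for min over X of ||A X B - C||_F, with the step size bounded via the extreme singular values of A and B; the paper's dual algorithm and its exact optimal step sizes are not reproduced.

```python
import numpy as np

def grad_lsq_matrix(A, B, C, iters=5000):
    """Gradient iteration sketch for min_X ||A X B - C||_F.  A step
    size below 2 / (s_max(A)^2 * s_max(B)^2) guarantees convergence."""
    sa = np.linalg.svd(A, compute_uv=False)
    sb = np.linalg.svd(B, compute_uv=False)
    mu = 1.0 / (sa[0] ** 2 * sb[0] ** 2)        # safe step size
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        X -= mu * A.T @ (A @ X @ B - C) @ B.T   # gradient of the squared residual
    return X

rng = np.random.default_rng(8)
A, B = rng.normal(size=(5, 3)), rng.normal(size=(4, 6))
X_true = rng.normal(size=(3, 4))
X = grad_lsq_matrix(A, B, A @ X_true @ B)
print(np.max(np.abs(X - X_true)))  # small when A and B have full rank
```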
