Similar Documents
20 similar documents found.
1.
A variable step size LMS algorithm   [cited: 14; self: 0; other: 14]
A least-mean-square (LMS) adaptive filter with a variable step size is introduced. The step size increases or decreases as the mean-square error increases or decreases, allowing the adaptive filter to track changes in the system as well as produce a small steady state error. The convergence and steady-state behavior of the algorithm are analyzed. The results reduce to well-known results when specialized to the constant-step-size case. Simulation results are presented to support the analysis and to compare the performance of the algorithm with the usual LMS algorithm and another variable-step-size algorithm. They show that its performance compares favorably with these existing algorithms.

2.
The convergence properties of an adaptive linear mean-square estimator that uses a modified LMS algorithm are established for generally dependent processes. Bounds on the mean-square error of the estimates of the filter coefficients and on the excess error of the estimate of the signal are derived for input processes which are either strong mixing or asymptotically uncorrelated. It is shown that the mean-square deviation is bounded by a constant multiple of the adaptation step size and that the same holds for the excess error of the signal estimation. The present findings extend earlier results in the literature obtained for independent and M-dependent input data.

3.
The paper provides a rigorous analysis of the behavior of adaptive filtering algorithms when the covariance matrix of the filter input is singular. The analysis is done in the context of adaptive plant identification. The considered algorithms are LMS, RLS, sign (SA), and signed regressor (SRA) algorithms. Both the signal and weight behavior of the algorithms are considered. The signal behavior is evaluated in terms of the moments of the excess output error of the filter. The weight behavior is evaluated in terms of the moments of the filter weight misalignment vector. It is found that the RLS and SRA diverge when the input covariance matrix is singular. The steady-state signal behavior of the LMS and SA can be made arbitrarily fine by using sufficiently small step sizes of the algorithms. Indeed, the long-term average of the mean square excess error of the LMS is proportional to the algorithm step size. The long-term average of the mean absolute excess error of the SA is proportional to the square root of the algorithm step size. On the other hand, the steady-state weight behavior of both the LMS and SA have biases that depend on the weight initialization. The analytical results of the paper are supported by simulations.
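For reference, the per-iteration weight updates of three of the four algorithms compared above can be sketched as follows (RLS omitted for brevity). Function names are our own; the white, nonsingular-covariance input used in the driver is an illustrative assumption.

```python
import numpy as np

# w: current weights, u: regressor, d: desired sample, mu: step size.

def lms_step(w, u, d, mu):
    e = d - w @ u
    return w + mu * e * u

def sign_step(w, u, d, mu):
    # Sign algorithm (SA): only the sign of the error drives the update.
    e = d - w @ u
    return w + mu * np.sign(e) * u

def signed_regressor_step(w, u, d, mu):
    # Signed regressor algorithm (SRA): the regressor is replaced by its sign.
    e = d - w @ u
    return w + mu * e * np.sign(u)

# Plant identification with a white (hence nonsingular-covariance) input,
# where all three algorithms converge.
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3])                      # hypothetical plant
w_lms = np.zeros(2); w_sa = np.zeros(2); w_sra = np.zeros(2)
for _ in range(4000):
    u = rng.standard_normal(2)
    d = h @ u
    w_lms = lms_step(w_lms, u, d, 0.02)
    w_sa = sign_step(w_sa, u, d, 0.002)
    w_sra = signed_regressor_step(w_sra, u, d, 0.02)
```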

4.
The leaky LMS adaptive filter can be implemented either directly or by adding random white noise to the input signal of the LMS adaptive filter. In this correspondence, we analyze and compare the mean-square performances of these two adaptive filter implementations for system identification tasks with zero mean i.i.d. input signals. Our results indicate that the performance of the direct implementation is superior to that of the random noise implementation in all respects. However, for small leakage factors, these performance differences are negligible.
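The two implementations contrasted above can be sketched as follows. The leakage factor gamma and the plant are illustrative choices; with a small gamma, both converge near the leakage-biased solution (R + gamma*I)^{-1} p rather than the Wiener solution.

```python
import numpy as np

def leaky_lms(x, d, num_taps, mu=0.01, gamma=0.01):
    # Direct leaky LMS: w <- (1 - mu*gamma) w + mu e u.
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w = (1.0 - mu * gamma) * w + mu * e * u
    return w

def noisy_lms(x, d, num_taps, mu=0.01, gamma=0.01, seed=0):
    # "Random noise" implementation: plain LMS run on the input plus white
    # noise of variance gamma; its mean behavior matches the direct form,
    # at the cost of extra gradient noise.
    rng = np.random.default_rng(seed)
    xn = x + np.sqrt(gamma) * rng.standard_normal(len(x))
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = xn[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w = w + mu * e * u
    return w

rng = np.random.default_rng(2)
h = np.array([1.0, 0.5])                       # hypothetical plant
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)]
w_direct = leaky_lms(x, d, 2)
w_noisy = noisy_lms(x, d, 2)
```

For unit-power white input the biased solution is h/(1 + gamma), so with gamma = 0.01 the bias is about 1%, illustrating why small-leakage differences are negligible.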

5.
This paper presents a statistical analysis of the least mean square (LMS) algorithm with a zero-memory scaled error function nonlinearity following the adaptive filter output. This structure models saturation effects in active noise and active vibration control systems when the acoustic transducers are driven by large amplitude signals. The problem is first defined as a nonlinear signal estimation problem and the mean-square error (MSE) performance surface is studied. Analytical expressions are obtained for the optimum weight vector and the minimum achievable MSE as functions of the saturation. These results are useful for adaptive algorithm design and evaluation. The LMS algorithm behavior with saturation is analyzed for Gaussian inputs and slow adaptation. Deterministic nonlinear recursions are obtained for the time-varying mean weight and MSE behavior. Simplified results are derived for white inputs and small step sizes. Monte Carlo simulations display excellent agreement with the theoretical predictions, even for relatively large step sizes. The new analytical results accurately predict the effect of saturation on the LMS adaptive filter behavior.
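A minimal sketch of the structure being analyzed: the filter output passes through a scaled error-function saturation before the error is formed. Using the raw saturated error in the update, the scaling of the nonlinearity, and the plant are all assumptions of this sketch; the paper derives the statistical behavior rather than prescribing an implementation.

```python
import numpy as np
from math import erf

def saturated_lms(x, d, num_taps, mu=0.01, s=4.0):
    # LMS with a zero-memory scaled error-function nonlinearity following the
    # filter output, modeling transducer saturation. The nonlinearity is
    # scaled so z ~ y for |y| << s while |z| is bounded by s.
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        y = w @ u                               # linear filter output
        z = s * erf(np.sqrt(np.pi) * y / (2 * s))  # saturated output
        e = d[n] - z
        w = w + mu * e * u
    return w

rng = np.random.default_rng(8)
h = np.array([0.7, -0.2])                       # hypothetical plant
x = rng.standard_normal(8000)
d = np.convolve(x, h)[:len(x)]
w = saturated_lms(x, d, 2)
```

With a saturation level well above the typical output amplitude, the filter converges close to the linear solution; tightening s makes the saturation bias visible.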

6.
Normalized data nonlinearities for LMS adaptation   [cited: 12; self: 0; other: 12]
Properly designed nonlinearly-modified LMS algorithms, in which various quantities in the stochastic gradient estimate are operated upon by memoryless nonlinearities, have been shown to perform better than the LMS algorithm in system identification-type problems. The authors investigate one such algorithm, given by W_{k+1} = W_k + μ(d_k − W_k^T X_k) X_k f(X_k), in which the function f(X_k) is a scalar function of the sum of the squares of the N elements of the input data vector X_k. This form of algorithm generalizes the so-called normalized LMS (NLMS) algorithm. They evaluate the expected behavior of this nonlinear algorithm for both independent input vectors and correlated Gaussian input vectors assuming the system identification model. By comparing the nonlinear algorithm's behavior with that of the LMS algorithm, they then provide a method of optimizing the form of the nonlinearity for the given input statistics. In the independent input case, they show that the optimum nonlinearity is a single-parameter version of the NLMS algorithm with an additional constant in the denominator and show that this algorithm achieves a lower excess mean-square error (MSE) than the LMS algorithm with an equivalent convergence rate. Additionally, they examine the optimum step size sequence for the optimum nonlinear algorithm and show that the resulting algorithm performs better and is less complex to implement than the optimum step size algorithm derived for another form of the NLMS algorithm. Simulations verify the theory and the predicted performance improvements of the optimum normalized data nonlinearity algorithm.
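The "NLMS with an additional constant in the denominator" form can be sketched as below, with f(X_k) = 1/(eps + ||X_k||^2). The value of eps here is an arbitrary illustration, not the optimized constant derived in the paper.

```python
import numpy as np

def normalized_data_lms(x, d, num_taps, mu=0.5, eps=1e-2):
    # LMS with the data nonlinearity f(X_k) = 1 / (eps + ||X_k||^2):
    # a single-parameter NLMS-with-a-constant-in-the-denominator sketch.
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w = w + mu * e * u / (eps + u @ u)     # normalized update
    return w

rng = np.random.default_rng(6)
h = np.array([0.9, -0.5, 0.25])                # hypothetical plant
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w = normalized_data_lms(x, d, 3)
```

The constant eps also guards against division by a near-zero regressor norm, which is why some constant is needed in practice regardless of optimality.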

7.
Stochastic gradient adaptation under general error criteria   [cited: 2; self: 0; other: 2]
Examines a family of adaptive filter algorithms of the form W_{k+1} = W_k + μ f(d_k − W_k^T X_k) X_k, in which f(·) is a memoryless odd-symmetric nonlinearity acting upon the error. Such algorithms are a generalization of the least-mean-square (LMS) adaptive filtering algorithm for even-symmetric error criteria. For this algorithm family, the authors derive general expressions for the mean and mean-square convergence of the filter coefficients for both arbitrary stochastic input data and Gaussian input data. They then provide methods for optimizing the nonlinearity to minimize the algorithm misadjustment for a given convergence rate. Using the calculus of variations, it is shown that the optimum nonlinearity to minimize misadjustment near convergence under slow adaptation conditions is independent of the statistics of the input data and can be expressed as −p′(x)/p(x), where p(x) is the probability density function of the uncorrelated plant noise. For faster adaptation under the white Gaussian input and noise assumptions, the nonlinearity is shown to be x/(1 + μλx²/σ_k²), where λ is the input signal power and σ_k² is the conditional error power. Thus, the optimum stochastic gradient error criterion for Gaussian noise is not mean-square. It is shown that the equations governing the convergence of the nonlinear algorithm are exactly those which describe the behavior of the optimum scalar data nonlinear adaptive algorithm for white Gaussian input. Simulations verify the results for a host of noise interferences and indicate the improvement using non-mean-square error criteria.
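The algorithm family W_{k+1} = W_k + μ f(e_k) X_k can be sketched generically, taking any error nonlinearity f. As the abstract notes, −p′(x)/p(x) for Laplacian plant noise is proportional to sign(x), so the sign-error algorithm is the slow-adaptation optimum there; plain LMS corresponds to f(e) = e. The plant and noise level below are illustrative assumptions.

```python
import numpy as np

def error_nonlinearity_lms(x, d, num_taps, f, mu=0.005):
    # Stochastic gradient adaptation with an odd-symmetric error
    # nonlinearity f: W_{k+1} = W_k + mu * f(e_k) * X_k.
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        w = w + mu * f(d[n] - w @ u) * u
    return w

rng = np.random.default_rng(3)
h = np.array([0.6, 0.3, -0.1])                  # hypothetical plant
x = rng.standard_normal(20000)
noise = rng.laplace(scale=0.05, size=len(x))    # Laplacian plant noise
d = np.convolve(x, h)[:len(x)] + noise
w_sign = error_nonlinearity_lms(x, d, 3, np.sign, mu=0.002)  # f = -p'/p shape
w_lms = error_nonlinearity_lms(x, d, 3, lambda e: e, mu=0.01)
```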

8.
It is shown that the normalized least mean square (NLMS) algorithm is a potentially faster converging algorithm than the LMS algorithm when the design of the adaptive filter is based on the usually quite limited knowledge of its input signal statistics. A very simple model for the input signal vectors that greatly simplifies analysis of the convergence behavior of the LMS and NLMS algorithms is proposed. Using this model, answers can be obtained to questions for which no answers are currently available using other (perhaps more realistic) models. Examples are given to illustrate that, even quantitatively, the answers obtained can be good approximations. It is emphasized that the convergence of the NLMS algorithm can be sped up significantly by employing a time-varying step size. The optimal step-size sequence can be specified a priori for the case of a white input signal with arbitrary distribution.

9.
An efficient scheme is presented for implementing the LMS-based transversal adaptive filter in block floating-point (BFP) format, which permits processing of data over a wide dynamic range, at temporal and hardware complexities significantly less than that of a floating-point processor. Appropriate BFP formats for both the data and the filter coefficients are adopted, taking care so that they remain invariant to interblock transition and weight updating operation, respectively. Care is also taken to prevent overflow during filtering, as well as weight updating processes jointly, by using a dynamic scaling of the data and a slightly reduced range for the step size, with the latter having only marginal effect on convergence speed. Extensions of the proposed scheme to the sign-sign LMS and the signed regressor LMS algorithms are taken up next, in order to reduce the processing time further. Finally, a roundoff error analysis of the proposed scheme under finite precision is carried out. It is shown that in the steady state, the quantization noise component in the output mean-square error depends on the step size both linearly and inversely. An optimum step size that minimizes this error is also derived.
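To make the format concrete, block floating-point stores one shared exponent per block of samples, with each sample held as a fixed-point mantissa. This is a generic illustration of BFP quantization, not the paper's specific scheme; the mantissa width is an arbitrary choice.

```python
import numpy as np

def bfp_quantize(block, mantissa_bits=12):
    # Block floating-point: a shared exponent chosen from the block's largest
    # magnitude; each sample becomes a rounded fixed-point mantissa.
    scale = np.max(np.abs(block))
    if scale == 0.0:
        return np.zeros_like(block), 0
    exp = int(np.ceil(np.log2(scale)))          # shared block exponent
    q = 2.0 ** (exp - mantissa_bits)            # quantization step
    mantissas = np.round(block / q)             # fixed-point mantissas
    return mantissas * q, exp

rng = np.random.default_rng(4)
blk = rng.standard_normal(64)
blk_q, e = bfp_quantize(blk)
max_err = np.max(np.abs(blk_q - blk))           # bounded by q/2
```

The appeal for LMS hardware is that within a block all arithmetic is fixed-point, while the shared exponent preserves a wide dynamic range across blocks.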

10.
Nonlinear effects in LMS adaptive equalizers   [cited: 1; self: 0; other: 1]
An adaptive transversal equalizer based on the least-mean-square (LMS) algorithm, operating in an environment with a temporally correlated interference, can exhibit better steady-state mean-square-error (MSE) performance than the corresponding Wiener filter. This phenomenon is a result of the nonlinear nature of the LMS algorithm and is obscured by traditional analysis approaches that utilize the independence assumption (current filter weight vector assumed to be statistically independent of the current data vector). To analyze this equalizer problem, we use a transfer function approach to develop approximate analytical expressions of the LMS MSE for sinusoidal and autoregressive interference processes. We demonstrate that the degree to which LMS may outperform the corresponding Wiener filter is dependent on system parameters such as signal-to-noise ratio (SNR), signal-to-interference ratio (SIR), equalizer length, and the step-size parameter.

11.
This paper studies the transient behavior of an adaptive near-far resistant receiver for direct-sequence (DS) code-division multiple-access (CDMA) known as the minimum mean-squared error (MMSE) receiver. This receiver structure is known to be near-far resistant and yet does not require the large amounts of side information that are typically required for other near-far resistant receivers. In fact, this receiver only requires code timing on the one desired signal. The MMSE receiver uses an adaptive filter which is operated in a manner similar to adaptive equalizers. Initially there is a training period where the filter locks onto the signal that is sending a known training sequence. After training, the system can then switch to a decision-directed mode and send actual data. This work examines the length of the training period needed as a function of the number of interfering users and the severity of the near-far problem. A standard least mean-square (LMS) algorithm is used to adapt the filter and so the trade-off between convergence and excess mean-squared error is studied. It is found that in almost all cases a step size near 1.0/(total input power) gives the best speed of convergence with a reasonable excess mean-squared error. Also, it is shown that the MMSE receiver can tolerate a 30-40 dB near-far problem without excessively long convergence time.

12.
Partial response (PR) equalization employing the linearly constrained least-mean-square (LCLMS) adaptive algorithm is widely used for jointly designing equalizer and PR target in recording channels. However, there is no literature on its convergence analysis. Further, existing analyses of the least-mean-square (LMS) algorithm assume that the input signals are jointly Gaussian, an assumption that is invalid for PR equalization with binary input. In this paper, we present a novel method to analyze the convergence of the LCLMS algorithm, without the Gaussian assumption. Our approach accommodates distinct step sizes for equalizer and PR target. It is shown that the step-size range required to guarantee stability of LCLMS with binary data is larger than that with Gaussian data. The analytical results are corroborated by extensive simulation studies.

13.
We present an analysis of the convergence of the frequency-domain LMS adaptive filter when the DFT is computed using the LMS steepest descent algorithm. In this case, the frequency-domain adaptive filter is implemented with a cascade of two sections, each updated using the LMS algorithm. The structure requires fewer computations than using the FFT and is modular, making it suitable for VLSI implementations. Since the structure contains two adaptive algorithms updating in parallel, an analysis of the overall system convergence needs to consider the effect of the two adaptive algorithms on each other, in addition to their individual convergence. The analysis is based on the expected mean-square coefficient error for each of the two LMS adaptive algorithms, with some simplifying approximations for the second algorithm, to describe the convergence behavior of the overall system. Simulations were used to verify the results.

14.
This paper investigates the statistical behavior of the finite precision LMS adaptive filter in the identification of an unknown time-varying stochastic system. Nonlinear recursions are derived for the mean and mean-square behavior of the adaptive weights. Transient and tracking algorithm performance curves are generated from the recursions and shown to be in excellent agreement with Monte Carlo simulations. Our results demonstrate that linear models are inappropriate for analyzing the transient and the steady-state algorithm behavior. The performance curves indicate that the transient and tracking capabilities cannot be determined from perturbations about the infinite precision case. It is shown that the transient phase of the algorithm lengthens as the digital wordlength or the speed of variation of the unknown system decreases. Design examples illustrate how the theory can be used to select the algorithm step size and the number of bits in the quantizer.

15.
Sign-sign LMS convergence with independent stochastic inputs   [cited: 1; self: 0; other: 1]
The sign-sign adaptive least-mean-square (LMS) identifier filter is a computationally efficient variant of the LMS identifier filter, obtained by introducing signum functions in the traditional LMS update term. Global convergence of the parameter estimates offered by this algorithm, to a ball with radius proportional to the algorithm step size, is established for white input sequences, specifically those drawn from Gaussian and uniform distributions.
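A minimal sketch of the update (the plant and step size below are illustrative): both the error and the regressor are replaced by their signs, so each update needs only additions and comparisons.

```python
import numpy as np

def sign_sign_lms(x, d, num_taps, mu=0.001):
    # Sign-sign LMS: w <- w + mu * sign(e) * sign(u). The estimates converge
    # to a ball around the true weights whose radius scales with mu.
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        e = d[n] - w @ u
        w = w + mu * np.sign(e) * np.sign(u)
    return w

rng = np.random.default_rng(7)
h = np.array([0.4, 0.2])                        # hypothetical plant
x = rng.standard_normal(20000)                  # white Gaussian input
d = np.convolve(x, h)[:len(x)]
w = sign_sign_lms(x, d, 2)
```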

16.
Analysis of the frequency domain adaptive filter   [cited: 1; self: 0; other: 1]
This note demonstrates significant analytical simplifications in studying the behavior of adaptive filtering in the frequency domain rather than in the time domain. A closed-form expression for the single complex weight in the frequency-domain adaptive filter is presented, which allows significant statistical analysis to be performed. The mean-square error of the filter is evaluated as a function of the algorithm step size and the signal and noise powers.
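The analytical convenience comes from each frequency bin reducing to a single complex weight adapted by complex LMS, which can be sketched as follows. The bin gain, noise level, and step size are illustrative assumptions.

```python
import numpy as np

def complex_lms_weight(x, d, mu=0.05):
    # One frequency bin: a single complex weight adapted by complex LMS,
    # w <- w + mu * e * conj(x), a scalar recursion amenable to
    # closed-form statistical analysis.
    w = 0.0 + 0.0j
    for xn, dn in zip(x, d):
        e = dn - w * xn
        w = w + mu * e * np.conj(xn)
    return w

rng = np.random.default_rng(5)
g = 0.7 - 0.3j                                  # hypothetical complex bin gain
n = 4000
x = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
d = g * x + 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
w = complex_lms_weight(x, d)
```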

17.
A method to determine a bound on the error performance of an adaptive filter due to roundoff effects is described. The method converts the analysis of a recursive algorithm into two much simpler sub-problems: convergence and momentary error. To apply the method, the input data has to be bounded. By classifying convergence into different categories according to their rates, it is observed that adaptive filtering algorithms that belong to a particular class share similar behavior due to roundoff error or other perturbation effects. The merit of the method is its simplicity and general applicability. Based on this method, a sufficient condition for the numerical stability of an adaptive filter is derived. Application of the method to the least mean square (LMS) algorithm is described. The analysis may also be generalized to include other perturbation effects.

18.
This paper studies the statistical behavior of an affine combination of the outputs of two least mean-square (LMS) adaptive filters that simultaneously adapt using the same white Gaussian inputs. The purpose of the combination is to obtain an LMS adaptive filter with fast convergence and small steady-state mean-square deviation (MSD). The linear combination studied is a generalization of the convex combination, in which the combination factor λ(n) is restricted to the interval (0,1). The viewpoint is taken that each of the two filters produces dependent estimates of the unknown channel. Thus, there exists a sequence of optimal affine combining coefficients which minimizes the mean-square error (MSE). First, the optimal unrealizable affine combiner is studied and provides the best possible performance for this class. Then two new schemes are proposed for practical applications. The mean-square performances are analyzed and validated by Monte Carlo simulations. With proper design, the two practical schemes yield an overall MSD that is usually less than the MSDs of either filter.
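The combination idea can be sketched as below: a fast and a slow LMS filter run in parallel, and an affine combining factor lam, not confined to (0, 1), is adapted to minimize the combined error. The gradient rule for lam and all parameter values are simple illustrative choices, not the paper's practical schemes.

```python
import numpy as np

def affine_combination_lms(x, d, num_taps, mu_fast=0.05, mu_slow=0.005, mu_a=0.1):
    # Affine combination of a fast LMS filter (quick convergence, noisy) and
    # a slow one (slow convergence, small steady-state MSD). lam is adapted
    # by a stochastic gradient step on the combined squared error.
    w1 = np.zeros(num_taps); w2 = np.zeros(num_taps); lam = 0.5
    e_hist = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]
        y1, y2 = w1 @ u, w2 @ u
        w1 = w1 + mu_fast * (d[n] - y1) * u     # fast filter update
        w2 = w2 + mu_slow * (d[n] - y2) * u     # slow filter update
        ec = d[n] - (lam * y1 + (1 - lam) * y2) # combined a priori error
        lam = lam + mu_a * ec * (y1 - y2)       # gradient step on ec^2
        e_hist[n] = ec
    return lam * w1 + (1 - lam) * w2, e_hist

rng = np.random.default_rng(9)
h = np.array([0.5, -0.7, 0.3])                  # hypothetical channel
x = rng.standard_normal(10000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_comb, e_hist = affine_combination_lms(x, d, 3)
```

Early on lam leans toward the fast filter; once both have converged, it shifts toward the slow, lower-MSD filter, which is the behavior the combination is designed to exploit.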

19.
For pt.I see ibid., vol.39, no. 3, p.583-94 (1991). The authors present a methodology for evaluating the tracking behavior of the least-mean square (LMS) algorithm for the nontrivial case of recovering a chirped sinusoid in additive noise. A complete closed-form analysis of the LMS tracking properties for a nonstationary inverse system modeling problem is also presented. The mean-square error (MSE) performance of the LMS algorithm is calculated as a function of the various system parameters. The misadjustment or residual of the adaptive filter output is the excess MSE as compared to the optimal filter for the problem. It is caused by three errors in the adaptive weight vector: the mean lag error between the (time-varying mean) weight and the time-varying optimal weight; the fluctuations of the lag error; and the noise misadjustment which is due to the output noise. These results are important because they represent a precise analysis of a nonstationary deterministic inverse modeling system problem with the input being a colored signal. The results are in agreement with the form of the upper bounds for the misadjustment provided by E. Eweda and O. Macchi (1985) for the deterministic nonstationarity.

20.
Leaky LMS algorithm: MSE analysis for Gaussian data   [cited: 3; self: 0; other: 3]
Despite the widespread usage of the leaky LMS algorithm, there has been no detailed study of its performance. This paper presents an analytical treatment of the mean-square error (MSE) performance for the leaky LMS adaptive algorithm for Gaussian input data. The common independence assumption regarding W(n) and X(n) is also used. Exact expressions that completely characterize the second moment of the coefficient vector and algorithm steady-state excess MSE are developed. Rigorous conditions for MSE convergence are also established. Analytical results are compared with simulation and are shown to agree well.
