Similar documents
20 similar documents found (search time: 15 ms)
1.
Convergence behavior of affine projection algorithms (cited 8 times: 0 self-citations, 8 by others)
A class of equivalent algorithms that accelerate the convergence of the normalized LMS (NLMS) algorithm, especially for colored inputs, has previously been discovered independently. The affine projection algorithm (APA), from which the class takes its name, is the earliest and most popular algorithm in this class. The usual APA algorithms update weight estimates on the basis of multiple, unit-delayed, input signal vectors. We analyze the convergence behavior of the generalized APA class of algorithms (allowing for arbitrary delay between input vectors) using a simple model for the input signal vectors. Conditions for convergence of the APA class are derived. It is shown that the convergence rate is exponential and that it improves as the number of input signal vectors used for adaptation is increased. However, the rate of improvement in performance (time-to-steady-state) diminishes as the number of input signal vectors increases. For a given convergence rate, APA algorithms are shown to exhibit less misadjustment (steady-state error) than NLMS. Simulation results are provided to corroborate the analytical results.
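The APA update described above corrects the weights using the K most recent (unit-delayed) regressors at once. The Python sketch below shows a generic regularized APA update of this kind; the filter length, number of projection vectors K, step size mu, and regularization delta are illustrative values, not those of the paper.

```python
import numpy as np

def apa(x, d, filter_len=8, K=4, mu=0.5, delta=1e-4):
    """Minimal affine projection algorithm (APA) sketch.

    x : input signal, d : desired signal.
    K : number of unit-delayed input vectors used per update.
    Returns the a priori error signal and the final weight vector.
    """
    N = len(x)
    w = np.zeros(filter_len)
    e_out = np.zeros(N)
    for n in range(filter_len + K - 1, N):
        # Stack the K most recent regressors as rows of X (K x filter_len).
        X = np.array([x[n - k - filter_len + 1:n - k + 1][::-1] for k in range(K)])
        dk = np.array([d[n - k] for k in range(K)])
        e = dk - X @ w                       # a priori errors for the K regressors
        # Regularized correction using the K constraint hyperplanes at once.
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(K), e)
        e_out[n] = e[0]
    return e_out, w
```

With K = 1 the update reduces to the NLMS recursion, which is the sense in which the APA family generalizes NLMS; increasing K trades extra computation for faster convergence on colored inputs, as the abstract describes.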

2.
It is demonstrated that the normalized least mean square (NLMS) algorithm can be viewed as a modification of the widely used LMS algorithm. The NLMS is shown to have an important advantage over the LMS: its convergence is independent of environmental changes. In addition, the authors present a comprehensive study of the first- and second-order behavior of the NLMS algorithm. They show that the NLMS algorithm exhibits significant improvement over the LMS algorithm in convergence rate, while its steady-state performance is considerably worse.
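To make the LMS/NLMS relationship concrete, the sketch below contrasts the two weight updates in a system-identification setting; the filter length, step sizes, and regularization constant eps are illustrative choices, not values from the paper.

```python
import numpy as np

def lms(x, d, M=16, mu=0.01):
    w, e = np.zeros(M), np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]          # regressor, most recent sample first
        e[n] = d[n] - w @ u
        w += mu * e[n] * u                    # LMS: step scaled by the raw input
    return e, w

def nlms(x, d, M=16, mu=0.5, eps=1e-6):
    w, e = np.zeros(M), np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (eps + u @ u)    # NLMS: step normalized by input power
    return e, w
```

The only structural difference is the division by the instantaneous input power, which makes the NLMS step effectively dimensionless and its convergence insensitive to the scale of the input.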

3.
The normalized least mean square (NLMS) algorithm is an important variant of the classical LMS algorithm for adaptive linear filtering. It possesses many advantages over the LMS algorithm, including having a faster convergence and providing for an automatic time-varying choice of the LMS step-size parameter that affects the stability, steady-state mean square error (MSE), and convergence speed of the algorithm. An auxiliary fixed step-size that is often introduced in the NLMS algorithm has the advantage that its stability region (step-size range for algorithm stability) is independent of the signal statistics. In this paper, we generalize the NLMS algorithm by deriving a class of nonlinear normalized LMS-type (NLMS-type) algorithms that are applicable to a wide variety of nonlinear filter structures. We obtain a general nonlinear NLMS-type algorithm by choosing an optimal time-varying step-size that minimizes the next-step MSE at each iteration of the general nonlinear LMS-type algorithm. As in the linear case, we introduce a dimensionless auxiliary step-size whose stability range is independent of the signal statistics. The stability region could therefore be determined empirically for any given nonlinear filter type. We present computer simulations of these algorithms for two specific nonlinear filter structures: Volterra filters and the previously proposed class of Myriad filters. These simulations indicate that the NLMS-type algorithms, in general, converge faster than their LMS-type counterparts.

4.
Normalized data nonlinearities for LMS adaptation (cited 12 times: 0 self-citations, 12 by others)
Properly designed nonlinearly-modified LMS algorithms, in which various quantities in the stochastic gradient estimate are operated upon by memoryless nonlinearities, have been shown to perform better than the LMS algorithm in system identification-type problems. The authors investigate one such algorithm given by W_{k+1} = W_k + μ(d_k − W_k^T X_k) X_k f(X_k), in which the function f(X_k) is a scalar function of the sum of the squares of the N elements of the input data vector X_k. This form of algorithm generalizes the so-called normalized LMS (NLMS) algorithm. They evaluate the expected behavior of this nonlinear algorithm for both independent input vectors and correlated Gaussian input vectors assuming the system identification model. By comparing the nonlinear algorithm's behavior with that of the LMS algorithm, they then provide a method of optimizing the form of the nonlinearity for the given input statistics. In the independent input case, they show that the optimum nonlinearity is a single-parameter version of the NLMS algorithm with an additional constant in the denominator and show that this algorithm achieves a lower excess mean-square error (MSE) than the LMS algorithm with an equivalent convergence rate. Additionally, they examine the optimum step size sequence for the optimum nonlinear algorithm and show that the resulting algorithm performs better and is less complex to implement than the optimum step size algorithm derived for another form of the NLMS algorithm. Simulations verify the theory and the predicted performance improvements of the optimum normalized data nonlinearity algorithm.
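For the independent-input case, the optimum nonlinearity reported above amounts to an NLMS-like update with a constant added to the normalization. A minimal sketch of this single-parameter family is given below, assuming f(X) = 1/(c + ||X||^2); the constant c and the step size mu are placeholders rather than the paper's optimized values.

```python
import numpy as np

def data_nonlinearity_lms(x, d, M=16, mu=0.5, c=1.0):
    """Sketch of the normalized-data-nonlinearity update
    W <- W + mu * e * X * f(X)  with  f(X) = 1 / (c + ||X||^2).
    c and mu are illustrative, not the paper's optimized values."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for k in range(M, len(x)):
        X = x[k - M + 1:k + 1][::-1]
        e[k] = d[k] - w @ X
        w += mu * e[k] * X / (c + X @ X)   # single-parameter NLMS-like nonlinearity
    return e, w
```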

5.
Determining the convergence thresholds and step sizes of the LMS and normalized LMS algorithms (cited 4 times: 0 self-citations, 0 by others)
Starting from the exact expression for the misadjustment of the LMS algorithm, the necessary conditions for convergence of the LMS and normalized LMS (NLMS) algorithms are re-examined in terms of the eigenvalue distribution of the input signal, and step-size thresholds for convergence of the LMS and NLMS algorithms are derived. The influence of the input eigenvalue distribution and of the filter order on these convergence thresholds is analyzed, and an adaptive formula for computing the step size under a prescribed misadjustment is derived, which reduces the guesswork involved in choosing the step size when applying the LMS and NLMS algorithms. Compared with existing algorithms, the method is computationally simple, practical, and strongly adaptive, while still achieving a satisfactory misadjustment. Computer simulation results confirm the correctness of the method.
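As a concrete illustration of how the input eigenvalue distribution enters such step-size thresholds, the snippet below computes the classical mean-convergence bound 0 < μ < 2/λ_max for LMS on a correlated (AR(1)-like) input; this is the textbook bound, not the exact threshold formula derived in the paper.

```python
import numpy as np

M, rho = 16, 0.9                                   # filter order and input correlation (illustrative)
lags = np.abs(np.subtract.outer(np.arange(M), np.arange(M)))
R = rho ** lags                                    # Toeplitz autocorrelation matrix, r[k] = rho^|k|
lam = np.linalg.eigvalsh(R)

print(f"eigenvalue spread: {lam.max() / lam.min():.1f}")
print(f"LMS mean convergence requires 0 < mu < {2.0 / lam.max():.4f}")
# The NLMS step is normalized by the instantaneous input power, so its stability
# range (roughly 0 < mu_bar < 2) does not depend on these eigenvalues.
```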

6.
张玉梅, 吴晓军, 白树林. 《电子学报》 (Acta Electronica Sinica), 2014, 42(9): 1801-1806
To overcome the problems caused by improper parameter selection when least-squares or normalized least-squares methods are used for second-order Volterra modeling, a variable convergence factor technique based on an a posteriori error assumption is applied on top of the least-squares method, and a second-order Volterra model based on the Davidon-Fletcher-Powell algorithm (DFPSOVF) is constructed. A recursive update formula for the estimate of the inverse autocorrelation matrix used in parameter estimation is given, and its positive definiteness, boundedness, and the role of τ(n) are studied. The DFPSOVF model is applied to one-step prediction of the Rössler chaotic sequence; simulation results show that it guarantees the stability and convergence of the algorithm and does not suffer from the divergence problems of the least-squares and normalized least-squares methods.

7.
To improve the communication quality of VoIP and reduce echo interference, the LMS and NLMS algorithms are reviewed, and an improved adaptive filtering algorithm based on NLMS is proposed that has a small computational load and improved convergence performance. Simulation studies in Matlab and analysis of the error curves show that the improved algorithm converges quickly and has a small mean square error. When the improved algorithm is used to cancel speech echo, the simulated echo-cancelled signal shows a clear improvement, providing a good algorithm for the adaptive filtering problem of echo cancellation in IP telephony.
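A minimal end-to-end echo cancellation simulation along the lines described above is sketched below, using a plain NLMS canceller (the paper's improved algorithm is not reproduced). The white-noise far-end signal, toy echo path, filter length, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 20000, 64
far_end = rng.standard_normal(N)                                    # far-end speech stand-in (white noise)
echo_path = rng.standard_normal(M) * np.exp(-0.05 * np.arange(M))   # toy room impulse response
echo = np.convolve(far_end, echo_path)[:N]
mic = echo + 0.01 * rng.standard_normal(N)                          # microphone = echo + near-end noise

w = np.zeros(M)
e = np.zeros(N)
mu, eps = 0.5, 1e-6
for n in range(M, N):
    u = far_end[n - M + 1:n + 1][::-1]           # reference (far-end) regressor
    e[n] = mic[n] - w @ u                        # residual echo after cancellation
    w += mu * e[n] * u / (eps + u @ u)           # NLMS update of the echo-path estimate

erle = 10 * np.log10(np.mean(mic[N//2:]**2) / np.mean(e[N//2:]**2))
print(f"ERLE over the second half: {erle:.1f} dB")
```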

8.
A new robust, computationally efficient variable step-size LMS algorithm is proposed and applied to secondary path (SP) identification in feedforward and feedback active noise control (ANC) systems. The proposed variable step-size Griffiths' LMS (VGLMS) algorithm bases not only its step size but also the gradient itself on the cross-correlation between the input and the desired signal. This makes the algorithm robust to both stationary and non-stationary observation noise, and the additional computational load involved is marginal. Further, in terms of convergence speed and error, it is better than the normalized LMS (NLMS) algorithm and Zhang's method (Zhang in EURASIP J. Adv. Signal Process. 2008(529480):1–9, 2008). The convergence of the feedforward and feedback ANC systems with the VGLMS algorithm for SP identification is faster (by a factor of 2 and 3, respectively) than with the NLMS algorithm. For feedforward ANC, its convergence is faster (3 times) than Akhtar's algorithm (Akhtar in IEEE Trans Audio Speech Lang Process 14(2), 2006). Also, for main path lengths that are large compared with the SP, the proposed algorithm is computationally efficient compared with Akhtar's algorithm.

9.
Application of an improved NLMS algorithm to acoustic echo cancellation (cited 2 times: 0 self-citations, 2 by others)
Convergence speed and residual mean square error are important measures of the performance of least mean square algorithms. To obtain an adaptive algorithm for acoustic echo cancellation with fast convergence and a small computational load, a new algorithm is derived from the normalized least mean square (NLMS) algorithm by introducing the errors from before the current instant into the normalized convergence factor, which reduces the influence of fluctuations in the signal samples on the weights. The algorithm converges better than the conventional NLMS algorithm and its steady-state misadjustment is also smaller. Computer simulation results show that the overall performance of the new algorithm in adaptive echo cancellation is superior to that of the conventional NLMS algorithm.
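One plausible reading of "introducing past errors into the normalized convergence factor" is sketched below: an exponentially weighted sum of past squared errors is added to the NLMS denominator, so that the effective step shrinks when recent errors have been large. The weighting beta and the other constants are illustrative; the paper's exact formula is not reproduced here.

```python
import numpy as np

def nlms_error_normalized(x, d, M=32, mu=0.5, beta=0.9, eps=1e-6):
    """NLMS variant whose normalization also contains an exponentially
    weighted sum of past squared errors (an interpretation of the abstract,
    not the paper's exact weighting)."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    s = 0.0                                      # running sum of past squared errors
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        w += mu * e[n] * u / (eps + u @ u + s)   # larger past errors -> smaller effective step
        s = beta * s + (1 - beta) * e[n] ** 2
    return e, w
```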

10.
Architectural synthesis of low-power computational engines (hardware accelerators) for a subband-based adaptive filtering algorithm is presented. The full-band least mean square (LMS) adaptive filtering algorithm, widely used in various applications, is confronted by two problems, viz., slow convergence when the input correlation matrix is ill-conditioned, and increased computational complexity for applications involving use of large adaptive filter orders. Both of these problems can be overcome by the use of a subband-based normalized LMS (NLMS) adaptive filtering algorithm. Since this algorithm is not amenable to pipelining, delayed coefficient adaptation in the NLMS update is used, which provides the required delays for pipelining. However, the convergence speed of this subband-based delayed NLMS (DNLMS) algorithm degrades with increase in the adaptation delay. We first present a pipelined subband DNLMS adaptive filtering architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by using a number of function preserving transformations on the signal flow graph (SFG) representation of the subband DNLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates limited only by the delay of two full adders and a 2-to-1 multiplexer. We then extend this synthesis methodology to synthesize a pipelined subband DNLMS architecture whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the subband DNLMS algorithm to meet the required computational throughput. The architecture exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. Finally, we incorporate configurability for filter order, sample period, power reduction factor, number of subbands and decimation/interpolation factor in the low-power architecture, thus resulting in a low-power subband computational engine for adaptive filtering.

11.
The least-mean-square-type (LMS-type) algorithms are known as simple and effective adaptation algorithms. However, the LMS-type algorithms have a trade-off between the convergence rate and steady-state performance. In this paper, we investigate a new variable step-size approach to achieve fast convergence rate and low steady-state misadjustment. By approximating the optimal step-size that minimizes the mean-square deviation, we derive variable step-sizes for both the time-domain normalized LMS (NLMS) algorithm and the transform-domain LMS (TDLMS) algorithm. The proposed variable step-sizes are simple quotient forms of the filtered versions of the quadratic error and very effective for the NLMS and TDLMS algorithms. The computer simulations are demonstrated in the framework of adaptive system modeling. Superior performance is obtained compared to the existing popular variable step-size approaches of the NLMS and TDLMS algorithms.
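As an illustration of a quotient-form variable step-size NLMS, the sketch below sets μ(n) = μ_max · p(n)/(p(n) + C), where p(n) is an exponentially filtered squared error; this simple rule stands in for the paper's MSD-minimizing step size, whose exact expression is not reproduced here. The constants alpha, C, and mu_max are illustrative.

```python
import numpy as np

def vss_nlms(x, d, M=32, alpha=0.95, C=1e-2, mu_max=1.0, eps=1e-6):
    """Illustrative quotient-form variable step-size NLMS: the step
    mu(n) = mu_max * p(n) / (p(n) + C) shrinks as the filtered squared
    error p(n) decays toward the noise floor. A stand-in for the paper's
    derived step-size rule, not a reproduction of it."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    p = 0.0                                    # exponentially filtered squared error
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        p = alpha * p + (1 - alpha) * e[n] ** 2
        mu = mu_max * p / (p + C)              # large error power -> step near mu_max
        w += mu * e[n] * u / (eps + u @ u)
    return e, w
```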

12.
By analyzing how the NLMS [1] and VSSLMS [7] algorithms control their time-varying step sizes, together with the advantages and drawbacks of both, this paper proposes an improved normalized variable step-size algorithm that uses both the accumulated input signal power and the instantaneous error to control the step-size update. Its advantage is fast convergence, including rapid reconvergence after abrupt system changes, which makes it well suited to adaptive prediction systems. Theoretical analysis and computer simulations both show that the algorithm converges quickly, has a small misadjustment and good stability, and performs better than comparable algorithms in low-SNR environments.

13.
A Modular Analog NLMS Structure for Adaptive Filtering (cited 1 time: 0 self-citations, 1 by others)
This paper proposes a modular analog adaptive filter (AAF) algorithm in which the coefficient adaptation is carried out using a time-varying step-size analog normalized LMS (NLMS) algorithm, implemented as an external analog structure. The proposed time-varying step size is estimated using the first element of the cross-correlation vector between the output error and the reference signal, and the first element of the cross-correlation vector between the output error and the adaptive filter output signal, respectively. The proposed algorithm reduces distortion when the additive noise power increases or DC offsets are present, without significantly decreasing the convergence rate or increasing the complexity of conventional NLMS algorithms. Simulation results show that the proposed algorithm improves the performance of the AAF when DC offsets are present. The proposed VLSI structure for the time-varying step-size NLMS algorithm has, potentially, a very small size and faster convergence rates than its digital counterparts. It is suitable for general-purpose applications or dedicated filtering solutions such as echo cancellation and equalization in cellular telephony, in which high performance, low power consumption, fast convergence rates, and small-size adaptive digital filters (ADF) are required. The convergence performance of analog adaptive filters using integrators such as first-order low-pass filters is analyzed.

14.
A set of algorithms linking NLMS and block RLS algorithms (cited 1 time: 0 self-citations, 1 by others)
This paper describes a set of block processing algorithms which contains as extremal cases the normalized least mean squares (NLMS) and the block recursive least squares (BRLS) algorithms. All these algorithms use small block lengths, thus allowing easy implementation and small input-output delay. It is shown that these algorithms require a lower number of arithmetic operations than the classical least mean squares (LMS) algorithm, while converging much faster. A precise evaluation of the arithmetic complexity is provided, and the adaptive behavior of the algorithm is analyzed. Simulations illustrate that the tracking characteristics of the new algorithm are also improved compared to those of the NLMS algorithm. The conclusions of the theoretical analysis are checked by simulations, illustrating that, even in the case where noise is added to the reference signal, the proposed algorithm allows altogether a faster convergence and a lower residual error than the NLMS algorithm. Finally, a sample-by-sample version of this algorithm is outlined, which is the link between the NLMS and recursive least squares (RLS) algorithms.
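The sketch below illustrates the small-block family the abstract describes: each block of L samples receives one regularized least-squares correction, so L = 1 recovers the NLMS update while larger blocks move toward block-RLS behavior. The block length, step size, and regularization are illustrative, and the paper's fast (low-complexity) organization is not reproduced.

```python
import numpy as np

def block_nlms(x, d, M=32, L=4, mu=0.5, delta=1e-4):
    """Small-block adaptive filter: one regularized least-squares correction
    per block of L samples. With L = 1 this is exactly NLMS; growing L moves
    the update toward block-RLS behavior. A sketch of the family, not the
    paper's fast algorithm."""
    N = len(x)
    w = np.zeros(M)
    e = np.zeros(N)
    for n0 in range(M, N - L + 1, L):            # advance block by block
        X = np.array([x[n - M + 1:n + 1][::-1] for n in range(n0, n0 + L)])
        db = d[n0:n0 + L]
        eb = db - X @ w                          # a priori errors over the block
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + delta * np.eye(L), eb)
        e[n0:n0 + L] = eb
    return e, w
```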

15.
张炳婷, 赵建平, 陈丽, 盛艳梅. 《通信技术》 (Communications Technology), 2015, 48(9): 1010-1014
The application of the least mean square (LMS) algorithm, the normalized LMS (NLMS) algorithm, and variable step-size NLMS algorithms in adaptive noise interference cancellers is studied. To address the shortcomings of these algorithms in noise cancellation, the constrained-stability LMS (CS-LMS) algorithm is applied to the noise processing, and a new variable step-size CS-LMS algorithm is proposed by further incorporating the variable step-size idea. MATLAB simulations confirm that, compared with the other algorithms, the proposed algorithm filters out the noise well and recovers the desired signal, clearly reduces the steady-state error, and has good convergence speed.

16.
An improved variable step-size ELMS algorithm (cited 2 times: 0 self-citations, 2 by others)
吕振肃, 黄石. 《电子与信息学报》 (Journal of Electronics & Information Technology), 2005, 27(10): 1524-1526
Building on a brief discussion of the basic least mean square (LMS) algorithm, the extended LMS (ELMS) algorithm is introduced and shown to achieve a smaller steady-state MSE. The improved variable step-size ELMS algorithm uses an adaptive normalized LMS (NLMS) predictive estimator for the prediction of the useful signal, and introduces a forgetting factor into the step-size iteration, using a weighted sum of this factor and the error signal to generate the new step size at each iteration. Theoretical analysis and computer simulation results show that the algorithm has good convergence performance and a small steady-state misadjustment.

17.
Certain conditions require a delay in the coefficient update of the least mean square (LMS) and normalized least mean square (NLMS) algorithms. This paper presents an in-depth analysis of these modified versions for the important case of spherically invariant random processes (SIRPs), which are known as an excellent model for speech signals. Some derived bounds and the predicted dynamic behavior of the algorithms are found to correspond very well to simulation results and a real-time implementation on a fixed-point signal processor. A modification of the algorithm is proposed to assure the well-known properties of the LMS and NLMS algorithms.
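A delayed-update NLMS of the kind analyzed above can be sketched as follows: the filter output uses the current weights, but the weight update is driven by the error and regressor from D samples earlier, as a pipelined implementation would require. The delay D, filter length, and step size are illustrative values.

```python
import numpy as np

def delayed_nlms(x, d, M=32, D=4, mu=0.5, eps=1e-6):
    """NLMS with a coefficient-update delay of D samples: the error and
    regressor from time n-D drive the update applied at time n."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M + D, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u                     # filtering with the current weights
        ud = x[n - D - M + 1:n - D + 1][::-1]   # regressor that was available D samples ago
        w += mu * e[n - D] * ud / (eps + ud @ ud)
    return e, w
```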

18.
Noting that a fine analysis is presented for the convergence and misadjustment of the normalized least-mean-square (NLMS) algorithm in the paper by Tarrab and Feuer (see ibid., vol.3, no.4, p.468091, July 1988), the commenter claims that the results and comparisons with the LMS algorithm are not in a form that readily enables the reader to draw practical conclusions. He points out that plotting mean-square error on a linear, instead of logarithmic (dB), scale hides the important detail of the error as it converges to its minimum value, which is exactly the region where the practical engineer requires detailed knowledge to assess performance. Moreover, in the comparison of the NLMS and LMS algorithm convergence rate and misadjustment, the practitioner wants to know how fast the algorithm will converge when the misadjustment is constrained to a specified value.

19.
It is shown that two algorithms obtained by simplifying a Kalman filter considered for a second-order Markov model are H∞ suboptimal. Similar to the least mean squares (LMS) and normalised LMS (NLMS) algorithms, these second-order algorithms can be thought of as approximate solutions to stochastic or deterministic least squares minimisation. It is proved that second-order LMS and NLMS are exact solutions causing the maximum energy gain from the disturbances to the predicted and filtered errors, respectively, to be less than one. These algorithms are implemented in two steps. Operation of the first step is like conventional LMS/NLMS algorithms, and the second step consists of the estimation of the weight increment vector and prediction of the weights for the next iteration. This step applies simple smoothing to the increment of the estimated weights to estimate the speed of the weights. They are also cost-effective, robust, and attractive for improving the tracking performance of smoothly time-varying models.
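The two-step structure described above can be sketched as follows: an ordinary NLMS correction, followed by exponential smoothing of the weight increment to estimate the weights' "speed", which is added as a prediction for the next iteration. The smoothing constant gamma is an illustrative placeholder; the gains of the paper's H∞-suboptimal algorithms are not reproduced here.

```python
import numpy as np

def second_order_nlms(x, d, M=32, mu=0.5, gamma=0.9, eps=1e-6):
    """Two-step tracking sketch: (1) an ordinary NLMS correction, then
    (2) exponential smoothing of the weight increment to estimate the
    weights' speed, added as a prediction for the next iteration.
    An interpretation of the two-step structure, not the paper's algorithm."""
    w = np.zeros(M)
    v = np.zeros(M)                              # smoothed weight increment ("speed")
    e = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M + 1:n + 1][::-1]
        e[n] = d[n] - w @ u
        dw = mu * e[n] * u / (eps + u @ u)       # step 1: NLMS increment
        v = gamma * v + (1 - gamma) * dw         # step 2: smooth the increment
        w = w + dw + v                           # predict the weights for the next iteration
    return e, w
```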

20.
Research on direct-path interference suppression for passive radar with external illuminators (cited 2 times: 0 self-citations, 2 by others)
In passive radar systems based on external illuminators, direct-path interference severely degrades the radar's ability to detect targets. Addressing this problem, the paper analyzes the convergence speed, time-varying system tracking ability, and misadjustment of the LMS, NLMS, and improved NLMS algorithms, and applies an improved normalized LMS (NLMS) adaptive filtering algorithm to direct-path interference suppression, achieving good results with a cancellation gain of up to 40 dB. The influence of the filter order and parameter choices on the cancellation performance and the SNR loss is analyzed, and typical parameter values are given. Finally, processing results on real data verify the effectiveness of the method.
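A toy version of such adaptive direct-path cancellation is sketched below: the reference channel drives an NLMS filter that subtracts the direct-path signal (and its near-in multipath) from the surveillance channel, and the cancellation gain is reported in dB. The white-noise waveform, toy multipath taps, injected target echo, filter length, and step size are all illustrative assumptions, not the paper's setup or its improved NLMS algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 50000, 128
ref = rng.standard_normal(N)                       # reference channel: illuminator waveform stand-in
dpi_path = np.zeros(M)
dpi_path[[0, 7, 23]] = [1.0, 0.3, 0.1]             # toy multipath of the direct-path signal
surv = np.convolve(ref, dpi_path)[:N]              # strong direct-path interference
surv += 1e-3 * np.roll(ref, 300)                   # weak, delayed target echo (illustrative)
surv += 1e-3 * rng.standard_normal(N)              # receiver noise

w, e = np.zeros(M), np.zeros(N)
mu, eps = 0.5, 1e-6
for n in range(M, N):
    u = ref[n - M + 1:n + 1][::-1]
    e[n] = surv[n] - w @ u                         # surveillance signal after cancellation
    w += mu * e[n] * u / (eps + u @ u)             # NLMS update of the direct-path estimate

gain = 10 * np.log10(np.mean(surv[N//2:]**2) / np.mean(e[N//2:]**2))
print(f"direct-path cancellation gain: {gain:.1f} dB")
```

The target echo at lag 300 lies outside the span of the 128-tap filter, so it survives the cancellation while the direct-path component is suppressed, which is the intended behavior of such a canceller.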

