Similar Articles
19 similar articles found (search time: 312 ms).
1.
This paper studies time delay estimation under multipath propagation. Using a one-dimensional slice of the third-order cumulant as the higher-order statistic, combined with the principle of correlation-based methods, a new time delay estimation algorithm is proposed. To improve estimation accuracy, the correlation data are weighted. The algorithm effectively suppresses spatially correlated Gaussian noise and symmetrically distributed noise, yielding accurate delay estimates for non-Gaussian signals. It is computationally light and easy to implement. Simulation results confirm its effectiveness.
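As a rough illustration of the slice idea (not the paper's exact estimator — the particular diagonal slice, the absence of weighting, and the names below are my assumptions), a one-dimensional slice of the third-order cross-cumulant can locate the delay while remaining blind to Gaussian or symmetrically distributed noise:

    import numpy as np

    def tde_third_order_slice(x, y, max_lag):
        # c3(tau) = E[x(n)^2 * y(n + tau)]: a 1-D diagonal slice of the
        # third-order cross-cumulant. Third-order statistics of Gaussian
        # and symmetrically distributed noise vanish, so the slice keeps
        # only signal structure; it requires a skewed (non-Gaussian)
        # source whose own third-order statistics are nonzero.
        x = x - x.mean()
        y = y - y.mean()
        n = len(x)
        lags = np.arange(-max_lag, max_lag + 1)
        c3 = np.empty(len(lags))
        for i, tau in enumerate(lags):
            if tau >= 0:
                c3[i] = np.mean(x[:n - tau] ** 2 * y[tau:])
            else:
                c3[i] = np.mean(x[-tau:] ** 2 * y[:n + tau])
        return lags[np.argmax(np.abs(c3))]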

2.
To address the poor noise robustness and low estimation accuracy at low SNR of the sparse-Fourier-transform delay estimation algorithm used in passive TDOA location, a generalized quadratic-correlation sparse Fourier delay estimation algorithm is proposed. On top of the sparse Fourier transform of the signal, the algorithm incorporates a generalized quadratic correlation improved by least-squares fitting, suppressing noise while keeping the processing fast, so the delay estimator's performance improves. Simulations and tests on measured data both show that the improved algorithm offers better noise robustness and delay estimation accuracy.
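A minimal sketch of the plain quadratic ("second") correlation step — the sparse Fourier transform and the least-squares peak fitting from the paper are omitted, and the function name is mine:

    import numpy as np
    from scipy.signal import correlate

    def tde_second_correlation(x, y):
        # First-stage correlations: rxx peaks at lag 0 (index n - 1),
        # rxy peaks at the delay D (index n - 1 + D).
        n = len(x)
        rxx = correlate(x, x, mode="full")
        rxy = correlate(y, x, mode="full")
        # Second stage: correlating the two first-stage outputs averages
        # the noise once more; the peak lands at index 2n - 2 + D.
        rs = correlate(rxy, rxx, mode="full")
        return int(np.argmax(rs) - (2 * n - 2))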

3.
蔡睿妍  杨力  钱杨 《电子与信息学报》2020,42(11):2600-2606
For passive wireless monitoring and localization in complex electromagnetic environments, this paper introduces the concept of generalized correntropy and derives its properties, using it to suppress impulsive noise in the array output signals. To jointly estimate the central DOA and angular spread of coherently distributed sources under impulsive noise, a new generalized-correntropy-based DOA estimation method is proposed and shown to be bounded. To further improve robustness, an adaptive kernel function depending only on the array output is derived. Simulations show that the algorithm achieves joint parameter estimation of coherently distributed sources in impulsive noise with higher accuracy and robustness than existing algorithms.

4.
于玲  邱天爽 《通信学报》2015,36(1):218-223
A fractional delay estimation algorithm based on the correntropy induced metric (CIM) and the Farrow structure is proposed. The algorithm is strongly robust to impulsive noise, needs few observations, and delivers high-accuracy delay estimates. Theoretical analysis and simulations show that both its accuracy and its robustness to impulsive noise exceed those of the LETDE algorithm based on fractional lower-order statistics.
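For reference, the CIM itself is simple to write down. A minimal sketch with a Gaussian kernel (the kernel width and names are assumptions, and the Farrow-structure fractional refinement is omitted):

    import numpy as np

    def cim(a, b, sigma=1.0):
        # CIM(a, b) = sqrt(k(0) - V(a, b)), where V is the mean Gaussian
        # kernel of the errors. It behaves like an L2 distance for small
        # errors but saturates for outliers, which is what rejects
        # impulsive samples.
        k0 = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
        v = np.mean(k0 * np.exp(-(a - b) ** 2 / (2.0 * sigma ** 2)))
        return np.sqrt(k0 - v)

    # A delay estimate then scans candidate lags tau and keeps the one
    # minimizing cim over the overlapping samples of x(n) and y(n + tau).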

5.
刘娜  汪涛  刘洛琨 《通信技术》2009,42(2):196-198
The LS channel estimator is computationally cheap and simple to implement, but its estimates are sensitive to noise and therefore inaccurate. To reduce the impact of noise on accuracy, this paper proposes an improved DFT-based LS channel estimation algorithm. Exploiting the properties of parallel PN sequences, a delay-tap acquisition step obtains the delay positions of the multipath channel, and the improved estimator uses this information to suppress noise as far as possible. Simulations show that, compared with existing DFT-based LS improvements, the algorithm further raises estimation accuracy and performs better at low SNR.
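The denoising step reduces to zeroing the time-domain taps that do not sit on known delay positions. A minimal sketch for one OFDM symbol, assuming the tap positions have already been found (the paper obtains them from the parallel PN sequences); names are mine:

    import numpy as np

    def ls_dft_estimate(X, Y, tap_positions):
        H_ls = Y / X                    # raw per-subcarrier LS estimate
        h = np.fft.ifft(H_ls)           # time-domain channel impulse response
        h_clean = np.zeros_like(h)
        h_clean[tap_positions] = h[tap_positions]  # keep only true delay taps
        return np.fft.fft(h_clean)      # denoised frequency response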

6.
To counter the performance degradation of existing delay estimation algorithms in complex electromagnetic environments where co-channel interference and impulsive noise coexist, this paper introduces the hyperbolic tangent function and proposes an improved generalized cyclic correntropy delay estimation (HTGCCE) algorithm. It first identifies the advantages of generalized cyclic correntropy under impulsive noise and the causes of its degradation, then uses the hyperbolic tangent function to improve it and raise delay estimation performance under impulsive noise. Simulations show that the proposed algorithm ...

7.
To counter the performance degradation of existing delay estimation algorithms in complex electromagnetic environments where co-channel interference and impulsive noise coexist, this paper introduces the hyperbolic tangent function and proposes an improved generalized cyclic correntropy delay estimation (HTGCCE) algorithm. It first identifies the advantages of generalized cyclic correntropy under impulsive noise and the causes of its degradation, then uses the hyperbolic tangent function to improve it and raise delay estimation performance under impulsive noise. Simulations show that the proposed algorithm maintains very good delay estimation performance even when the characteristic exponent of the impulsive noise is small and both the signal-to-noise ratio and the signal-to-interference ratio are low.
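The tanh step alone is easy to illustrate. A minimal sketch that bounds the impulses and then applies an ordinary cross-correlation — the paper's generalized cyclic correntropy machinery is not reproduced here, and the scale parameter is an assumption:

    import numpy as np
    from scipy.signal import correlate

    def tde_tanh_correlation(x, y, scale=1.0):
        # tanh is bounded and odd: alpha-stable impulses are compressed
        # toward +/-1, while the small-amplitude structure that carries
        # the delay information passes through almost linearly.
        xt, yt = np.tanh(x / scale), np.tanh(y / scale)
        r = correlate(yt, xt, mode="full")
        return int(np.argmax(np.abs(r)) - (len(x) - 1))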

8.
针对无源定位中时延估计的问题,在研究循环二次相关时延估计算法的基础上,结合希尔伯特差值时延估计算法,提出了一种新的时延估计方法。该方法运用希尔伯特差值法对循环二次相关峰值进行锐化处理,提高了时延估计精度,能在低信噪比条件下取得更好的时延估计性能。仿真验证了算法的有效性。  相似文献   
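One common form of the Hilbert-difference sharpening, applied here to a generic correlation function r (the paper applies it to the cyclic quadratic correlation); this is a sketch, not the paper's exact formula:

    import numpy as np
    from scipy.signal import hilbert

    def sharpen_hilbert_difference(r):
        # The Hilbert transform of a symmetric correlation peak is odd
        # about the peak, hence zero exactly at it; subtracting its
        # magnitude deepens the contrast around the true delay.
        h = np.imag(hilbert(r))         # Hilbert transform of r
        return np.abs(r) - np.abs(h)

    # usage: tau = np.argmax(sharpen_hilbert_difference(r)) - (len(x) - 1)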

9.
When cross-correlation delay estimation based on LMS adaptive filtering is used in a multistation TDOA location system, a fixed step-size factor forces a harsh trade-off between convergence speed and steady-state misadjustment, degrading delay estimation accuracy. To address this, an optimized cross-correlation delay estimation algorithm based on piecewise variable-step LMS adaptive filtering and the Hilbert difference is proposed. The signals are first filtered with the piecewise variable-step LMS adaptive filter, the filtered signals are then cross-correlated, and finally the peak of the correlation function is sharpened with the Hilbert difference to further improve accuracy. Under identical conditions, the delay estimation accuracy of the different algorithms was simulated; the results show the new optimized algorithm is more accurate, improving on traditional delay estimation methods by more than 2.2% across different SNRs and exhibiting good noise robustness.
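A minimal sketch of the piecewise variable-step LMS stage with two stages and illustrative step sizes (the paper's exact segmentation and its subsequent cross-correlation and Hilbert-difference steps are omitted). The index of the largest converged tap already approximates the delay:

    import numpy as np

    def lms_delay_estimate(x, y, num_taps, stages=((0.05, 0.3), (0.005, 0.7))):
        # stages: (step size, fraction of the data) pairs -- a large step
        # first for fast convergence, then a small step for low
        # steady-state misadjustment, the trade-off the variable step
        # is meant to resolve. Inputs should be roughly unit-power.
        w = np.zeros(num_taps)
        n, pos = len(x), 0
        for mu, frac in stages:
            end = min(pos + int(frac * n), n)
            for i in range(max(pos, num_taps - 1), end):
                u = x[i - num_taps + 1: i + 1][::-1]   # newest sample first
                e = y[i] - w @ u                        # prediction error
                w += mu * e * u                         # LMS weight update
            pos = end
        return int(np.argmax(np.abs(w)))                # tap index ~ delay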

10.
Delay estimation in impulsive noise is a very active research direction in signal processing. For symmetric alpha-stable noise, this paper proposes a delay estimation algorithm for fading shortwave signals based on a nonlinear compression kernel function (NCCF). The nonlinear compression kernel suppresses the heavy tails of the symmetric alpha-stable noise while largely preserving the cross-correlation structure of the target signals, which safeguards the accuracy of the delay estimate. Compared with previous algorithms, it both lowers computational complexity and is more stable for delay estimation of fading signals. Theoretical analysis and simulations show that it outperforms the GCC, FLOC, GCF, and nonlinear-transform algorithms in delay estimation effectiveness and accuracy.

11.
To counter the performance degradation of parameter estimation algorithms for coherently distributed noncircular signals in impulsive noise, this paper proposes the concept of generalized complex correntropy and presents a DOA (direction of arrival) estimation method for coherently distributed noncircular signals based on it. The algorithm first obtains the array output from the distributed-source model, exploits the noncircularity of the signals to form an extended array output, and then extracts the signal subspace from the generalized complex correntropy matrix of the extended output, sidestepping the failure of conventional second-order-statistics algorithms in impulsive noise. Finally, the central DOA is obtained from the rotational invariance of the signal subspace. Simulations show that under alpha-stable noise the proposed algorithm outperforms conventional algorithms.

12.
This paper presents a robust time delay estimation algorithm for α-stable noise based on correntropy. Many time delay estimation algorithms derived for impulsive stable noise rest on the theory of Fractional Lower Order Statistics (FLOS). Unlike previously introduced FLOS-type algorithms, the new algorithm estimates the time delay by maximizing the generalized correlation function of the two observed signals, needing neither prior information nor an estimate of the numerical value of the stable noise's characteristic exponent. An interval for kernel selection is found that holds over a wide range of characteristic exponent values of the α-stable distribution. Simulations show the proposed algorithm offers superior performance over existing covariation and least-mean-p-norm time delay estimation, and achieves slightly better performance than fractional lower-order covariance time delay estimation at low signal-to-noise ratio when the noise is highly impulsive.
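A minimal sketch of the core statistic under a Gaussian kernel (the kernel size sigma is left to the caller; the paper's point is precisely that a workable interval for it exists across characteristic-exponent values):

    import numpy as np

    def tde_correntropy(x, y, max_lag, sigma):
        # V(tau) = E[ exp(-(x(n) - y(n + tau))^2 / (2 sigma^2)) ]: each
        # sample contributes at most 1, so alpha-stable impulses cannot
        # dominate the statistic the way they dominate an ordinary
        # cross-correlation. The delay maximizes V.
        n = len(x)
        best_tau, best_v = 0, -np.inf
        for tau in range(-max_lag, max_lag + 1):
            a = x[max(0, -tau): n - max(0, tau)]
            b = y[max(0, tau): n - max(0, -tau)]
            v = np.mean(np.exp(-(a - b) ** 2 / (2 * sigma ** 2)))
            if v > best_v:
                best_tau, best_v = tau, v
        return best_tau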

13.
14.
A Method for Building Power Amplifier Behavioral Models Based on Cross-Correlation Sequences
This paper proposes a delay estimation algorithm for power amplifier behavioral modeling based on the cross-correlation between the model error signal and the input signal, and on this basis discusses how to build the amplifier behavioral model dynamically. The approach is used to construct a memory polynomial (MP) behavioral model with non-uniform delays. Simulation results show that the proposed delay estimation algorithm effectively locates the dominant delay components in the output signal, cutting the delay depth of the conventional MP model from 6 to 2 while the model NMSE degrades by less than 3 dB, demonstrating that this modeling approach strikes a good balance between model complexity and model accuracy.
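A minimal sketch of fitting an MP model restricted to a non-uniform delay set (the delays and nonlinearity order below are illustrative assumptions; the paper selects the delays from the error-input cross-correlation):

    import numpy as np

    def mp_fit(x, y, delays=(0, 2), order=5):
        # Least-squares fit of
        #   y(n) ~ sum_{q in delays} sum_{k=1..order} a_{kq} x(n-q)|x(n-q)|^(k-1)
        # with x, y complex baseband input/output records.
        n, qmax = len(x), max(delays)
        cols = []
        for q in delays:
            xq = x[qmax - q: n - q]            # branch delayed by q samples
            for k in range(1, order + 1):
                cols.append(xq * np.abs(xq) ** (k - 1))
        Phi = np.stack(cols, axis=1)            # regression matrix
        coeff, *_ = np.linalg.lstsq(Phi, y[qmax:], rcond=None)
        return coeff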

15.
Neighbor discovery enables nodes to discover each other through simple information exchange, which suits the new mobile low-duty-cycle sensor networks (MLDC-WSN). However, because nodes in an MLDC-WSN can move randomly and sleep, the network topology changes frequently, so some nodes need a great deal of energy and time to find their neighbors. How to achieve fast neighbor discovery for all nodes in the network is a difficult problem in current research. To solve it, a new low-latency neighbor discovery algorithm based on multi-beacon messages is proposed. In this algorithm, nodes are discovered by their neighbors through short beacon messages, and by adjusting when and how often beacons are sent, a lower neighbor discovery delay is obtained. Quantitative analysis and simulation experiments show that, compared with existing algorithms, this algorithm can find all neighbor nodes in an MLDC-WSN with less energy consumption, lower latency, and higher probability.

16.
For cooperatively received high-order-modulation PCMA (paired carrier multiple access) signals, a joint estimation of frequency offset and time delay is proposed. Locally stored sequence samples serve as auxiliary data. The frequency offset and time delay are estimated by optimizing an objective function obtained from the cross-correlation of the auxiliary data with the mixed signal, the optimization being carried out by a two-dimensional search. Setting a threshold on the joint estimation greatly reduces the computational load. The modified Cramer-Rao bound (MCRB) for the interference frequency offset and time delay is derived, providing a theoretical benchmark for the proposed algorithm's performance. Simulation results show that the algorithm performs comparably to existing algorithms while its complexity is reduced by two-thirds.
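A minimal sketch of the coarse two-dimensional search over integer-sample delays and a flat frequency grid (the thresholding and the paper's finer refinements are omitted, and all names are mine):

    import numpy as np

    def joint_offset_delay(r, a, f_grid, tau_grid, fs):
        # Correlate the mixed signal r against the known auxiliary
        # sequence a, re-modulated to each trial frequency offset; keep
        # the (f, tau) pair maximizing the correlation magnitude.
        n = np.arange(len(a))
        best_f, best_tau, best_m = 0.0, 0, -np.inf
        for f in f_grid:
            ref = a * np.exp(2j * np.pi * f * n / fs)  # shifted replica
            for tau in tau_grid:
                seg = r[tau: tau + len(a)]
                m = abs(np.vdot(ref, seg))             # vdot conjugates ref
                if m > best_m:
                    best_f, best_tau, best_m = f, tau, m
        return best_f, best_tau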

17.
Adaptive estimation of latency changes in evoked potentials
Changes in the latency of evoked potentials (EP) may indicate clinically and diagnostically important changes in the status of the nervous system. The low signal-to-noise ratio of the EP signal makes it difficult to estimate small, transient, time-varying changes in latency, or delays. Here, the authors present an adaptive algorithm that estimates small delay (latency change) values even when EP signal amplitudes are time-varying. When the delay is time-invariant, the adaptive algorithm produces an unbiased estimate with a delay estimation error of less than half the sampling interval. A lower estimation error variance is obtained when, in a pair of signals, the adaptive algorithm delays the signal with the higher SNR. The adaptive delay estimation algorithm was tested on intra-operative recordings of somatosensory EP, and analysis of those recordings reveals that the anesthetic etomidate produces a step change in the amplitude and latency of the EP signals.

18.

Because conventional blind source separation algorithms cannot handle the array mixing model of wideband signals, an improved algorithm based on beamforming is proposed in this paper. First, the received signals are transformed into the time-frequency domain and the delays of the source signals are estimated. Then, the received signals are compensated with the estimated delay in the frequency domain. Finally, the desired signal is obtained with the Frost wideband beamforming algorithm. Thanks to the new single-source-point extraction and delay estimation methods, the complexity of the proposed algorithm is reduced. A pre-steering delay applied in the frequency domain eliminates the compensation error when the delay is not an integer multiple of the sampling interval, which improves separation performance significantly. Simulation results show that the proposed algorithm adequately solves the delay-mismatch problem and achieves effective wideband blind source separation. Existing algorithms mostly fail for frequency-hopping signals when many time-frequency points overlap; even in this case, the proposed algorithm retains good separation performance.
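The pre-steering compensation amounts to a linear phase per frequency bin, which handles delays that are not integer multiples of the sampling interval without time-domain interpolation. A minimal sketch (circular-shift edge effects ignored; take the real part for real-valued input):

    import numpy as np

    def compensate_delay(x, tau, fs):
        # Multiplying bin f by exp(+j 2 pi f tau) advances x by tau
        # seconds, so fractional-sample delays need no interpolation.
        X = np.fft.fft(x)
        f = np.fft.fftfreq(len(x), d=1.0 / fs)   # physical frequencies
        return np.fft.ifft(X * np.exp(2j * np.pi * f * tau))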


19.
Network-on-chip (NoC) has emerged as a solution to the growing complexity and design challenges of systems on chip. A proper routing algorithm is a key issue in NoC design: an appropriate routing method balances load across the network channels while keeping path lengths as short as possible. This survey investigates the performance of a routing algorithm based on a Hopfield neural network, a dynamic-programming-style approach that provides optimal paths and real-time network monitoring. The aim of this article is to analyse the possibility of using a neural network as a router. The algorithm selects the path with the lowest delay (cost) from source to destination; in other words, the path a message takes depends on the network traffic at the time and is the fastest available. Simulation results show that the proposed approach improves average delay, throughput, and network congestion efficiently, while the increase in power consumption is almost negligible.
