Similar Documents
20 similar documents found (search time: 875 ms)
1.
In a Gaussian fingerprinting system, fingerprint embedding and detection must obey a mean-squared-distortion constraint in order to preserve fidelity. Using probability and statistics, this paper analyzes the relationship among the fingerprint coding rate, the mean-squared-distortion constraint, and the number of colluders in conventional fingerprinting schemes based on threshold correlation detection, and identifies the weakness of threshold correlation detection: once the coding rate exceeds a certain value within the capacity range, detector performance degrades. To overcome this capacity limitation, a Gaussian fingerprinting scheme is proposed that attains the fundamental capacity of digital fingerprinting under a mean-squared-error constraint. Based on a mutual-information game, a detection method that maximizes the penalized Gaussian mutual information is proposed, which effectively resolves the problems of conventional fingerprint detection. Finally, an expression for the fingerprint capacity is derived from its mathematical model.
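The weakness of threshold correlation detection under collusion is easy to reproduce numerically. A minimal sketch, assuming orthogonal Gaussian fingerprints under an MSE budget, an averaging collusion, a non-blind correlation detector, and an arbitrary fixed threshold (none of these specifics are stated in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 10_000, 64, 8        # signal length, users, colluders
Dw = 1.0                       # mean-squared embedding distortion budget

# Gaussian fingerprints scaled so that E[w^2] = Dw (the MSE constraint)
W = rng.standard_normal((M, N)) * np.sqrt(Dw)
host = 10.0 * rng.standard_normal(N)
copies = host + W              # one marked copy per user

# Averaging collusion: K colluders average their copies
forgery = copies[:K].mean(axis=0)

# Threshold correlation detection (non-blind: the detector knows the host)
stats = W @ (forgery - host) / N          # per-user correlation statistics
threshold = 0.5 * Dw                      # illustrative fixed threshold
print("colluder statistics ~ Dw/K =", Dw / K, ":", stats[:K].round(3))
print("accused users:", np.flatnonzero(stats > threshold))  # often empty
```

With K = 8 colluders the statistic of each guilty user shrinks to about Dw/K and falls below the fixed threshold, which is the failure mode the proposed capacity-achieving scheme sets out to fix.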

2.
In the H.264/AVC standard, the context-based adaptive variable-length coding (CAVLC) decoding algorithm is relatively complex. To address this, a new entropy decoder is proposed: it introduces parallel processing into the entropy decoding of the compressed video bitstream and improves the binary-tree method. Experiments using waveform simulation in Quartus II 7.2 and an FPGA hardware implementation show that the entropy decoder performs well in terms of both hardware-resource savings and decoding speed.
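The "binary-tree method" for variable-length codes amounts to walking a prefix tree one bit at a time and emitting a symbol at each leaf. A minimal sketch with a toy code table (not the actual CAVLC tables of the H.264 standard):

```python
# Toy prefix-code table; real CAVLC tables are context-selected.
CODE_TABLE = {"1": "A", "01": "B", "001": "C", "000": "D"}

def build_tree(table):
    """Build a binary prefix tree; leaves hold decoded symbols."""
    root = {}
    for bits, symbol in table.items():
        node = root
        for b in bits[:-1]:
            node = node.setdefault(b, {})
        node[bits[-1]] = symbol
    return root

def decode(bitstream, root):
    """Walk the tree bit by bit, restarting at the root after each leaf."""
    out, node = [], root
    for b in bitstream:
        node = node[b]
        if not isinstance(node, dict):
            out.append(node)
            node = root
    return out

tree = build_tree(CODE_TABLE)
print(decode("101001000", tree))   # -> ['A', 'B', 'C', 'D']
```

This serial bit-by-bit walk is precisely what limits decoding speed, and what the parallel processing in the proposed decoder targets.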

3.
钱宏, 李广侠, 常江. 《计算机应用》, 2011, 31(4): 1145-1147
In its modernization program, the Global Positioning System (GPS) adopted low-density parity-check (LDPC) codes as the channel-coding scheme for its future L1C message. These codes achieve excellent decoding performance, but at increased complexity: the random LDPC codes adopted are difficult to implement in encoder and decoder hardware. Building on the LDPC codes of the 802.16e standard, an enhanced quasi-cyclic LDPC (QC-LDPC) code is proposed whose parity-check matrix has both a quasi-cyclic structure and an approximate lower-triangular structure, with a minimum girth of 8, thereby overcoming the drawbacks of random LDPC codes. Simulation results show that the constructed QC-LDPC code outperforms both the 802.16e LDPC code and the LDPC code used in the GPS L1C message, offering a useful reference for the channel-coding scheme of China's COMPASS navigation system.
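A quasi-cyclic parity-check matrix is defined by a small base matrix of shift values: each entry is either -1 (a Z×Z all-zero block) or a shift s ≥ 0 (the Z×Z identity cyclically shifted by s). A generic expansion sketch (the base matrix below is a toy example, not the enhanced matrix constructed in the paper):

```python
import numpy as np

def expand_qc(base, Z):
    """Expand a QC-LDPC base matrix of shift values into a binary H.
    -1 -> Z x Z zero block; s >= 0 -> identity cyclically shifted by s."""
    I = np.eye(Z, dtype=np.uint8)
    Zero = np.zeros((Z, Z), dtype=np.uint8)
    return np.vstack([np.hstack([Zero if s < 0 else np.roll(I, s, axis=1)
                                 for s in row]) for row in base])

# Toy 2x4 base matrix with lift size Z = 4 (illustrative only)
base = [[0, 1, -1, 2],
        [3, -1, 0, 1]]
H = expand_qc(base, Z=4)
print(H.shape)   # (8, 16)
```

The quasi-cyclic structure is what makes encoder and decoder hardware tractable, and the approximate lower-triangular arrangement of the base matrix enables linear-time encoding.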

4.
Successive cancellation (SC) is a low-complexity serial decoding algorithm for polar codes, and successive cancellation list (SCL) decoding achieves excellent error-correcting performance. However, the SCL decoder suffers from long decoding latency compared with the belief propagation (BP) decoder. In this paper, a low-latency list decoder whose latency can approach that of a BP decoder is proposed. A prunable-subtree recognition scheme based on an H-matrix check is proposed, taking the reliability of frozen bits into account, and a latency-reduced list decoder based on the prunable constituent codes is built on it. Simulation results show that the decoding latency of the proposed list scheme can be reduced significantly, especially in the high signal-to-noise ratio (SNR) region.
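The classical starting point for recognizing prunable subtrees is the frozen-bit pattern: a subtree whose leaves are all frozen (rate-0) or all information bits (rate-1) needs no bit-by-bit traversal. A minimal recursive classifier (the paper's H-matrix reliability check refines this; only the basic rate-0/rate-1 pruning is sketched here):

```python
def classify(frozen):
    """Label a polar decoding subtree by its frozen-bit pattern.
    'rate0'/'rate1' subtrees are prunable; 'mixed' nodes must be split."""
    if all(frozen):
        return "rate0"
    if not any(frozen):
        return "rate1"
    half = len(frozen) // 2
    return ("mixed", classify(frozen[:half]), classify(frozen[half:]))

# Frozen pattern of a toy N = 8 polar code (True = frozen position)
print(classify([True, True, True, False, True, False, False, False]))
```

Every node labeled rate-0 or rate-1 is decoded in a single step; only the "mixed" nodes need serial processing (and, in list decoding, path splitting), which is where the latency reduction comes from.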

5.
In this paper, we show how the Gaussian mixture modeling framework used to develop efficient source encoding schemes can be further exploited to model source statistics during channel decoding in an iterative framework, yielding an effective joint source-channel decoding scheme. The joint probability density function (PDF) of successive source frames is modeled as a Gaussian mixture model (GMM). Based on previous work, the marginal source statistics provided by the GMM are used at the encoder to design a low-complexity memoryless source encoding scheme, which has the specific advantage of providing good estimates of the probability of occurrence of a given source code-point. The proposed iterative decoding procedure works with any channel code whose decoder can implement the soft-output Viterbi algorithm with a priori information (APRI-SOVA) or the BCJR algorithm to provide extrinsic information on each source-encoded bit. The source decoder uses the GMM and the channel decoder output to feed a priori information back to the channel decoder, and decoding proceeds iteratively by trading extrinsic information between the source and channel decoders. Experimental results showing improved decoding performance are provided for the application of speech spectrum parameter compression and communication.
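The a priori information the source decoder returns hinges on one computable quantity: the probability mass the GMM assigns to each quantizer cell (code-point). A 1-D sketch with a toy mixture and codebook (the paper models the joint PDF of successive frames, which this scalar example does not capture):

```python
import math

# Toy 1-D GMM: (weight, mean, std)
GMM = [(0.6, -1.0, 0.5), (0.4, 2.0, 1.0)]

def gauss_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def codepoint_probs(edges):
    """P(code-point) = GMM probability mass of each quantization cell."""
    return [sum(w * (gauss_cdf(hi, mu, s) - gauss_cdf(lo, mu, s))
                for w, mu, s in GMM)
            for lo, hi in zip(edges[:-1], edges[1:])]

edges = [-math.inf, -1.0, 0.0, 1.0, math.inf]      # 4 code-points
print([round(p, 4) for p in codepoint_probs(edges)])  # sums to 1
```

These priors, combined with the channel decoder's extrinsic output, give the bit-level a priori values traded between the two decoders.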

6.
For practical applications in spectrum sensing and multicarrier CDMA signal demodulation, a method is proposed for estimating the subcarrier frequencies of multicarrier CDMA signals from their cyclostationarity using higher-order cyclic cumulants. Because higher-order cyclic cumulants effectively suppress stationary noise and non-stationary Gaussian noise, theoretical analysis shows that, under such noise, the fourth-order cyclic cumulant of a multicarrier CDMA signal with BPSK-modulated subcarriers is nonzero only at cyclic frequencies corresponding to the subcarrier frequencies, so the subcarriers can be estimated by detecting these cyclic frequencies. Since the transmitter may apply different window functions to reduce spectral leakage, the algorithm was simulated with several common window functions as examples, and it was found that, with respect to changes of the window function, the algorithm is not … [abstract truncated in the source]
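A crude way to see why cyclic-frequency detection works: raising a BPSK subcarrier to the fourth power strips the ±1 chips and leaves spectral lines at multiples of the subcarrier frequency. The sketch below scans the fourth-order cyclic moment over candidate cyclic frequencies; it omits the moment-product correction terms of the true fourth-order cyclic cumulant (and thus part of its noise-suppression property), and uses a single subcarrier for clarity:

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(4096)
f0 = 0.12                                   # true subcarrier (cycles/sample)
chips = rng.choice([-1.0, 1.0], size=n.size)
x = chips * np.cos(2 * np.pi * f0 * n) + 0.5 * rng.standard_normal(n.size)

# Fourth-order cyclic moment scan: M4(alpha) = <x(n)^4 e^{-j 2 pi alpha n}>;
# for a real BPSK tone, lines appear at alpha = 2*f0 and 4*f0.
alphas = np.linspace(0.01, 0.49, 961)
m4 = np.array([np.abs(np.mean(x**4 * np.exp(-2j * np.pi * a * n)))
               for a in alphas])

peak = alphas[np.argmax(m4)]
print("strongest cyclic frequency:", round(peak, 3), "~ 2*f0 =", 2 * f0)
```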

7.
For high-definition digital television applications, a CABAC hardware decoder architecture for the H.264 standard is proposed. An efficient SRAM organization improves the decoder's SRAM access efficiency while reducing SRAM area, and the efficient decoder architecture decodes one bit of syntax element per clock cycle, improving decoding speed over software and existing decoders. The design can decode H.264 main-profile bitstreams entirely in hardware, meeting the requirements of high-definition digital television.

8.

This paper presents an improved image-adaptive watermarking technique. Two image watermarks are embedded in the high-entropy 8×8 blocks of the host image. DWT is applied to these blocks using the principle of subband coding, decomposing each high-entropy block into four subbands, of which the approximation and vertical frequency coefficients are modeled with a Gaussian (normal) distribution. The two watermarks are inserted into the host image using an Adjustable Strength Factor (ASF), calculated adaptively from the fourth statistical moment, the kurtosis. Limited side information is transmitted along with the watermarked image, consisting of the high-entropy block positions and the Gaussian distribution parameters. To extract both watermarks from the received image, the block positions sent in the side information allow DWT to be applied to recover the approximation and vertical frequency coefficients, which are again modeled with a Gaussian distribution to estimate its parameters; this lets the Maximum Likelihood (ML) decoder recover the watermarks through a statistical approach. The paper makes two main contributions. First, adjustable kurtosis values improve the capacity and robustness of the proposed technique. Second, the method is applied to medical images and performs better than existing methods. Its efficiency is further confirmed by simulation results using PSNR, NCC, SSIM, and GMSD under different attacks; the watermarks survive these attacks, making the technique highly robust and supporting security and copyright protection.
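The adaptive part, scaling the embedding strength by a kurtosis statistic of the block, can be sketched as below. The kurtosis-to-strength mapping and the use of pywt/scipy are our own illustrative assumptions; the paper's exact ASF formula is not given in the abstract:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
block = rng.normal(128.0, 20.0, (8, 8))      # stand-in high-entropy 8x8 block

cA, (cH, cV, cD) = pywt.dwt2(block, "haar")  # one-level DWT of the block

# Hypothetical ASF: embed more strongly when the approximation coefficients
# are closer to Gaussian (Fisher kurtosis near 0 in scipy's convention).
k = kurtosis(cA.ravel())
asf = 1.0 / (1.0 + abs(k))

w = rng.choice([-1.0, 1.0], size=cA.shape)   # watermark bits for this block
cA_marked = cA + asf * w                     # embed in the approximation band
marked = pywt.idwt2((cA_marked, (cH, cV, cD)), "haar")
print("kurtosis:", round(k, 3), "ASF:", round(asf, 3))
print("reconstructed block shape:", marked.shape)
```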


9.
Digital fingerprinting is a technique to deter unauthorized redistribution of multimedia content by embedding a unique identifying signal in each legally distributed copy. The embedded fingerprint can later be extracted and used to trace the originator of an unauthorized copy. A group of users may collude and attempt to create a version of the content that cannot be traced back to any of them. As multimedia data is commonly stored in compressed form, this paper addresses the problem of fingerprinting compressed signals. Analysis is carried out to show that due to the quantized nature of the host signal and the embedded fingerprint, directly extending traditional fingerprinting techniques for uncompressed signals to the compressed case leads to low collusion resistance. To overcome this problem and improve the collusion resistance, a new technique for fingerprinting compressed signals called Anti-Collusion Dither (ACD) is proposed, whereby a random dither signal is added to the compressed host before embedding so as to make the effective host signal appear more continuous. The proposed technique is shown to reduce the accuracy with which attackers can estimate the host signal, and from an information theoretic perspective, the proposed ACD technique increases the maximum number of users that can be supported by the fingerprinting system under a given attack. Both analytical and experimental studies confirm that the proposed technique increases the probability of identifying a guilty user and can approximately quadruple the collusion resistance compared to conventional Gaussian fingerprinting.
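The core of Anti-Collusion Dither is one line: add a random dither to the quantized host before fingerprint embedding so the effective host looks continuous to the colluders. A minimal sketch, assuming uniform dither over one quantization step and an averaging collusion (both our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, K, step = 100_000, 10, 8.0

host = 50.0 * rng.standard_normal(N)
host_q = step * np.round(host / step)       # compressed (quantized) host

def fingerprint_copies(base, K):
    """Embed i.i.d. unit-variance Gaussian fingerprints into a base signal."""
    return base + rng.standard_normal((K, N))

# Without dither: averaging K copies re-centres on the quantized host,
# so colluders estimate it to within the fingerprint power / K.
plain = fingerprint_copies(host_q, K).mean(axis=0)

# Anti-Collusion Dither: dither the quantized host before embedding,
# which degrades the colluders' host estimate.
dither = rng.uniform(-step / 2, step / 2, N)
acd = fingerprint_copies(host_q + dither, K).mean(axis=0)

print("host-estimate MSE, no dither:", np.mean((plain - host_q) ** 2).round(3))
print("host-estimate MSE, with ACD:", np.mean((acd - host_q) ** 2).round(3))
```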

10.
季策, 靳超, 张颍. 《控制与决策》, 2020, 35(3): 651-656
To achieve blind separation of multi-Gaussian sources and correlated sources, time-varying autoregressive (TVAR) theory from the fault-diagnosis field is applied, on top of the fast approximate joint diagonalization (FAJD) algorithm, to the blind separation of correlated source signals and multi-Gaussian source signals. First, the source signals are modeled with a TVAR model, and whitening preprocessing gives the modeled sources a jointly diagonalizable structure. Then, by approximating each time-varying parameter as a weighted sum of known basis functions, the parameters become time-invariant, and the model coefficient matrices are solved by recursive least squares. Finally, the resulting coefficient matrices serve as the target matrix set for fast approximate joint diagonalization, and the FAJD algorithm separates the mixed signals. Matlab simulations verify that the proposed algorithm is effective for separating correlated sources and multi-Gaussian sources. Owing to the favorable properties of the TVAR model, the algorithm is well suited to blind separation of mixed communication signals.
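The basis-function step, turning time-varying AR coefficients a_k(n) = Σ_j c_{k,j} f_j(n) into time-invariant unknowns c_{k,j}, reduces to ordinary least squares once the regressors x(n-k)·f_j(n) are stacked. A scalar sketch with polynomial basis functions (the paper solves this recursively with RLS and then feeds the coefficient matrices to FAJD, neither of which is reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 2000, 2                                  # samples, AR order
t = np.linspace(0.0, 1.0, N)
basis = np.stack([np.ones(N), t, t**2])         # q = 3 polynomial basis functions

# Synthesize a TVAR(2) signal with slowly varying coefficients
a1 = 1.2 - 0.8 * t
a2 = -0.5 + 0.3 * t**2
x = np.zeros(N)
for n in range(2, N):
    x[n] = a1[n] * x[n-1] + a2[n] * x[n-2] + rng.standard_normal()

# Stack regressors x(n-k) * f_j(n); unknowns are time-invariant weights c_{k,j}
rows = np.column_stack([x[p-k:N-k] * basis[j, p:]
                        for k in range(1, p + 1) for j in range(basis.shape[0])])
c, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)

a1_hat = c[0:3] @ basis                         # reconstruct a1(n)
print("a1(0.5) true vs estimated:", 1.2 - 0.8 * 0.5, a1_hat[N // 2].round(3))
```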

11.
A blind extraction algorithm for meaningful watermarks based on matched filtering is proposed. By constructing an orthogonal watermark signal set based on the feature (eigen-) subspace, watermark extraction requires no original image and the bit error rate can be kept low; thanks to the decomposition properties of the eigen-subspace, the method is simple and effective. Because the watermark signal set is generated uniquely from the original image and is to some extent non-invertible, attackers are prevented from forging meaningful watermarks with illegally generated watermark sets. Experimental results demonstrate good robustness and imperceptibility. To evaluate the reliability of the extracted characters, a confidence threshold is also proposed, which effectively assesses the reliability of individual characters and of the whole watermark, remedying the shortcoming of algorithms that extract meaningful watermarks without any assessment.
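Blind extraction with a matched filter reduces to correlating the received image with each signal of an orthogonal watermark set and thresholding the outputs. A sketch with a QR-orthogonalized random watermark set standing in for the paper's eigen-subspace construction, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)
N, n_chars = 4096, 16

# Orthogonal watermark set: rows of W are orthogonal, ||W_i||^2 = N
Q, _ = np.linalg.qr(rng.standard_normal((N, n_chars)))
W = Q.T * np.sqrt(N)

image = rng.normal(128.0, 30.0, N)       # stand-in host image (flattened)
bits = rng.integers(0, 2, n_chars) * 2 - 1
marked = image + 2.0 * bits @ W          # embed one signal per character bit

# Matched-filter extraction: correlate with each watermark signal.
# Removing the mean suppresses the host's DC interference (blind: no host).
stats = W @ (marked - marked.mean()) / N
decoded = np.where(stats > 0, 1, -1)

# Per-character confidence: statistic magnitude vs. the embedding strength
confidence = np.abs(stats) / 2.0
print("bit errors:", int(np.sum(decoded != bits)))
print("min per-character confidence:", confidence.min().round(2))
```

Comparing each confidence value against a threshold is the kind of per-character reliability check the proposed confidence threshold formalizes.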

12.
徐启迪, 刘争红, 郑霖. 《计算机应用》, 2022, 42(12): 3841-3846
With the development of communication technology, communication terminals increasingly use software to support multiple standards and protocols. The traditional software-defined radio architecture, which uses the computer's central processing unit (CPU) as its computing unit, cannot meet the throughput requirements of broadband high-speed wireless systems such as multiple-input multiple-output (MIMO). To address this, an acceleration method for a low-density parity-check (LDPC) decoder based on a graphics processing unit (GPU) is proposed. First, based on a theoretical analysis of GPU-parallel heterogeneous computing in GNU Radio 4G/5G physical-layer signal processing, the layered normalized min-sum (LNMS) algorithm is adopted for its higher parallel efficiency. Second, decoding latency is reduced through a global synchronization strategy, careful allocation of GPU memory, and stream parallelism, while GPU multithreading parallelizes the LDPC decoding flow. Finally, the GPU-accelerated decoder is implemented and verified on a software-defined-radio platform, and the parallel decoder's bit-error-rate performance and acceleration bottlenecks are analyzed. Experimental results show that, compared with traditional serial CPU processing, the CPU+GPU heterogeneous platform speeds up LDPC decoding by roughly 200 times, with decoder throughput above 1 Gb/s; the gain over traditional decoders is especially large for large-scale data.
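The layered normalized min-sum update that the GPU kernels parallelize can be written compactly: per check row, subtract the old check message, take the normalized min-sum over the other neighbors, and write the posterior LLR back immediately so later layers see it. A small CPU reference sketch (toy parity-check matrix; the normalization factor 0.75 is a typical choice, not a value from the paper):

```python
import numpy as np

def lnms_decode(H, llr, alpha=0.75, iters=20):
    """Layered normalized min-sum decoding of a binary LDPC code."""
    M, N = H.shape
    rows = [np.flatnonzero(H[m]) for m in range(M)]
    R = np.zeros((M, N))               # check-to-variable messages
    L = llr.astype(float).copy()       # posterior LLRs, updated layer by layer
    for _ in range(iters):
        for m in range(M):             # each check row is one layer
            idx = rows[m]
            T = L[idx] - R[m, idx]     # variable-to-check messages
            sgn = np.prod(np.sign(T)) * np.sign(T)   # sign product of the others
            aT = np.abs(T)
            two_min = np.partition(aT, 1)[:2]
            ext_min = np.where(aT == two_min[0], two_min[1], two_min[0])
            R[m, idx] = alpha * sgn * ext_min        # normalized min-sum update
            L[idx] = T + R[m, idx]     # write back: later layers see fresh LLRs
        if not np.any((H @ (L < 0)) % 2):            # all checks satisfied
            break
    return (L < 0).astype(int)

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(lnms_decode(H, np.array([2.1, -0.9, 1.3, 0.4, 1.8, -1.2])))
```

On the GPU, the per-row min/sign reductions and the per-variable write-backs map naturally onto thread blocks, which is where the reported 200× speedup comes from.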

13.
A software-defined-radio platform for wireless signal acquisition and processing is designed around the Tilepro64 many-core processor. The wireless intermediate-frequency signal is amplified, filtered, and down-converted by an FPGA-based analog front end, then enters the Tilepro64-based digital processing module, where it is demodulated and decoded before the information is sent to the host computer over the XAUI interface.

14.
Since the layered low-density parity-check (LDPC) decoding algorithm was proposed, its diversity gain has been exploited to achieve performance comparable to traditional two-phase message passing (TPMP) decoding while converging about twice as fast. To reduce the decoding time of a layered LDPC decoder, a graphics processing unit (GPU) is used as the modem processor so that decoding can run in parallel across numerous GPU threads. In this paper, we present parallel algorithms and efficient GPU implementations for two different layered message-passing schemes, row-layered and column-layered decoding. In the experiments, the quasi-cyclic LDPC codes for WiFi (802.11n) and WiMAX (802.16e) are decoded by the proposed layered LDPC decoders. The experimental results show that our decoder achieves bit error ratio (BER) performance comparable to a TPMP decoder. The peak throughput is 712 Mbps, which is about two orders of magnitude faster than a CPU implementation and comparable to dedicated hardware solutions. Compared with the fastest existing GPU-based implementation, the presented decoder achieves a 2.3× performance improvement. Copyright © 2013 John Wiley & Sons, Ltd.

15.
In this article, a modified complex-valued FastICA algorithm is used to extract the Gaussian noise component from the mixtures so that the estimated component is as independent as possible from the other, non-Gaussian signal components. Once this noise basis vector is obtained, the direction of arrival can be estimated by searching the array manifold for direction vectors that are as orthogonal as possible to it, which is effective even for highly correlated signals with closely spaced directions. Simulation results show the superior resolution of the proposed method compared with the conventional multiple signal classification (MUSIC) method, the spatial-smoothing MUSIC method, and the signal-subspace-scaled MUSIC method.
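Given the noise basis vector b, direction finding is a search over the array manifold for steering vectors most orthogonal to b, i.e. peaks of 1/|a(θ)ᴴb|². A sketch for a uniform linear array, where b is constructed directly from the true steering matrix's null space for illustration instead of by the modified complex FastICA, which is not reimplemented here:

```python
import numpy as np
from scipy.signal import find_peaks

M, d = 8, 0.5                            # sensors, spacing in wavelengths
true_doas = np.deg2rad([18.0, 24.0])     # closely spaced sources

def steer(theta):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

# Stand-in for the FastICA estimate: a unit vector in the noise subspace,
# i.e. orthogonal to the signal steering vectors.
A = np.column_stack([steer(t) for t in true_doas])
b = np.linalg.svd(A.conj().T)[2].conj().T[:, -1]

# Scan the array manifold for directions most orthogonal to b
grid = np.deg2rad(np.linspace(-90.0, 90.0, 3601))
spectrum = np.array([1.0 / np.abs(steer(t).conj() @ b) ** 2 for t in grid])

pk, _ = find_peaks(spectrum)
top2 = pk[np.argsort(spectrum[pk])[-2:]]
print("estimated DOAs (deg):", sorted(np.rad2deg(grid[top2]).round(1)))
```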

16.
In this paper, a new scaling-based image-adaptive watermarking system is presented, which exploits a human visual model to adapt the watermark data to the local properties of the host image. Its improved robustness comes from embedding in the low-frequency wavelet coefficients and from optimal control of the strength factor from the HVS point of view. A maximum-likelihood (ML) decoder aided by channel side information is used. The performance of the proposed scheme is calculated analytically and verified by simulation. Experimental results confirm the imperceptibility of the proposed method and its higher robustness against attacks compared with alternative watermarking methods in the literature.
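For an additive spread-spectrum watermark in Gaussian host coefficients whose per-coefficient variances are known as side information, the ML bit decision reduces to a variance-weighted correlation. A sketch under exactly those assumptions (the paper's scaling-based embedding rule is not spelled out in the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256                                  # low-frequency wavelet coefficients
sigma = rng.uniform(5.0, 15.0, n)        # per-coefficient host std (side info)
x = sigma * rng.standard_normal(n)       # Gaussian host coefficients
w = rng.choice([-1.0, 1.0], n)           # spreading sequence
gamma = 2.0                              # strength factor (HVS-controlled in the paper)
b = -1                                   # embedded bit

y = x + b * gamma * w                    # watermarked coefficients

# ML decision for b in {+1, -1} under the Gaussian host model: the
# log-likelihood ratio is proportional to sum_i w_i * y_i / sigma_i^2.
stat = np.sum(w * y / sigma**2)
print("decoded bit:", 1 if stat > 0 else -1, "(embedded:", b, ")")
```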

17.
Asifullah, Syed Fahad, Abdul, Tae-Sun. 《Pattern recognition》, 2008, 41(8): 2594-2610
We present an innovative scheme for blindly extracting message bits when a watermarked image has been distorted. The scheme exploits machine learning (ML) approaches to classify the embedded bits nonlinearly, adaptively modifying the decoding strategy in view of the anticipated attack; the extraction of bits is treated as a binary classification problem. Conventionally, a hard decoder is used under the assumption that the underlying distribution of the discrete cosine transform coefficients does not change appreciably. However, under attacks arising in real-world applications of watermarking, such as JPEG compression in shared medical image warehouses, these coefficients are heavily altered: the sufficient statistics of the maximum-likelihood decoding process, which serve as features in the proposed scheme, overlap at the receiving end, and a simple hard decoder fails to classify them properly. In contrast, the proposed ML decoding model attains the highest accuracy on the test data. Experimental results show that, through its training phase, the proposed decoding scheme copes with the feature alterations introduced by a new attack and achieves a promising improvement in bit correct ratio compared with the existing decoding scheme.
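The shift from a fixed hard threshold to a trained classifier can be sketched with any binary learner on the decoding statistics. Below, a plain logistic regression stands in for the paper's ML models, and Gaussian features whose classes shrink and shift under a simulated "attack" stand in for the actual DCT-domain sufficient statistics, neither of which is specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(8)

def features(bits, attacked):
    """Toy sufficient statistics: class-conditional Gaussians. The simulated
    'attack' shrinks and biases them, so a fixed hard threshold misfires."""
    f = np.where(bits == 1, 1.0, -1.0)[:, None] \
        + 1.2 * rng.standard_normal((bits.size, 2))
    return 0.35 * f + 0.4 if attacked else f

bits_tr = rng.integers(0, 2, 4000)
X = np.hstack([features(bits_tr, attacked=True), np.ones((4000, 1))])

w = np.zeros(3)                        # logistic regression, gradient descent
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - bits_tr) / len(bits_tr)

bits_te = rng.integers(0, 2, 2000)
F = features(bits_te, attacked=True)
pred = np.hstack([F, np.ones((2000, 1))]) @ w > 0
hard = F.mean(axis=1) > 0              # naive fixed hard-threshold decoder
print("hard-threshold BCR:", np.mean(hard == bits_te).round(3))
print("trained-decoder BCR:", np.mean(pred == bits_te).round(3))
```

Training on features drawn under the anticipated attack is what lets the learned decision boundary follow the shifted statistics, mirroring the bit-correct-ratio gains the paper reports.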

18.
A multi-bit decision for polar codes based on a simplified successive cancellation (SSC) decoding algorithm can improve the throughput of polar decoding, and a list algorithm improves the error-correcting performance; however, list decoders are far more complex than decoders without a list algorithm. In this paper, a low-complexity list decoder is proposed in which path-splitting operations for a multi-bit decision are avoided whenever the decoding reliability exceeds a threshold, determined from the reliability of the subchannels and the positions of the decoding nodes. Path-splitting rules are designed for the multi-bit decision processes, and a complexity-reduced list decoder is built on them. Results show that the number of surviving paths can be greatly reduced at the cost of negligible deterioration in block-error performance, so the computational complexity is significantly reduced, especially in the high signal-to-noise ratio (SNR) region.
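The splitting rule can be condensed to: if the least reliable LLR in a multi-bit node clears a threshold, commit the hard decision on a single path; otherwise enumerate candidate paths by flipping the least reliable bits. A toy sketch of that decision step (the paper's threshold depends on subchannel reliabilities and node positions; here it is just a constant):

```python
import numpy as np
from itertools import product

def expand_paths(llrs, threshold=3.0, max_flip=2):
    """Return candidate bit-vectors for one multi-bit decision node."""
    hard = (llrs < 0).astype(int)
    if np.min(np.abs(llrs)) >= threshold:
        return [hard]                    # reliable node: no path splitting
    # Unreliable node: split on the least reliable positions
    weak = np.argsort(np.abs(llrs))[:max_flip]
    paths = []
    for flips in product([0, 1], repeat=len(weak)):
        cand = hard.copy()
        cand[weak] ^= np.array(flips)
        paths.append(cand)
    return paths

print(len(expand_paths(np.array([4.2, -5.1, 3.8, -6.0]))))  # 1 path
print(len(expand_paths(np.array([0.4, -5.1, 1.2, -6.0]))))  # 4 paths
```

Skipping the split on reliable nodes is what cuts the number of surviving paths, and hence the complexity, most sharply at high SNR, where most nodes are reliable.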

19.
In this paper, an efficient algorithm is proposed to improve the decoding efficiency of the context-based adaptive variable-length coding (CAVLC) procedure. Owing to the data dependency among symbols in the decoding flow, the CAVLC decoder requires a large computation time, which dominates overall decoder system performance. To expedite decoding, the critical path of the CAVLC decoder is first analyzed and then shortened by forwarding the adaptive detection of succeeding symbols. With the shortened critical path, the CAVLC architecture is divided into two segments that are easily implemented as a pipeline, effectively improving overall performance. In the hardware implementation, a low-power combined LUT and a single output buffer are adopted to reduce area and power consumption without affecting decoding performance. Experimental results show that the proposed architecture surpasses other recent designs, reducing power consumption by approximately 40% and tripling the decoding speed relative to the original decoding procedure suggested in the H.264 standard. The maximum frequency exceeds 210 MHz, which easily supports real-time decoding at resolutions above the HD1080 format.

20.
To address parameter selection in the design of efficient LDPC decoders, a discrete density-evolution algorithm is proposed for the turbo decoding message passing (TDMP) decoding algorithm and used to optimize the correction factor and quantization precision of the decoder. Compared with the traditional approach of optimizing through numerical simulation, this algorithm is far more efficient and markedly effective: test results show that the optimized fixed-point decoder performs within about 0.1 dB of a pure floating-point simulation. For the decoder architecture, a distributed-RAM-based circular storage structure for P messages is proposed; compared with the traditional storage structure based on registers and a Benes network, resource consumption drops markedly. Hardware implementation and testing on a Xilinx FPGA platform show advantages over comparable decoders in both resource consumption and throughput, making this an efficient LDPC hardware decoder.
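Density evolution proper tracks the full quantized message distributions through the decoder; as a quick Monte-Carlo stand-in, a correction (normalization) factor can be estimated by matching the mean magnitude of the min-sum check-node output to the exact sum-product output. A sketch under assumed degree and noise parameters (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(9)
dc, trials, sigma = 6, 200_000, 0.8     # check degree, samples, AWGN noise std

# Channel-like LLRs for the dc-1 inputs of a check node (all-ones codeword)
L = 2.0 / sigma**2 * (1.0 + sigma * rng.standard_normal((trials, dc - 1)))

# Exact sum-product check update vs. (unnormalized) min-sum update
sp = 2.0 * np.arctanh(np.clip(np.prod(np.tanh(L / 2.0), axis=1),
                              -0.999999, 0.999999))
ms = np.prod(np.sign(L), axis=1) * np.min(np.abs(L), axis=1)

alpha = np.mean(np.abs(sp)) / np.mean(np.abs(ms))
print("suggested correction factor:", round(alpha, 3))   # typically ~0.7-0.9
```

A full discrete density evolution would instead propagate the quantized message PMFs through these same update rules, which is how both the correction factor and the quantization precision are optimized jointly.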
