Similar Literature
20 similar documents found (search time: 156 ms)
1.
In the H.264 standard, entropy coding consists of Exp-Golomb coding together with context-adaptive variable-length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC). This paper gives a comprehensive introduction to entropy coding in the H.264 video compression standard and to the principles of Exp-Golomb coding, focuses on the hardware implementation of an Exp-Golomb encoder, and presents hardware synthesis results.
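As a concrete illustration of the order-0 Exp-Golomb code discussed in this abstract (a minimal software sketch, not the paper's hardware design), the encoder prepends as many zeros as there are bits after the leading 1 of n+1:

```python
def exp_golomb_encode(n):
    """Encode a non-negative integer as an order-0 Exp-Golomb codeword."""
    code = n + 1
    num_bits = code.bit_length()
    # (num_bits - 1) zero-prefix bits, then the binary form of n + 1
    return "0" * (num_bits - 1) + format(code, "b")

def exp_golomb_decode(bits):
    """Decode the first Exp-Golomb codeword in a bit string.

    Returns (value, bits_consumed)."""
    leading_zeros = 0
    while bits[leading_zeros] == "0":
        leading_zeros += 1
    length = 2 * leading_zeros + 1
    value = int(bits[:length], 2) - 1
    return value, length
```

For example, 4 encodes to `00101`, and decoding consumes exactly those 5 bits, which is what lets a decoder parse codewords from an unframed bitstream.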

2.
Hardware Design of a Variable-Length Decoder for AVS   Cited by: 1 (self-citations: 0, others: 1)
Liu Qunxin. Modern Electronics Technique, 2007, 30(23): 185-187, 194
AVS is China's independently developed audio-video coding standard. This paper briefly introduces the features of the video compression part of the AVS standard, studies and optimizes the principles and techniques of AVS variable-length entropy decoding, and adopts a parallel decoding structure to achieve real-time decoding. On this basis, a hardware design structure for decoding the variable-length Exp-Golomb codes of the AVS video coding standard is proposed, and FPGA simulation results for the implemented hardware structure are given.

3.
H.264/AVC is the new-generation video compression standard jointly developed by ITU-T and ISO/IEC. Exp-Golomb coding is an important component of entropy coding; it encodes syntax elements extracted from the bitstream and, together with context-adaptive variable-length coding (CAVLC) or context-adaptive binary arithmetic coding (CABAC), constitutes the entropy coder. This paper gives a comprehensive introduction to the principles of Exp-Golomb coding and to the syntax elements handled by the proposed Exp-Golomb encoder, with emphasis on its hardware implementation. Experimental results show that the design meets the quality and speed requirements of real-time encoding for 1920×1080 video sequences.

4.
H.264 is the latest video coding standard, offering a very high compression ratio and good adaptability to IP and wireless network channels. This paper studies the Exp-Golomb decoding algorithm in H.264 and hardware implementations of the decoder. In a practical chip design, a suitable hardware circuit can be chosen according to the required performance targets.

5.
Thanks to its high compression ratio and good picture quality, the H.264 video compression standard has been widely accepted. Because H.264 decoding is highly complex, software implementations can hardly meet real-time requirements, so hardware decoding is needed. This paper proposes a hardware design structure for decoding the variable-length Exp-Golomb codes of the H.264 video coding standard, presents a design scheme that reduces both decoding time and system resource usage, and finally gives the simulation results and back-end design results.

6.
This paper first reviews and compares several of the latest video coding standards. Entropy coding is a topic that every video coding standard must study carefully. To reduce the efficiency loss caused by source mismatch, an adaptive coding technique, the adaptive Exp-Golomb code, is proposed and compared with adaptive arithmetic coding. Analysis and simulation both show that even when the source statistics vary over a wide range, the adaptive Exp-Golomb code is robust enough to maintain high coding efficiency (in over 90% of cases), while retaining the simplicity of Exp-Golomb and Golomb-Rice codes.

7.
Hardware Design and Implementation of an H.264 Exp-Golomb Decoding Unit   Cited by: 5 (self-citations: 3, others: 2)
Yao Dong, Yu Lu. Video Engineering, 2004, (11): 14-16, 23
A hardware design structure for decoding the variable-length Exp-Golomb codes of the H.264 video coding standard is proposed. The traditional barrel shifter is optimized: a PLA-based parallel decoding algorithm is used to achieve real-time decoding, assisted by a serial decoding algorithm to reduce hardware resource consumption. The circuit resources are optimized while guaranteeing real-time decoding of bitstreams conforming to the H.264 Baseline Profile. FPGA simulation results and the ASIC hardware scale of the implemented structure are given.

8.
Chen Haiyan. Video Engineering, 2012, 36(5): 18-19, 48
In CABAC, large MV and Level values are first binarized into Exp-Golomb codes and then encoded bit by bit as equiprobable symbols by the arithmetic coding engine. An Exp-Golomb code consists of a leading-ones part and a fixed-length part. A multi-bit parallel decoding algorithm is proposed for each of the two parts. Experimental results show that, compared with bit-by-bit decoding, the proposed algorithm provides a speedup of up to 2.5×.

9.
Design and Implementation of an FPGA-Based AVS Entropy Decoding Module   Cited by: 1 (self-citations: 0, others: 1)
A VLSI implementation scheme of an entropy decoding module for AVS is proposed. Addressing decoding speed, implementation complexity, and cooperation among system modules, a hardware implementation method is given that reduces decoding time and system resource usage.

10.
Zhou Jian. Electronic Engineer, 2010, 36(9): 17-19
In high-speed data communication systems, differential encoding and decoding are commonly used to reduce system complexity. After introducing the principles of quasi-orthogonal unitary space-time modulation coding and decoding, this paper presents the hardware implementation scheme and flowcharts of the codec, and implements it in C and assembly language on a TMS320C5416 DSP. Results measured in the RTDX real-time debugging environment show that the bit error rate obtained in hardware is close to the MATLAB simulation results, verifying that the codec design in the DSP environment meets practical requirements. Moreover, the simplified decoding algorithm proposed in the paper reduces the DSP decoding time.

11.
We propose new multistage interconnection networks (MIN) for scalable parallel Viterbi decoder architectures. The architecture consists of the desired number of processing elements (PE) connected by the suggested MINs, thus allowing a tradeoff between complexity and speed. The structure of the MIN is derived first by transforming the de Bruijn interconnection-based Viterbi algorithm trellis into the equivalent trellis with a perfect shuffle interconnection, and then applying a new decomposition of the perfect shuffle operator. This results in an efficient modular system whose data flow is formed by shuffling in local PE memory and by data exchange through a fixed interconnection between PEs. We suggest several solutions for 1/n and k/n rate codes, where k denotes the number of input bits shifting into k shift registers of the encoder and, at each cycle, the encoder produces n output bits as linear combinations of certain bits in the shift registers.
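The perfect-shuffle interconnection mentioned above maps each trellis state to a one-bit left rotation of its index; a small sketch of that index mapping (illustrative only, not the authors' MIN decomposition):

```python
def shuffle_index(i, n_bits):
    """Perfect shuffle on 2**n_bits states: rotate the index left by one bit."""
    mask = (1 << n_bits) - 1
    return ((i << 1) | (i >> (n_bits - 1))) & mask
```

Applying the rotation n_bits times returns every state to itself, which is why a fixed shuffle interconnection plus local memory suffices to route all trellis transitions.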

12.
For the inverse-quantization module and the perceptual noise substitution (PNS) module of the AAC (Advanced Audio Coding) decoder, whose complex computations make them hard to implement in hardware, an approximation method based on partition and linear interpolation (PLI) is proposed. The basic idea is to partition the complex curve into multiple intervals and approximate it by linear interpolation within each interval. The inverse-quantization and PNS decoding modules designed with this method achieve high computational accuracy, with maximum error rates of 2e-4 and 5e-3 respectively; all computations decompose into simple table lookups, multiplications, additions, and shifts, making hardware implementation easy. FPGA verification shows that the inverse-quantization module consumes only 45 logic elements and 2,304 ROM bits, and the PNS decoding module only 21 logic elements and 144 ROM bits, while the decoded audio quality is good, with no perceptible noise.
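The partition-and-linear-interpolation idea can be sketched generically as follows (the breakpoints, table sizes, and fixed-point formats here are illustrative assumptions, not the paper's actual design):

```python
import bisect

def build_table(f, xs):
    """Tabulate f at the sorted interval breakpoints xs."""
    return list(xs), [f(x) for x in xs]

def pli(table, x):
    """Approximate f(x) by linear interpolation between the two
    breakpoints surrounding x."""
    xs, ys = table
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])
```

For instance, tabulating the AAC inverse-quantization curve q^(4/3) at a few breakpoints, `build_table(lambda q: q ** (4 / 3), range(0, 129, 8))`, already keeps the relative error well under 1% between breakpoints, and the per-sample work reduces to one table lookup, one multiply, and one add.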

13.
This paper implements an encoding and decoding algorithm for low-density parity-check (LDPC) codes on a field-programmable gate array (FPGA). Using a Q-matrix-based LDPC code construction method, an encoder with linear complexity is designed. Based on soft-decision decoding rules, a fully parallel decoder for a rate-1/2, length-40 quasi-regular LDPC code is implemented and verified by simulation. The decoder's complexity grows linearly with code length; compared with Turbo codes it is easier to implement in hardware and can achieve higher transmission rates.
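The flavor of parity-check decoding can be shown with a toy hard-decision bit-flipping sketch over a small (7,4) parity-check matrix; this is an illustration only, not the paper's Q-matrix construction or its soft-decision decoder:

```python
# Toy (7,4) Hamming parity-check matrix; each row is one parity equation.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def bit_flip_decode(r, H, max_iters=10):
    """Repeatedly flip the bit involved in the most unsatisfied parity
    checks until every check is satisfied (or iterations run out)."""
    r = list(r)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
        if not any(syndrome):
            return r            # all checks satisfied
        votes = [sum(s * row[j] for s, row in zip(syndrome, H))
                 for j in range(len(r))]
        r[votes.index(max(votes))] ^= 1
    return r
```

Because each bit touches only a few checks, both the syndrome and the vote counts are sparse computations, which is what makes fully parallel hardware decoders like the paper's feasible.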

14.
A sequence y = (y_1, ..., y_n) is said to be a coarsening of a given finite-alphabet source sequence x = (x_1, ..., x_n) if, for some function φ, y_i = φ(x_i) (i = 1, ..., n). In lossless refinement source coding, it is assumed that the decoder already possesses a coarsening y of a given source sequence x. It is the job of the lossless refinement source encoder to furnish the decoder with a binary codeword B(x|y) which the decoder can employ in combination with y to obtain x. We present a natural grammar-based approach for finding the binary codeword B(x|y) in two steps. In the first step of the grammar-based approach, the encoder furnishes the decoder with O(√n log_2 n) code bits at the beginning of B(x|y) which tell the decoder how to build a context-free grammar G_y which represents y. The encoder possesses a context-free grammar G_x which represents x; in the second step of the grammar-based approach, the encoder furnishes the decoder with code bits in the rest of B(x|y) which tell the decoder how to build G_x from G_y. We prove that our grammar-based lossless refinement source coding scheme is universal in the sense that its maximal redundancy per sample is O(1/log_2 n) for n source samples, with respect to any finite-state lossless refinement source coding scheme. As a by-product, we provide a useful notion of the conditional entropy H(G_x|G_y) of the grammar G_x given the grammar G_y, which is approximately equal to the length of the codeword B(x|y).
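The coarsening relation y_i = φ(x_i) can be made concrete with a small example (the alphabet and the map φ below are illustrative assumptions, not taken from the paper):

```python
# Map a 4-letter source alphabet onto a 2-symbol coarse alphabet.
phi = {"a": 0, "b": 0, "c": 1, "d": 1}

def coarsen(x):
    """Apply the symbol-wise map phi to produce the coarsening y of x."""
    return [phi[s] for s in x]
```

A refinement decoder holding y = coarsen(x) already knows, at each position, which preimage set of φ the source symbol came from; the codeword B(x|y) need only resolve the choice within that set.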

15.
In some video coding applications, it is desirable to reduce the complexity of the video encoder at the expense of a more complex decoder. Wyner–Ziv (WZ) video coding is a new paradigm that aims to achieve this. To allocate a proper number of bits to each frame, most WZ video coding algorithms use a feedback channel, which allows the decoder to request additional bits when needed. However, due to these multiple bit requests, the complexity and the latency of WZ video decoders increase massively. To overcome these problems, in this paper we propose a rate allocation (RA) algorithm for pixel-domain WZ video coders. This algorithm estimates at the encoder the number of bits needed for the decoding of every frame while still keeping the encoder complexity low. Experimental results show that, by using our RA algorithm, the number of bit requests over the feedback channel—and hence, the decoder complexity and the latency—are significantly reduced. Meanwhile, a very near-to-optimal rate-distortion performance is maintained. This work has been partially supported by the Spanish Ministry of Education and Science and the European Commission (FEDER) under grant TEC2005-07751-C02-01. A. Pižurica is a postdoctoral research fellow of FWO, Flanders.

16.
Design and FPGA Implementation of a High-Rate Adaptive Turbo Codec   Cited by: 1 (self-citations: 1, others: 0)
An FPGA implementation scheme of a high-rate adaptive Turbo encoder/decoder is proposed. The encoding module uses a block-helical symmetric interleaver with specific parameters, so that the encoder can construct high code rates by puncturing and can drive the registers of both component encoders to the all-zero state with the same tail bits. In the SOVA decoding module, parallel computation of the accumulated path metrics for all states and parallel updating of the reliability values greatly increase the decoding speed. Simulation results show that this high-rate adaptive codec has good error performance and high practical value.

17.
A source of random message bits is to be embedded into a covertext modeled as a discrete memoryless source (DMS), resulting in a stegotext from which the embedded bits should be recoverable. A causal code for such a scenario consists of an encoder that generates the stegotext as a causal function of the message bits and the covertext, and a decoder that reproduces the message bits as a causal function of the stegotext. A semicausal code, on the other hand, has an encoder that is causal only with respect to the covertext, and not necessarily with respect to the message, and has a possibly noncausal decoder. We analyze the possible tradeoffs among: a) the distortion between the stegotext and the covertext, b) the compressibility of the stegotext, and c) the rate at which random bits are embedded, that are achievable with causal and semicausal codes, with and without attacks on the stegotext. We also study causal and semicausal codes for the private version of the above scenario in which the decoder has access to the covertext. Connections are made with the causal rate-distortion function of Neuhoff and Gilbert, as well as the problem of channel coding with causal side information at the transmitter analyzed by Shannon.

18.
Parallel decoding is required for low density parity check (LDPC) codes to achieve high decoding throughput, but it suffers from a large set of registers and complex interconnections due to randomly located 1's in the sparse parity check matrix. This paper proposes a new LDPC decoding architecture to reduce registers and alleviate complex interconnections. To reduce the number of messages to be exchanged among processing units (PUs), two data flows that can be loosely coupled are developed by allowing duplicated operations. In addition, intermediate values are grouped and stored into local storages each of which is accessed by only one PU. In order to save area, local storages are implemented using memories instead of registers. A partially parallel architecture is proposed to promote the memory usage and an efficient algorithm that schedules the processing order of the partially parallel architecture is also proposed to reduce the overall processing time by overlapping operations. To verify the proposed architecture, a 1024 bit rate-1/2 LDPC decoder is implemented using a 0.18-μm CMOS process. The decoder runs correctly at the frequency of 200 MHz, which enables almost 1 Gbps decoding throughput. Since the proposed decoder occupies an area of 10.08 mm², it is less than one-fifth the area of the previous architecture.

19.
We propose a joint source-channel coding algorithm capable of correcting some errors in the popular Lempel-Ziv'77 (LZ'77) scheme without introducing any measurable degradation in the compression performance. This can be achieved because the LZ'77 encoder does not completely eliminate the redundancy present in the input sequence. One source of redundancy can be observed when an LZ'77 phrase has multiple matches. In this case, LZ'77 can issue a pointer to any of those matches, and a particular choice carries some additional bits of information. We call a scheme with embedded redundant information the LZS'77 algorithm. We analyze the number of longest matches in such a scheme and prove that it follows the logarithmic series distribution with mean 1/h (plus some fluctuations), where h is the source entropy. Thus, the distribution associated with the number of redundant bits is well concentrated around its mean, a highly desirable property for error correction. These analytic results are proved by a combination of combinatorial, probabilistic, and analytic methods (e.g., Mellin transform, depoissonization, combinatorics on words). In fact, we analyze LZS'77 by studying the multiplicity matching parameter in a suffix tree, which in turn is analyzed via comparison to its independent version, called trie. Finally, we present an algorithm in which a channel coder (e.g., Reed-Solomon (RS) coder) succinctly uses the inherent additional redundancy left by the LZS'77 encoder to detect and correct a limited number of errors. We call such a scheme the LZRS'77 algorithm. LZRS'77 is perfectly backward-compatible with LZ'77, that is, a file compressed with our error-resistant LZRS'77 can still be decompressed by a generic LZ'77 decoder.
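The redundancy exploited here — several window positions offering the same longest match — can be illustrated with a naive match search (the search below is an illustration, not the paper's suffix-tree machinery):

```python
def longest_match_positions(window, lookahead):
    """Return (length, positions): every start in `window` where the
    longest matching prefix of `lookahead` begins."""
    best_len, positions = 0, []
    for start in range(len(window)):
        l = 0
        while (l < len(lookahead) and start + l < len(window)
               and window[start + l] == lookahead[l]):
            l += 1
        if l > best_len:
            best_len, positions = l, [start]
        elif l == best_len and l > 0:
            positions.append(start)
    return best_len, positions
```

When m positions tie for the longest match, the encoder's free choice among them can embed about floor(log2(m)) extra bits without lengthening the compressed output, which is the channel LZS'77 uses for error-protection information.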

20.
This paper introduces the functions of the TLV320AIC23 audio codec chip, the configuration of its registers, and the selection of sampling rates. Based on this analysis, it proposes replacing the DSP with a CPLD, designs the interface between the TLV320AIC23 and the CPLD, and implements the A/D and D/A conversion for the student terminal of a digital voice-lab device.
