Similar Articles
 20 similar articles found (search time: 31 ms)
1.
Non-Uniform Two-Segment Error Protection Coding for LZW-Compressed Data   Cited by: 3 (self-citations: 1, other citations: 2)
Tang Hong, Xu Hongchuan 《电讯技术》 (Telecommunication Engineering), 2002, 42(1): 86-90
The LZW data compression method proposed by T. A. Welch has been widely applied to text compression. However, because bit errors are highly damaging to compressed data, error-protection coding of the compressed stream is important and indispensable. This paper analyzes the effect of bit errors on LZW-compressed data and shows that the same error does far more damage in the early portion of the stream, which is used to rebuild the dictionary, than in the data that follows. A non-uniform two-segment error-protection coding method is therefore proposed that protects the dictionary-rebuilding portion more strongly. Computer simulations show that this method is more effective than conventional error-protection schemes.
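The asymmetry the abstract describes can be illustrated with a toy LZW codec (a minimal sketch, not the authors' coder): because the decoder rebuilds its dictionary from earlier codes, flipping an early code desynchronizes everything that follows, while flipping a late code damages only the tail.

```python
# Toy LZW codec over bytes; the dictionary starts with all 256 single-byte
# strings and grows by one phrase per emitted code.
def lzw_encode(data: bytes) -> list:
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)          # dictionary grows as we encode
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

def lzw_decode(codes) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for c in codes[1:]:
        entry = table.get(c, w + w[:1])     # KwKwK / unknown-code fallback
        out.append(entry)
        table[len(table)] = w + entry[:1]   # decoder rebuilds the dictionary
        w = entry
    return b"".join(out)

data = b"ab" * 50
codes = lzw_encode(data)

# Flip one code early vs late in the stream.
early = list(codes); early[1] ^= 1
late = list(codes); late[-2] ^= 1
```

Decoding `early` diverges from the original data right after the first phrase, while `late` still matches almost to the end, which is why the dictionary-rebuilding prefix deserves stronger protection.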

2.
The Ziv-Lempel compression algorithm is a string matching and parsing approach to data compression. The symbolwise equivalent for parsing models has been defined by Rissanen and Langdon and gives the same ideal codelength at the same cost in coding parameters. By describing the context and coding parameter for each symbol, an insight is provided into how the Ziv-Lempel method achieves compression. This treatment does not employ a probabilistic source for the data string. The Ziv-Lempel method effectively counts symbol instances within parsed phrases. The coding parameter for each symbolwise context is determined by cumulative count ratios. Under the symbolwise equivalent, the code string length increase for a symbol y following substring s is the log of the ratio of the node counts of subtrees s and s·y of the Ziv-Lempel parsing tree. To demonstrate the symbolwise equivalent of the Ziv-Lempel algorithm, we extend the work of Rissanen and Langdon to incomplete parse trees. The result requires the proper handling of the comma when one phrase is the prefix of another phrase.
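A toy sketch of the parse-tree counting idea (hypothetical code with simplified count bookkeeping, not Rissanen and Langdon's formal construction): build the Ziv-Lempel parse tree incrementally and read off the symbolwise codelength of y in context s as the log-ratio of node counts.

```python
import math

# Incrementally build a Ziv-Lempel (LZ78-style) parse tree: each phrase is
# the longest previously parsed phrase plus one new symbol, and every node
# counts how many phrases pass through it.
def parse_tree(data: str) -> dict:
    root = {"count": 0, "kids": {}}
    node = root
    for ch in data:
        node["count"] += 1
        if ch in node["kids"]:
            node = node["kids"][ch]
        else:
            node["kids"][ch] = {"count": 1, "kids": {}}  # phrase ends here
            node = root
    return root

# Symbolwise view: the codelength for symbol y at node s is the log of the
# ratio of the node counts of s and its child s.y.
def codelength(node: dict, y: str) -> float:
    return math.log2(node["count"] / node["kids"][y]["count"])

tree = parse_tree("aab")
```

For the string "aab", node "a" has count 2 and its child "ab" has count 1, so the symbol "b" in context "a" costs log2(2/1) = 1 bit under this toy model.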

3.
Existing compression algorithms pursue efficient information transfer by increasing complexity to lower the compression ratio. This paper proposes an array-configuration speedup model, proves that a lower compression ratio does not necessarily improve transmission efficiency, and identifies the factors that actually govern it: the throughput of the decompression module and the per-block compression ratio. Combining these factors with the characteristics of configuration data, a new lossless compression algorithm is designed, and its decompression module is implemented in hardware with a throughput of up to 16.1 Gbps. The algorithm is tested on AES, A5-1, and SM4 configurations and compared with the mainstream lossless algorithms LZW, Huffman, LPAQ1, and arithmetic coding. The results show comparable overall compression ratios, but the proposed algorithm's optimized per-block ratios both satisfy the acceleration requirement and deliver high-throughput decompression; the configuration speedup it achieves is roughly 8%, 9%, 10%, and 22% higher than that of LPAQ1, arithmetic coding, Huffman, and LZW, respectively, even assuming ideal hardware throughput for those algorithms.

4.
Lossless data compression systems are vulnerable to errors during transmission, which corrupt the code table and the reconstructed data and cause error propagation, limiting their use in file systems and wireless communication. Targeting LZW, a lossless compression algorithm widely used in general-purpose coding, this paper analyzes and exploits the redundancy in LZW-compressed data: by selecting certain output codewords and dynamically adjusting the lengths of the symbol strings they compress, check codes can be embedded without adding extra data, changing the data format, or altering the coding rules. The resulting error-correcting lossless compression method, CLZW, remains compatible with standard LZW. Experiments show that files compressed with this method can still be decompressed by a standard LZW decoder, and that the method effectively corrects bit errors in LZW-compressed data.
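One concrete form of the redundancy in LZW-compressed data that schemes like CLZW can exploit is sketched below (an illustration of the redundancy, not the CLZW construction itself): at any point in the stream, the encoder could only have emitted a code no larger than the current dictionary size, so any larger value proves a transmission error.

```python
# Scan an LZW code stream for codes the encoder could not have produced.
# Before decoding the i-th code the dictionary holds 256 + (i - 1) entries,
# and a code equal to the current size is the special KwKwK case, so
# anything above that is impossible and must be an error.
def detect_impossible_codes(codes, alphabet=256):
    bad, size = [], alphabet
    for i, c in enumerate(codes):
        limit = size if i == 0 else size + 1   # first code must be a literal
        if c >= limit:
            bad.append(i)
        if i > 0:
            size += 1                          # one entry added per code
    return bad
```

For example, the stream `[97, 98, 256]` is consistent, while `[97, 98, 400]` is flagged at position 2 because only codes up to 257 could legally appear there.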

5.
Engineering practice requires data to be compressed before remote transmission. This paper introduces the LZW algorithm and improves on the theoretical version by adopting a linked-list structure, making it suitable for implementation on hardware resources. To speed up code-table searches, an address-based lookup table is designed on an FPGA, allowing the compression algorithm to be applied in engineering practice.

6.
A novel algorithm for text compression is developed which consists of two parts. In the first part the text sample is encoded by the Ziv-Lempel asymptotic structured codebook. In the second part decoding is carried out by a novel rule based algorithm which improves the compression ratio for a given text sample by up to 50% when compared to the full text search as required by the Ziv-Lempel decoding algorithm.

7.
In this paper, we propose a new two-stage hardware architecture that combines the features of both the parallel dictionary LZW (PDLZW) and an approximated adaptive Huffman (AH) algorithm. In this architecture, an ordered list instead of the tree-based structure is used in the AH algorithm to speed up the compression data rate. The resulting architecture not only outperforms the AH algorithm at only one-fourth the hardware cost but is also competitive with the performance of the LZW algorithm (compress). In addition, both the compression and decompression rates of the proposed architecture exceed those of the AH algorithm, even when the latter is realized in software.
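The ordered-list replacement for the Huffman tree can be sketched as follows (a software toy, not the paper's hardware design): symbols are kept sorted by running count, a symbol is transmitted as its current list position, and frequent symbols bubble toward the front where the indices are small.

```python
# Approximate adaptive coding with an ordered list: emit each symbol's
# current position, then bubble it forward while its count exceeds its
# predecessor's, so hot symbols earn small indices over time.
def adaptive_list_encode(data, alphabet):
    order = list(alphabet)
    counts = {s: 0 for s in alphabet}
    out = []
    for s in data:
        i = order.index(s)
        out.append(i)
        counts[s] += 1
        while i > 0 and counts[order[i]] > counts[order[i - 1]]:
            order[i], order[i - 1] = order[i - 1], order[i]
            i -= 1
    return out
```

Encoding "ccc" over the alphabet "abc" yields [2, 0, 0]: the first occurrence of 'c' costs index 2, but after one update 'c' has moved to the front and later occurrences cost index 0. Unlike true adaptive Huffman, the list update is a simple swap chain, which is what makes a fast hardware realization attractive.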

8.
In this paper, a parallel dictionary based LZW algorithm called the PDLZW algorithm and its hardware architecture for compression and decompression processors are proposed. In this architecture, instead of a unique fixed-word-width dictionary, a hierarchical variable-word-width dictionary set containing several dictionaries of small address space and increasing word widths is used for both compression and decompression. The results show that the new architecture not only can be easily implemented in VLSI technology because of its high regularity but also achieves faster compression and decompression rates, since it no longer needs to search the dictionary recursively as conventional implementations do.
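The hierarchical dictionary set can be simulated in software as follows (a sketch with illustrative bank sizes, not the paper's VLSI design, in which the banks are probed in parallel rather than in a loop):

```python
# PDLZW-style dictionary set: one small dictionary per phrase length
# instead of a single large one, so hardware can probe every bank at once.
# Bank sizes here are illustrative, not the paper's.
BANK_SIZES = {2: 64, 3: 32, 4: 16}

def pdlzw_encode(data: bytes):
    banks = {k: {} for k in BANK_SIZES}
    out, i = [], 0
    while i < len(data):
        for k in sorted(BANK_SIZES, reverse=True):   # longest match wins
            chunk = bytes(data[i:i + k])
            if len(chunk) == k and chunk in banks[k]:
                out.append((k, banks[k][chunk]))
                break
        else:
            k = 1
            out.append((1, data[i]))                 # unmatched literal byte
        nk = k + 1                                   # learn a longer phrase
        phrase = bytes(data[i:i + nk])
        if (nk in BANK_SIZES and len(phrase) == nk
                and phrase not in banks[nk]
                and len(banks[nk]) < BANK_SIZES[nk]):
            banks[nk][phrase] = len(banks[nk])
        i += k
    return out
```

Each output token is a (bank, address) pair; because every bank has a small, fixed address space, no recursive search over one big dictionary is needed, which is the source of the speedup claimed above.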

9.
We propose a new class of methods for VLIW code compression using variable-sized branch blocks with self-generating tables. Code compression traditionally works on fixed-sized blocks with its efficiency limited by their small size. A branch block, a series of instructions between two consecutive possible branch targets, provides larger blocks for code compression. We compare three methods for compressing branch blocks: table-based, Lempel-Ziv-Welch (LZW)-based and selective code compression. Our approaches are fully adaptive and generate the coding table on-the-fly during compression and decompression. When encountering a branch target, the coding table is cleared to ensure correctness. Decompression requires a simple table lookup and updates the coding table when necessary. When decoding sequentially, the table-based method produces 4 bytes per iteration while the LZW-based methods provide 8 bytes peak and 1.82 bytes average decompression bandwidth. Compared to Huffman's 1 byte and variable-to-fixed (V2F)'s 13-bit peak performance, our methods have higher decoding bandwidth and a comparable compression ratio. Parallel decompression could also be applied to our methods, which is more suitable for VLIW architectures.

10.
A real-time lossless data compression system for an optometry instrument, based on an FPGA and the LZW algorithm, is presented. A comparison of common lossless compression algorithms shows that LZW offers a good balance of real-time performance, implementation complexity, memory requirements, compression effectiveness, and range of applicable scenarios, so it was chosen for hardware implementation. The system consists of real-time lossless compression hardware, test software, decompression software, and readout software. The hardware comprises data acquisition, data compression, control, data storage, and power management units built around an FPGA: the FPGA's internal RAM provides the input data buffer and the two dictionary memories required by LZW, and a dictionary-management strategy suited to hardware completes the real-time lossless compression, while the FPGA also controls the A/D converter and flash memory. Results show that the design uses few logic resources, is highly portable and easy to extend, improves data storage and transmission efficiency by 20%, and reduces cost by 13%.

11.
FPGA Implementation of an Improved LZW Compression Algorithm   Cited by: 1 (self-citations: 0, other citations: 1)
Zhao Shuanglong, Hao Yongsheng 《现代电子技术》 (Modern Electronics Technique), 2011, 34(3): 110-111, 114
With the development of real-time monitoring systems, high-capacity, high-speed data acquisition and transmission technology continues to advance. Hardware implementations of data transmission are fast but hard to adapt for data processing, while software can realize many algorithms but is noticeably slower. This paper adopts the LZW compression algorithm and an improved variant and implements them on an FPGA. Simulation verifies the correctness of the design and shows improved data transmission speed.

12.
Results concerning the celebrated Ziv-Lempel sequence compression algorithm are revisited using a rather intuitive approach. Previously formalized ideas are also presented, along with extensions of these results to the compression of two-dimensional data.

13.
Context modeling is widely used in image coding to improve the compression performance. However, with no special treatment, the expected compression gain will be cancelled by the model cost introduced by high order context models. Context quantization is an efficient method to deal with this problem. In this paper, we analyze the general context quantization problem in detail and show that context quantization is similar to a common vector quantization problem. If a suitable distortion measure is defined, the optimal context quantizer can be designed by a Lloyd style iterative algorithm. This context quantization strategy is applied to an embedded wavelet coding scheme in which the significance map symbols and sign symbols are directly coded by arithmetic coding with context models designed by the proposed quantization algorithm. Good coding performance is achieved.

14.
15.
In this paper, dynamic control approaches for spectrum sensing are proposed, based on the theory that prediction is synonymous with data compression in computational learning. First, a spectrum sensing sequence prediction scheme is proposed to reduce the spectrum sensing time and improve the throughput of secondary users; the scheme is designed with the Ziv-Lempel data compression algorithm, using the spectrum band usage history. In addition, an iterative algorithm is proposed to find the optimal number of spectrum bands allowed to be sensed, with the aim of maximizing the expected net reward of each secondary user in each time slot. Finally, extensive simulation results demonstrate the effectiveness of the proposed dynamic control approaches.
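The compression-prediction link can be sketched with a toy LZ78-style predictor over a binary busy/idle channel history (hypothetical code, not the paper's scheme): parse the history into phrases, then predict the most frequently observed continuation of the current partial phrase.

```python
# LZ78-flavoured predictor: parse the 0/1 channel history into phrases,
# counting visits to each tree node; after parsing, `node` sits at the
# current partial phrase, and the most-visited child is the prediction
# for the next time slot.
def lz78_predict(history: str) -> str:
    root = {}
    node = root
    for s in history:
        if s in node:
            node[s][0] += 1      # revisit: bump the count and descend
            node = node[s][1]
        else:
            node[s] = [1, {}]    # phrase ends: new leaf, restart at root
            node = root
    candidates = node if node else root
    return max(candidates, key=lambda s: candidates[s][0])
```

On a perfectly periodic history such as "0101010101" the predictor learns the alternation and forecasts "0" for the next slot; a secondary user could then sense the band it expects to be idle first.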

16.
Adaptive lossy LZW algorithm for palettised image compression   Cited by: 4 (self-citations: 0, other citations: 4)
Chiang S.W., Po L.M. Electronics Letters, 1997, 33(10): 852-854
An adaptive lossy LZW algorithm is proposed for palettised image compression; it is a generalisation of the conventional lossless LZW algorithm. The new algorithm employs an adaptive thresholding mechanism based on human visual characteristics to constrain the distortion. With this distortion control, compression efficiency is increased by ~40% for natural colour images while maintaining good subjective quality in the reconstructed image. In addition, the encoded image file format is compatible with the original GIF decoder.
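A minimal sketch of the lossy-matching idea (assuming a 1-D grayscale palette and a fixed threshold for simplicity; the letter itself uses an adaptive, visually derived threshold): when extending the current phrase, any dictionary continuation whose palette value is within the threshold of the input pixel is accepted, so the encoder can reuse longer phrases, and the output remains decodable by a standard LZW decoder.

```python
# Lossy LZW over palette indices: a match may substitute a visually close
# palette entry (|palette difference| <= threshold), which lengthens the
# phrases the encoder can reuse.  With threshold 0 it is plain LZW.
def lossy_lzw_encode(pixels, palette, threshold):
    table = {(i,): i for i in range(len(palette))}
    w, out = (), []
    for px in pixels:
        near = sorted(range(len(palette)),
                      key=lambda c: abs(palette[c] - palette[px]))
        best = None
        for c in near:                      # closest colour tried first
            if abs(palette[c] - palette[px]) > threshold:
                break
            if w + (c,) in table:
                best = c
                break
        if best is not None:
            w = w + (best,)                 # extend the phrase, maybe lossily
        else:
            out.append(table[w])
            table[w + (px,)] = len(table)
            w = (px,)
    if w:
        out.append(table[w])
    return out
```

With a nonzero threshold, nearby palette values are merged into existing phrases, shortening the code stream at the cost of bounded per-pixel distortion.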

17.
Xu Li 《电子测试》 (Electronic Test), 2014, (10): 23-25
Among the many compression coding techniques for digital images, bit-shift compression coding is based on dynamic programming, which can efficiently solve problems that many other algorithms cannot. When dynamic programming is applied to a practical problem, each of the interrelated overlapping subproblems is solved only once and its state is stored in a two-dimensional table; identical or similar subproblems can then read their results directly from the table, reducing repeated work and improving efficiency.

18.
Context modeling is an extensively studied paradigm for lossless compression of continuous-tone images. However, without careful algorithm design, high-order Markovian modeling of continuous-tone images is too expensive in both computational time and space to be practical. Furthermore, the exponential growth of the number of modeling states in the order of a Markov model can quickly lead to the problem of context dilution; that is, an image may not have enough samples for good estimates of conditional probabilities associated with the modeling states. New techniques for context modeling of DPCM errors are introduced that can exploit context-dependent DPCM error structures to the benefit of compression. New algorithmic techniques of forming and quantizing modeling contexts are also developed to alleviate the problem of context dilution and reduce both time and space complexities. By innovative formation, quantization, and use of modeling contexts, the proposed lossless image coder has a highly competitive compression performance and yet remains practical.

19.
To improve image compression ratio and quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast-sensitivity characteristics of human vision with the spectral features of the image in the transform domain. The adaptive table replaces the standard JPEG quantization table, and compression experiments following the JPEG coding procedure were carried out on three different color images and compared with standard JPEG compression. The results show that, at the same compression ratio, adaptive quantization raises the average SSIM and PSNR of the three decompressed color images by 1.67% and 4.96%, respectively, indicating that the proposed visually adaptive quantization is an effective and practical quantization method.

20.
Compressing data is an effective way to reduce storage and transmission costs. For integer sequences with small mean-square values, this paper proposes a new bit-reorganization marker coding method for lossless data compression. The method first reorganizes the bits of the integer sequence to raise the probability of certain data patterns, then adaptively selects a suitable coding mode for the data stream according to the local probability distribution of the data. Compression and decompression tests on real integer sequences with small mean-square values compare this method with several other lossless compression methods. The results show that the new method achieves lossless compression and decompression and outperforms LZW coding, classical arithmetic coding, the general-purpose WinRAR software, and the professional audio compressor FLAC, indicating good application prospects.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号