18 similar documents found; search took 156 ms.
1.
Error Propagation Analysis in Adaptive Dictionary Compression Algorithms    Cited: 1 (self: 0, other: 1)
Building on an analysis of the LZ77 encoding and decoding principles, this paper discusses how input bit errors affect the decoding dictionary and the decompressed data, and studies the resulting error propagation. This work is significant for eliminating error propagation and ensuring data integrity.
2.
Lossless data compression systems are prone to errors during transmission; an error corrupts the code table and the reconstructed data and triggers error spreading, which limits their use in file systems and wireless communication. Targeting LZW, a lossless compression algorithm widely used in general-purpose coding, this paper analyzes and exploits the redundancy in LZW-compressed data: by selecting certain output codewords and dynamically adjusting the length of the symbol strings they encode, check codes can be carried inside the compressed stream. The resulting method, CLZW, has error-correction capability yet adds no extra data, changes neither the data format nor the coding rules, and remains compatible with standard LZW. Experiments show that files compressed with CLZW can still be decompressed by a standard LZW decoder, and that the method effectively corrects errors in LZW-compressed data.
3.
4.
丁思淼. 《电子技术与软件工程》, 2023, (4): 194-197
This paper proposes a fault-tolerant text-data recovery algorithm based on the multiple constraints of the OOXML document format. First, redundant document data is removed using the format's prior information, which narrows the algorithm's search space, reduces the influence of redundant data on recovery, and improves both accuracy and speed. Second, the channel bit error rate is determined from the signal-to-noise ratio of the transmission channel, an error subset and a set of error patterns are constructed, and the text data and its feature information are extracted after decompression with the inflate algorithm. Finally, the error-pattern set is traversed to find the best correction element, which is then written back into the original data stream at the error position. Experimental results show that the algorithm effectively improves the error recovery rate for OOXML text data corrupted by channel noise during network transmission.
5.
To select a lossless compression algorithm suitable for underwater acoustic communication data, Huffman coding and LZ77 were compared. Both algorithms were implemented in C and run on underwater acoustic communication data. A comparison of compression ratio and compression speed leads to the conclusion that, for underwater acoustic signals, Huffman coding achieves both a better compression ratio and a better compression speed; in particular, its compression speed far exceeds that of LZ77.
6.
7.
This paper studies lossless compression of text-type raw weather-radar echo data. Based on the characteristics of text-type data, a hybrid algorithm combining bitmap compression, half-byte (nibble) compression, and dual Huffman compression is designed. Compression and decompression experiments on measured data, implemented in C, show that the hybrid algorithm outperforms the general-purpose compression tools WinRAR and WinZip.
8.
This paper introduces the LZW compression and decompression algorithms and implements a simple LZW-based text compressor/decompressor in C that compresses and decompresses text file contents with good results.
9.
Channel-code analysis, i.e., the reverse engineering of coding parameters, plays an important role in cognitive communication and signal interception. To address the difficulty of recognizing codewords that have passed through a random interleaver under channel errors, this paper proposes an interleaver and code-type recognition algorithm based on searching for low-weight vectors. First, a subset of codewords is selected at random and transformed into the dual matrix, a low-weight-vector search is run, and filtering yields a partial set of valid parity-check vectors. Then, following the LDPC decoding principle, the codewords are quasi-decoded and this step is iterated with the previous one until the great majority of parity-check vectors are obtained. Finally, the mean span and the dispersion of the parity-check vectors are used to decide whether an interleaver is present and which code type was used. The algorithm overcomes the limitation of existing methods, which cannot handle randomly interleaved codewords. Simulations focus on a rate-1/2 convolutional code and the (15,11) Hamming code; at a bit error rate of 0.006 the algorithm still recognizes randomly interleaved codewords reliably.
10.
《无线电工程》, 2017, (12): 24-29
To improve the error performance and convergence rate of the conventional Alternating Direction Method of Multipliers (ADMM) penalized decoding algorithm, the penalty function is further optimized and a Three-Segment ADMM-Penalized Decoding (TS-ADMM-PD) algorithm is proposed. A transition segment, with a slope between those of the adjoining segments, is introduced to trade off decoding performance against convergence speed. The optimized penalty function has a large slope near x=0 and x=1, which strengthens the penalty on pseudocodewords, lowers the bit error rate, and accelerates convergence; its slope is small near x=0.5 so that variable-node messages there can still propagate. Simulation results show that, compared with the I-ADMM-PD algorithm, the proposed TS-ADMM-PD algorithm achieves better error performance and faster convergence.
11.
Better OPM/L Text Compression    Cited: 1 (self: 0, other: 1)
An OPM/L data compression scheme suggested by Ziv and Lempel, LZ77, is applied to text compression. A slightly modified version suggested by Storer and Szymanski, LZSS, is found to achieve compression ratios as good as most existing schemes for a wide range of texts. LZSS decoding is very fast, and comparatively little memory is required for encoding and decoding. Although the time complexity of LZ77 and LZSS encoding is O(M) for a text of M characters, straightforward implementations are very slow. The time-consuming step of these algorithms is the search for the longest string match. Here a binary search tree is used to find the longest string match, and experiments show that this results in a dramatic increase in encoding speed. The binary tree algorithm can be used to speed up other OPM/L schemes, and other applications where a longest string match is required. Although the LZSS scheme imposes a limit on the length of a match, the binary tree algorithm will work without any limit.
12.
En-Hui Yang, J. C. Kieffer. IEEE Transactions on Information Theory, 2000, 46(3): 755-777
A grammar transform is a transformation that converts any data sequence to be compressed into a grammar from which the original data sequence can be fully reconstructed. In a grammar-based code, a data sequence is first converted into a grammar by a grammar transform and then losslessly encoded. In this paper, a greedy grammar transform is first presented; this grammar transform constructs sequentially a sequence of irreducible grammars from which the original data sequence can be recovered incrementally. Based on this grammar transform, three universal lossless data compression algorithms, a sequential algorithm, an improved sequential algorithm, and a hierarchical algorithm, are then developed. These algorithms combine the power of arithmetic coding with that of string matching. It is shown that these algorithms are all universal in the sense that they can achieve asymptotically the entropy rate of any stationary, ergodic source. Moreover, it is proved that their worst-case redundancies among all individual sequences of length n are upper-bounded by c log log n/log n, where c is a constant. Simulation results show that the proposed algorithms outperform the Unix Compress and Gzip algorithms, which are based on LZ78 and LZ77, respectively.
13.
Byung S. Kim, Sun K. Yoo, Moon H. Lee. IEEE Transactions on Information Technology in Biomedicine, 2006, 10(1): 77-83
The delay performance of compression algorithms is particularly important when time-critical data transmission is required. In this paper, we propose a wavelet-based electrocardiogram (ECG) compression algorithm with a low delay property for instantaneous, continuous ECG transmission suitable for telecardiology applications over a wireless network. The proposed algorithm reduces the frame size as much as possible to achieve a low delay, while maintaining reconstructed signal quality. To attain both low delay and high quality, it employs waveform partitioning, adaptive frame size adjustment, wavelet compression, flexible bit allocation, and header compression. The performances of the proposed algorithm in terms of reconstructed signal quality, processing delay, and error resilience were evaluated using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) and Creighton University Ventricular Tachyarrhythmia (CU) databases and a code division multiple access-based simulation model with mobile channel noise.
14.
15.
16.
17.
For the Vandermonde-array erasure-correcting code, this paper reviews the erasure encoding and decoding process, describes the Vandermonde encoding and decoding algorithms in detail, and proposes fast algorithms suited to embedded implementations. A digital satellite broadcasting (DVB-S) model was built in Matlab and channel errors were simulated to obtain the error distribution of the DVB-S system and to assess the feasibility of using Vandermonde erasure codes in it. A model cascading the erasure code with the system's error-correcting code is then proposed and its error-correction performance simulated; the results show that the cascaded model greatly improves the reliability of the wireless transmission system.