Similar Articles
20 similar articles found
1.
A new on-line universal lossy data compression algorithm is presented. For finite memoryless sources with unknown statistics, its performance asymptotically approaches the fundamental rate-distortion limit. The codebook is generated on the fly and continuously adapted by simple rules. There is no separate codebook training or codebook transmission. Candidate codewords are randomly generated according to an arbitrary and possibly suboptimal distribution. Through a carefully designed “gold washing” or “information-theoretic sieve” mechanism, good codewords and only good codewords are promoted to permanent status with high probability. We also determine the rate at which our algorithm approaches the fundamental limit.
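A minimal Python sketch of the promote-good-codewords idea described above, assuming squared-error distortion and uniform random candidate generation; the function name `gold_washing_sketch` and the simple hit-count promotion rule are illustrative stand-ins for the paper's actual gold-washing rules, not the algorithm itself:

```python
import numpy as np

def gold_washing_sketch(source_blocks, n_codewords, promote_after=8, rng=None):
    """Illustrative 'sieve'-style adaptive VQ: a randomly generated candidate
    codeword is promoted to permanent status only after it repeatedly wins the
    nearest-codeword competition; unused candidates are discarded."""
    rng = np.random.default_rng(rng)
    dim = source_blocks.shape[1]
    permanent = []                       # promoted ("good") codewords
    candidate = rng.random(dim)          # current randomly generated candidate
    hits = 0                             # times the current candidate won
    encoded = []
    for x in source_blocks:
        pool = permanent + [candidate]
        dists = [np.sum((x - c) ** 2) for c in pool]
        best = int(np.argmin(dists))
        encoded.append(best)
        if best == len(pool) - 1:        # the candidate won this block
            hits += 1
            if hits >= promote_after and len(permanent) < n_codewords:
                permanent.append(candidate)          # promote to permanent status
                candidate, hits = rng.random(dim), 0
        elif hits == 0:
            candidate = rng.random(dim)  # never used: discard and draw a new one
    return encoded, permanent
```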

2.
For pt. II see ibid., vol. 42, p. 822-36 (1996). The Gold-washing data compression algorithm is an adaptive vector quantization algorithm with vector dimension n. In this paper, a redundancy problem of the Gold-washing data compression algorithm is considered. It is demonstrated that for any memoryless source with finite alphabet A and generic distribution p and for any R>0, the redundancy of the Gold-washing data compression algorithm with dimension n (defined as the difference between the average performance of the algorithm and the distortion-rate function D(p,R) of p) is upper-bounded by |∂D(p,R)/∂R|·((|A|+2ξ+4) log n)/(2n) + o(log n/n), where ∂D(p,R)/∂R is the partial derivative of D(p,R) with respect to R, |A| is the cardinality of A, and ξ>0 is a parameter used to control the threshold in the Gold-washing algorithm. In connection with the results of Zhang, Yang, and Wei (see ibid., vol. 43, no. 1, p. 71-91, 1997) on the redundancy of lossy source coding, this shows that the Gold-washing algorithm has the optimal convergence rate among all adaptive finite-state vector quantizers.
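Written in display form (reconstructed from the garbled inline expression; the grouping of the log n factor is an assumption consistent with the trailing o(log n/n) term), the stated redundancy bound reads:

```latex
\text{redundancy} \;\le\;
\left|\frac{\partial D(p,R)}{\partial R}\right|
\cdot \frac{(|A| + 2\xi + 4)\,\log n}{2n}
\;+\; o\!\left(\frac{\log n}{n}\right)
```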

3.
4.
Two universal lossy data compression schemes, one with fixed rate and the other with fixed distortion, are presented, based on the well-known Lempel-Ziv algorithm. In the case of fixed rate R, the universal lossy data compression scheme works as follows: first pick a codebook Bn consisting of all reproduction sequences of length n whose Lempel-Ziv codeword length is ≤ nR, and then use Bn to encode the entire source sequence n-block by n-block. This fixed-rate data compression scheme is universal in the sense that for any stationary, ergodic source or for any individual sequence, the sample distortion performance as n→∞ is given almost surely by the distortion-rate function. A similar result is shown in the context of fixed-distortion lossy source coding.
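A toy Python sketch of the fixed-rate construction, assuming a binary reproduction alphabet and Hamming distortion; `lz78_code_length` uses the standard c·(log2 c + 1) phrase-count approximation for the Lempel-Ziv codeword length, and the exhaustive codebook enumeration is only feasible for very small n:

```python
import itertools
from math import log2
import numpy as np

def lz78_code_length(seq):
    """Approximate LZ78 codeword length in bits: c phrases cost roughly
    c*(log2(c)+1) bits (dictionary index plus one new symbol per phrase)."""
    dictionary, phrase, c = set(), (), 0
    for s in seq:
        phrase += (s,)
        if phrase not in dictionary:
            dictionary.add(phrase)
            c += 1
            phrase = ()
    c += bool(phrase)
    return c * (log2(c) + 1) if c else 0.0

def fixed_rate_codebook(n, R):
    """B_n: all binary reproduction sequences of length n whose (approximate)
    Lempel-Ziv codeword length is <= n*R.  Exponential in n; sketch only."""
    return [np.array(b) for b in itertools.product((0, 1), repeat=n)
            if lz78_code_length(b) <= n * R]

def encode_blocks(source, codebook, n):
    """Encode the source n-block by n-block with the minimum-Hamming-distortion
    codeword (assumes B_n is non-empty)."""
    return [min(range(len(codebook)),
                key=lambda j: int(np.sum(np.asarray(source[i:i + n]) != codebook[j])))
            for i in range(0, len(source) - n + 1, n)]
```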

5.
6.
This paper presents a simple but effective algorithm to speed up the codebook search in a vector quantization scheme for SAR raw data when a minimum square error (MSE) criterion is used. A considerable reduction in the number of operations is achieved.
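The abstract does not spell out the search technique, so the sketch below illustrates a generic fast MSE codebook search via partial distance elimination, a standard way to cut the operation count (not necessarily the authors' method):

```python
def pde_nearest_codeword(x, codebook):
    """Partial-distance-elimination search: abandon the running sum of squared
    differences as soon as it exceeds the best distance found so far, which
    skips most of the multiply/accumulate operations of a full search."""
    best_idx, best_dist = 0, float("inf")
    for idx, c in enumerate(codebook):
        dist = 0.0
        for xi, ci in zip(x, c):
            dist += (xi - ci) ** 2
            if dist >= best_dist:          # early exit: cannot beat current best
                break
        else:
            best_idx, best_dist = idx, dist
    return best_idx, best_dist
```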

7.
A universal algorithm for sequential data compression   (total citations: 12; self-citations: 0; citations by others: 12)
A universal algorithm for sequential data compression is presented. Its performance is investigated with respect to a nonprobabilistic model of constrained sources. The compression ratio achieved by the proposed universal code uniformly approaches the lower bounds on the compression ratios attainable by block-to-variable codes and variable-to-block codes designed to match a completely specified source.
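For reference, a compact Python sketch of the greedy sliding-window (LZ77-style) parse usually associated with this algorithm; (offset, length, next symbol) triples are emitted raw, and entropy coding of the triples is omitted:

```python
def lz77_parse(data, window=4096, max_len=255):
    """Greedy sliding-window parse: for each position, find the longest match
    in the preceding window and emit an (offset, length, next_symbol) triple."""
    out, i = [], 0
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            length = 0
            while (length < max_len and i + length < len(data)
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        nxt = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out
```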

8.
A new lossy variant of the fixed-database Lempel-Ziv coding algorithm for encoding at a fixed distortion level is proposed, and its asymptotic optimality and universality for memoryless sources (with respect to bounded single-letter distortion measures) is demonstrated: as the database size m increases to infinity, the expected compression ratio approaches the rate-distortion function. The complexity and redundancy characteristics of the algorithm are comparable to those of its lossless counterpart. A heuristic argument suggests that the redundancy is of order (log log m)/log m, and this is also confirmed experimentally; simulation results are presented that agree well with this rate. Also, the complexity of the algorithm is seen to be comparable to that of the corresponding lossless scheme. We show that there is a tradeoff between compression performance and encoding complexity, and we discuss how the relevant parameters can be chosen to balance this tradeoff in practice. We also discuss the performance of the algorithm when applied to sources with memory, and extensions to the cases of unbounded distortion measures and infinite reproduction alphabets.
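A simplified Python sketch of the matching step, assuming Hamming distortion and a plain linear scan over the fixed database; the function name and the literal fallback are illustrative, and the pointer coding analysed in the paper is omitted:

```python
def lossy_fdlz_encode(source, database, D):
    """Greedy lossy fixed-database LZ step: find the longest database substring
    whose per-letter Hamming distortion against the upcoming source phrase is
    <= D and emit a (position, length) pointer; fall back to a literal when no
    admissible match exists."""
    m, out, i = len(database), [], 0
    while i < len(source):
        best_pos, best_len = -1, 0
        for pos in range(m):
            length = mism = 0
            while pos + length < m and i + length < len(source):
                new_mism = mism + (source[i + length] != database[pos + length])
                if new_mism > D * (length + 1):     # distortion budget exceeded
                    break
                mism, length = new_mism, length + 1
            if length > best_len:
                best_pos, best_len = pos, length
        if best_len == 0:
            out.append(("literal", source[i]))      # no admissible match
            i += 1
        else:
            out.append(("match", best_pos, best_len))
            i += best_len
    return out
```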

9.
Adaptive lossy LZW algorithm for palettised image compression   (total citations: 4; self-citations: 0; citations by others: 4)
Chiang S.W., Po L.M. Electronics Letters, 1997, 33(10): 852-854
An adaptive lossy LZW algorithm is proposed for palettised image compression; it generalises the conventional lossless LZW algorithm. The new algorithm employs an adaptive thresholding mechanism based on human visual characteristics to constrain the distortion. With this distortion control, the compression efficiency is increased by ~40% for natural colour images while maintaining good subjective quality in the reconstructed image. In addition, the encoded image file format is compatible with the original GIF decoder.
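A Python sketch of the lossy-LZW idea, assuming a fixed colour-distance threshold in place of the paper's adaptive, visually weighted threshold; the dictionary handling follows ordinary GIF-style LZW:

```python
def lossy_lzw_encode(pixels, palette, threshold):
    """Lossy LZW over palette indices: standard LZW, except that when extending
    the current phrase the next pixel may be replaced by an already-known
    continuation whose palette colour is within `threshold` (squared RGB
    distance).  Output is a GIF-style stream of dictionary codes."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(palette[a], palette[b]))

    table = {(s,): s for s in range(len(palette))}   # single-symbol phrases
    next_code, out, w = len(palette), [], ()
    for k in pixels:
        if w + (k,) in table:                        # exact extension first
            w = w + (k,)
            continue
        close = [s for s in range(len(palette))
                 if w + (s,) in table and dist(s, k) <= threshold]
        if close:                                    # lossy substitution
            w = w + (min(close, key=lambda s: dist(s, k)),)
        else:
            out.append(table[w])
            table[w + (k,)] = next_code
            next_code += 1
            w = (k,)
    if w:
        out.append(table[w])
    return out
```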

10.
A universal data compression system   (total citations: 6; self-citations: 0; citations by others: 6)
A universal data compression algorithm is described which is capable of compressing long strings generated by a "finitely generated" source, with a near optimum per symbol length without prior knowledge of the source. This class of sources may be viewed as a generalization of Markov sources to random fields. Moreover, the algorithm does not require a working storage much larger than that needed to describe the source generating parameters.

11.
A scheme for compressing computer data by treating them as sequences of bytes is described. For each individual sequence to be compressed, a source model of predetermined complexity is built adaptively starting from a memoryless model. An alphabet reduction technique which permits handling of each bit within a byte separately is introduced. Variable-order Markov contexts are generated for each bit within a byte by a process of selective context splitting. The selection of a context for splitting is based on the context's probability as well as the bit entropy under the context. Estimation of bit statistics under the different contexts is made adaptively and encoding is accomplished by an arithmetic code. The scheme allows the complexity of the source model, and thereby the compression performance, to be altered easily. Experiments on typical computer files show that the present scheme, at a moderate complexity level, often outperforms some of the existing schemes.
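A small Python sketch of the stated splitting criterion only (context probability times bit entropy under the context); the surrounding context-tree maintenance, alphabet reduction, and arithmetic coder are omitted:

```python
from math import log2

def bit_entropy(n0, n1):
    """Empirical bit entropy from zero/one counts."""
    n = n0 + n1
    if n == 0 or n0 == 0 or n1 == 0:
        return 0.0
    p = n1 / n
    return -(p * log2(p) + (1 - p) * log2(1 - p))

def pick_context_to_split(contexts, total):
    """Score each context by its probability of occurrence times the bit
    entropy under it, and return the highest-scoring one as the next split
    candidate.  `contexts` maps a context id to its (n0, n1) counts."""
    def score(counts):
        n0, n1 = counts
        return ((n0 + n1) / total) * bit_entropy(n0, n1)
    return max(contexts, key=lambda c: score(contexts[c]))
```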

12.
In this article, low-density parity-check (LDPC) codes are applied to lossy source coding, and we study how the asymptotic performance of MacKay's (see ibid., vol. 45, p. 399-431, Mar. 1999, and vol. 47, p. 2101, July 2001) LDPC codes depends on the sparsity of the parity-check matrices when coding a binary independent and identically distributed (i.i.d.) source with Pr{x=1}=0.5. It is then shown that an LDPC code with column weight O(log n) for code length n can attain the rate-distortion function asymptotically.

13.
Lossy compression schemes are often desirable in many signal processing applications such as the compression of ECG data. This paper presents a relaxation of a provably good algorithm for lossy signal compression, based on the piecewise linear approximation of functions. The algorithm approximates the data within a given tolerance using a piecewise linear function. The paper also describes an architecture suitable for the single-chip implementation of the proposed algorithm. The design consists of a control unit, two multiply/divide units, four adder/subtracter units, and an I/O interface unit. For uniformly sampled data, no division is required, and all operations can be completed in a pipelined manner in at most three cycles per sample point. The corresponding simplified architecture is also presented.
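A greedy Python sketch of piecewise-linear approximation within a tolerance, assuming strictly increasing sample times, shown only to illustrate the data flow; it is a simple stand-in, not the paper's provably good algorithm or its relaxation:

```python
import numpy as np

def piecewise_linear_compress(t, y, tol):
    """Keep extending the current segment while every interior sample stays
    within `tol` of the chord joining the segment endpoints; store only the
    segment endpoints."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    breakpoints, start = [0], 0
    for end in range(2, len(t)):
        # deviation of interior points from the chord (start, end)
        slope = (y[end] - y[start]) / (t[end] - t[start])
        pred = y[start] + slope * (t[start + 1:end] - t[start])
        if np.max(np.abs(pred - y[start + 1:end])) > tol:
            breakpoints.append(end - 1)      # close segment at previous sample
            start = end - 1
    breakpoints.append(len(t) - 1)
    return [(t[i], y[i]) for i in breakpoints]   # endpoints of the approximation
```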

14.
Because motion information is insufficient during background updating, slowly moving objects and objects with only local motion are often misclassified. To address this problem, an adaptive codebook-model background-update algorithm is proposed that extracts the global spatial information of moving objects. The method analyses changes in the spatial information of moving objects to find potential background within the foreground, and then combines per-pixel temporal statistics to obtain the true background model. Experimental results show that the algorithm adapts quickly to background changes, clearly reduces misclassification of objects with insufficient motion information, and preserves the completeness of detected objects.
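A simplified per-pixel Python sketch of a codebook background model in which stable foreground codewords are promoted into the background; the paper's object-level spatial analysis is replaced here by a plain hit-count rule, and the class and parameter names are illustrative:

```python
class PixelCodebook:
    """Per-pixel greyscale codebook background model.  Each codeword stores an
    intensity range and a hit count; a foreground codeword that keeps matching
    for `promote_after` frames is treated as potential background and merged
    into the background model."""
    def __init__(self, eps=10, promote_after=50):
        self.eps, self.promote_after = eps, promote_after
        self.background, self.candidates = [], []   # lists of [low, high, hits]

    def update(self, v):
        """Classify intensity v for this pixel; return True if foreground."""
        for cw in self.background:
            if cw[0] - self.eps <= v <= cw[1] + self.eps:
                cw[0], cw[1] = min(cw[0], v), max(cw[1], v)
                return False
        for cw in self.candidates:
            if cw[0] - self.eps <= v <= cw[1] + self.eps:
                cw[0], cw[1], cw[2] = min(cw[0], v), max(cw[1], v), cw[2] + 1
                if cw[2] >= self.promote_after:      # stable long enough
                    self.background.append([cw[0], cw[1], 0])
                    self.candidates.remove(cw)
                return True
        self.candidates.append([v, v, 1])
        return True
```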

15.
This paper presents a contrast-based quantization strategy for use in lossy wavelet image compression that attempts to preserve visual quality at any bit rate. Based on the results of recent psychophysical experiments using near-threshold and suprathreshold wavelet subband quantization distortions presented against natural-image backgrounds, subbands are quantized such that the distortions in the reconstructed image exhibit root-mean-squared contrasts selected based on image, subband, and display characteristics and on a measure of total visual distortion so as to preserve the visual system's ability to integrate edge structure across scale space. Within a single, unified framework, the proposed contrast-based strategy yields images which are competitive in visual quality with results from current visually lossless approaches at high bit rates and which demonstrate improved visual quality over current visually lossy approaches at low bit rates. This strategy operates in the context of both nonembedded and embedded quantization, the latter of which yields a highly scalable codestream which attempts to maintain visual quality at all bit rates; a specific application of the proposed algorithm to JPEG-2000 is presented.

16.
Existing trajectory data compression algorithms cannot evaluate key points accurately and efficiently, and their running times are long. To address these shortcomings, a trajectory data compression algorithm based on angular offset computation is proposed, which compares the angular offsets between a key point and the feature points before and after it. Its main principle is to use the concavity/convexity of the trajectory to select feature points and determine key points, compare the angles formed by the lines connecting a key point to its preceding and following feature points, and keep or discard those feature points according to a preset angle threshold. Compared with the classical Douglas-Peucker (DP) algorithm, the proposed algorithm has the shortest running time and strongly favours retaining points that carry the most spatial information. Compared with SQUISH-E, an improved version of the popular heuristic spatial-quality simplification algorithm, it also has the shortest running time at the same compression ratio.
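A Python sketch of the angle-threshold test described above, assuming planar (x, y) points; the feature-point preselection via concavity/convexity is folded into a single pass for brevity:

```python
import math

def angle_offset_compress(points, angle_threshold_deg=30.0):
    """Keep a point when the heading change between the segments joining it to
    its neighbours exceeds the preset angle threshold; endpoints are always
    kept."""
    if len(points) <= 2:
        return list(points)
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        h1 = math.atan2(cur[1] - prev[1], cur[0] - prev[0])
        h2 = math.atan2(nxt[1] - cur[1], nxt[0] - cur[0])
        offset = abs(math.degrees(h2 - h1))
        offset = min(offset, 360.0 - offset)         # wrap to [0, 180]
        if offset >= angle_threshold_deg:
            kept.append(cur)
    kept.append(points[-1])
    return kept
```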

17.
Chen Chen, Zhang Limao, Tiong Robert Lee Kong. Wireless Networks, 2020, 26(8): 5981-5995
Wireless Networks - Wireless sensor networks (WSNs) generate a variety of continuous data streams. To reduce data storage and transmission cost, compression is recommended to be applied to the data...

18.
19.
This paper presents a fast codebook search method for improving the quantization complexity of full-search vector quantization (VQ). The proposed method is built on the planar Voronoi diagram to label a ripple search domain. The appropriate codeword can then easily be found by searching only the local region instead of the whole codebook. To bring the result closer to that of full-search VQ, the proposed method is equipped with a duplication mechanism that helps to bring the possible quantizing distortion down to its lowest level. According to the experimental results, the proposed method provides better outcomes at a faster quantization speed than the existing partial-search methods. Moreover, the proposed method only requires a little extra storage for duplication.

20.
We examine the rate-distortion performance and computational complexity of linear transforms for lossy data compression. The goal is to better understand the performance/complexity tradeoffs associated with using the Karhunen-Loeve transform (KLT) and its fast approximations. Since the optimal transform for transform coding is unknown in general, we investigate the performance penalties associated with using the KLT by examining cases where the KLT fails, developing a new transform that corrects the KLT's failures in those examples, and then empirically testing the performance difference between this new transform and the KLT. Experiments demonstrate that while the worst KLT can yield transform coding performance at least 3 dB worse than that of alternative block transforms, the performance penalty associated with using the KLT on real data sets seems to be significantly smaller, giving at most 0.5 dB difference in our experiments. The KLT and its fast variations studied here range in complexity requirements from O(n^2) to O(n log n) in coding vectors of dimension n. We empirically investigate the rate-distortion performance tradeoffs associated with traversing this range of options. For example, an algorithm with complexity O(n^(3/2)) and memory O(n) gives 0.4 dB performance loss relative to the full KLT in our image compression experiments.
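A short Python sketch of the full KLT path (covariance, eigendecomposition, transform, coefficient truncation) whose costs the fast approximations aim to reduce; the crude keep-the-leading-coefficients step stands in for actual quantization:

```python
import numpy as np

def klt_basis(training_vectors):
    """Estimate the KLT for n-dimensional coding vectors: eigenvectors of the
    sample covariance, ordered by decreasing eigenvalue.  The O(n^2) transform
    (and O(n^3) basis computation) is what the fast variants approximate."""
    X = np.asarray(training_vectors, float)
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order]                 # columns are the KLT basis vectors

def transform_code(x, basis, keep):
    """Transform a vector, keep the `keep` leading coefficients, reconstruct."""
    coeffs = basis.T @ x
    coeffs[keep:] = 0.0                      # crude stand-in for quantization
    return basis @ coeffs
```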
