Similar Literature (20 results)
1.
Involution codes: with application to DNA coded languages
For an involution θ : Σ* → Σ* over a finite alphabet Σ, we consider involution codes: θ-infix, θ-comma-free, θ-k-codes, and θ-subword-k-codes. These codes arise from questions on DNA strand design. We investigate conditions under which both X and X+ are involution codes of the same type. General methods for generating such involution codes are given. The information capacity of these codes is shown to be optimal in most cases. A specific set of these codes was chosen for experimental testing, and the results of these experiments are presented.
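As a hedged illustration of the kind of property these codes enforce, the sketch below checks a simplified reading of the θ-infix condition for the Watson-Crick involution (reverse the strand, then complement each base); the set `codes` and the exact condition tested are illustrative assumptions, not the paper's formal definitions.

```python
def wc_involution(s):
    # Watson-Crick involution: reverse the strand, complement each base.
    comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
    return ''.join(comp[b] for b in reversed(s))

def is_theta_infix(X):
    # Simplified theta-infix check (an assumption, not the paper's formal
    # definition): no theta-image of a codeword may occur as a substring
    # of any codeword in X.
    return not any(wc_involution(y) in x for x in X for y in X)

codes = {"ACCA", "CTTC"}          # hypothetical strand set
print(is_theta_infix(codes))      # True: no Watson-Crick image is an infix
```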

2.
This paper presents a new lossless raster font compression method that uses a vertex chain code to define each character's outline. The resulting chain codes are compressed with the Huffman coding algorithm. The results show that, among known methods, the new method requires the least memory to store the raster fonts. Moreover, the font size has almost no impact on coder efficiency. Because the decoder has low complexity and occupies only 2.7 kB of memory, the method is well suited to embedded systems.

3.
The Huffman algorithm constructs optimal prefix codes with O(n·log n) complexity. As the number of symbols n grows, so does the cost of building the codewords. In this paper, a new algorithm and implementation are proposed that achieve nearly optimal coding without sorting the probabilities or building a tree of codes. The complexity is proportional to the maximum code length, making the algorithm especially attractive for large alphabets. The focus is on achieving almost optimal coding with a fast implementation suitable for real-time compression of large volumes of data. A practical case study on compressing checkpoint files is presented, with encouraging results. Copyright © 2015 John Wiley & Sons, Ltd.
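For contrast with the sort-free method the abstract describes (whose details are not given here), this is a minimal sketch of the textbook tree-based Huffman construction it avoids; the frequency table is an illustrative assumption.

```python
import heapq
from itertools import count

def huffman_code_lengths(freqs):
    # Textbook O(n log n) Huffman construction via a min-heap -- the
    # tree-building baseline that the paper's approximate method avoids.
    tie = count()  # tie-breaker so equal frequencies never compare dicts
    heap = [(f, next(tie), {sym: 0}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level down.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2]  # symbol -> optimal code length

print(huffman_code_lengths({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5}))
```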

4.
Multiresolution meshes provide an efficient and structured representation of geometric objects. To increase the mesh resolution only at vital parts of the object, adaptive refinement is widely used. We propose a lossless compression scheme for these adaptive structures that exploits the parent–child relationships inherent in the mesh hierarchy. We use the rules of the adaptive refinement scheme and store bits only where some freedom of choice is left, leading to compact codes that are free of redundancy. Moreover, we extend the coder to sequences of meshes with varying refinement. The connectivity compression ratio of our method exceeds that of state-of-the-art coders by a factor of 2–7. For efficient compression of vertex positions we adapt popular wavelet-based coding schemes to the adaptive triangular and quadrangular cases to demonstrate compatibility with our method. Like state-of-the-art coders, we use a zerotree to encode the resulting coefficients. With improved context modelling we enhance the zerotree compression, cutting the overall geometry data rate 7% below that of the successful Progressive Geometry Compression. More importantly, by exploiting the existing refinement structure we achieve compression factors four times greater than those of coders that can handle irregular meshes.

5.
Jürgen Abel, Software, 2010, 40(9): 751–777
The lossless Burrows–Wheeler compression algorithm has received considerable attention in recent years for both its simplicity and its effectiveness. It is based on a permutation of the input sequence, the Burrows–Wheeler transform (BWT), which groups symbols with a similar context close together. In the original version, this permutation was followed by a Move-To-Front transform and a final entropy coding stage. Later versions replaced the stages after the BWT with different algorithms, since these stages have a significant influence on the compression rate. This paper describes different algorithms and improvements for these post-BWT stages, including a new context-based approach. Compression rates are presented together with compression and decompression times on the Calgary corpus, the Canterbury corpus, the large Canterbury corpus and the Lukas 2D 16-bit medical image corpus. Copyright © 2010 John Wiley & Sons, Ltd.
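A minimal sketch of the two classic stages the abstract names, the BWT and the Move-To-Front transform; the naive rotation sort and the '\0' end marker are simplifications for illustration (production coders use suffix arrays and an explicit row index instead).

```python
def bwt(s):
    # Naive Burrows-Wheeler transform: sort all rotations of the input
    # and keep the last column. '\0' serves as a unique end marker here.
    s += '\0'
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(r[-1] for r in rotations)

def move_to_front(s):
    # Move-To-Front: recently used symbols get small indices, so the
    # BWT's local runs of similar symbols become runs of small numbers
    # that the final entropy coder can exploit.
    alphabet = sorted(set(s))
    out = []
    for ch in s:
        i = alphabet.index(ch)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

transformed = bwt('banana')
print(repr(transformed), move_to_front(transformed))
```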

6.
Tong Lai Yu, Software, 1996, 26(11): 1181–1195
This paper presents the design and implementation of a data compression scheme that can be used for PC software distribution. The method uses a lazy parsing strategy and a large sliding window to obtain a good compression ratio. A large window is used to read in characters from a file, and a suffix tree is constructed to search for the longest matching substring. Lazy parsing improves the compression performance moderately. Modified unary codes and Huffman codes encode the displacements, copy lengths and copied symbols. Although the encoder is complex, the expansion phase of such a coder is simple and very fast; experimental results confirm this. Such a compression scheme is therefore well suited to PC software distribution.
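A hedged sketch of the lazy-parsing idea: before committing to a match at the current position, the encoder peeks one position ahead and prefers the longer match. The brute-force window scan below stands in for the paper's suffix tree, and the window size and minimum match length are illustrative assumptions.

```python
def longest_match(data, pos, window):
    # Brute-force stand-in for the paper's suffix-tree search: find the
    # longest match for data[pos:] starting inside the sliding window.
    best_len, best_off = 0, 0
    for cand in range(max(0, pos - window), pos):
        length = 0
        while pos + length < len(data) and data[cand + length] == data[pos + length]:
            length += 1  # matches may overlap the current position, as in LZ77
        if length > best_len:
            best_len, best_off = length, pos - cand
    return best_len, best_off

def lazy_parse(data, window=4096, min_len=3):
    # Lazy parsing: if the match starting at pos+1 is strictly longer than
    # the one at pos, emit a literal now and take the better match next.
    out, pos = [], 0
    while pos < len(data):
        length, off = longest_match(data, pos, window)
        if length >= min_len and longest_match(data, pos + 1, window)[0] <= length:
            out.append(('copy', off, length))
            pos += length
        else:
            out.append(('lit', data[pos]))
            pos += 1
    return out

print(lazy_parse(b'abracadabra abracadabra'))
```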

7.
This paper presents a new lossless ECG compression scheme. The short-term predictor and the coder use conditioning on a small number of contexts. The long-term prediction is based on an algorithm for R-R interval estimation. Several QRS detection algorithms are investigated in order to select a reliable, low-complexity detector. The coding of prediction residuals primarily uses Golomb-Rice (GR) codes but, to improve the coding results, escape codes (GR-ESC) are used in some contexts for a limited number of samples. Experimental results indicate good overall performance of the lossless ECG compression algorithms, reducing storage needs from 12 to about 3-4 bits per sample. The scheme consistently outperforms other waveform or general-purpose coding algorithms.
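A minimal sketch of Golomb-Rice coding of prediction residuals, assuming the usual zigzag mapping from signed residuals to non-negative integers; the parameter k and the sample values are illustrative, and the paper's context modelling and GR-ESC escape mechanism are not reproduced.

```python
def zigzag(v):
    # Map a signed residual to a non-negative integer:
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return (v << 1) if v >= 0 else (-v << 1) - 1

def rice_encode(n, k):
    # Golomb-Rice code with parameter k >= 1: quotient n >> k in unary
    # ('1' * q plus a terminating '0'), remainder in k plain bits.
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0%db' % k)

residuals = [0, -1, 2, -3, 5]  # hypothetical prediction errors
print([rice_encode(zigzag(v), k=2) for v in residuals])
```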

8.
Conventional polyphase pulse compression codes, including the Frank, P1, P2, P3 and P4 codes, suffer severe performance loss in Doppler environments. This paper proposes a new family of polyphase pulse compression codes, conceptually derived from the step approximation of the phase curve of a hyperbolic frequency-modulated chirp signal. Compared with the conventional codes above and the sidelobe-optimized polyphase P(n,k) code, the peak value of the new codes degrades much more slowly, and the range resolution as well as the maximum sidelobe level remain almost constant as the Doppler frequency increases. The main disadvantage of the new codes is their relatively high sidelobe level in the absence of Doppler, which can be addressed by applying a proper window function. This Doppler-tolerant property makes the new polyphase codes very attractive for radars employing digital signal processing.
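For reference, this sketch generates the conventional P4 code (one of the codes the paper compares against) and measures its peak autocorrelation sidelobe; the new hyperbolic-FM-derived code itself is not reproduced, and the code length N is an arbitrary choice.

```python
import numpy as np

def p4_code(N):
    # Conventional P4 polyphase code: a step approximation of a *linear*
    # FM chirp (the paper's new code instead steps a hyperbolic FM chirp).
    i = np.arange(N)
    return np.exp(1j * (np.pi * i**2 / N - np.pi * i))

code = p4_code(64)
acf = np.abs(np.correlate(code, code, mode='full'))  # aperiodic autocorrelation
sidelobes = np.delete(acf, acf.argmax())             # drop the mainlobe peak
print('peak sidelobe: %.1f dB' % (20 * np.log10(sidelobes.max() / acf.max())))
```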

9.
We present a generic framework for the compression of densely sampled three-dimensional (3D) surfaces, in order to satisfy the increasing demand for storing large amounts of 3D content. We decompose a given surface into patches that are parameterized as elevation maps over planar domains and resampled on regular grids. The resulting shaped images are encoded using a state-of-the-art wavelet image coder. We show that our method is not only applicable to mesh- and point-based geometry, but also outperforms current surface encoders for both primitives.

10.
Reed–Solomon coding is a method for generating arbitrary amounts of erasure-correction information from original data via matrix-vector multiplication in finite fields. Previous work has shown that modern CPUs are not well matched to this type of computation, forcing applications that depend on high-speed Reed–Solomon coding (such as high-performance storage arrays) to use hardware implementations. This work demonstrates that high performance is possible with current cost-effective graphics processing units across a wide range of operating conditions, and describes how performance will likely evolve in similar architectures. It describes the characteristics of the graphics processing unit architecture that enable high-speed Reed–Solomon coding. A practical high-performance library, Gibraltar, has been prototyped that performs Reed–Solomon coding on graphics processors in a manner suitable for storage arrays, along with applications with similar data-resiliency needs. The library enables variably resilient erasure-correcting codes to be used in a broad range of applications. Its performance is compared with that of a widely available CPU implementation, a rationale for its API is presented, and its practicality is demonstrated through a usage example. Copyright © 2011 John Wiley & Sons, Ltd.
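A minimal CPU sketch of the erasure-coding core the abstract describes, computing parity words as a matrix-vector product over GF(2^8); the particular generator rows and data words are illustrative assumptions, and real libraries such as Gibraltar build carefully chosen Vandermonde or Cauchy matrices and run this product on the GPU.

```python
import functools, operator

def gf_mul(a, b):
    # Carry-less multiply in GF(2^8), reduced modulo x^8+x^4+x^3+x^2+1 (0x11d).
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def rs_parity(data_words, coding_rows):
    # parity = F * data over GF(2^8); XOR plays the role of addition.
    return [functools.reduce(operator.xor,
                             (gf_mul(f, d) for f, d in zip(row, data_words)))
            for row in coding_rows]

data = [0x12, 0x34, 0x56, 0x78]   # four hypothetical data words
rows = [[1, 1, 1, 1],             # Vandermonde-style rows: [j**i for j = 1..4]
        [1, 2, 3, 4]]
print([hex(p) for p in rs_parity(data, rows)])
```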

11.
A new class of variable-to-variable-length compression codes, called state-flip run-length codes, is proposed. To compress a precomputed test set, the method directly encodes the lengths of runs of both '0's and '1's in the test sequences; it is thus not restricted to encoding runs of '0's only, as in earlier work, and it also removes the extra bit that alternating run-length codes must append when encoding two adjacent runs. The decompression structure is a simple finite-state machine and requires no cyclical scan shift register of the same length as the scan chain. Experimental results show that the code compresses test data effectively.
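A small sketch of the run extraction this scheme starts from, as described above: because runs of '0's and '1's strictly alternate, only the first bit and the sequence of run lengths need to be coded. The codeword assignment and the finite-state-machine decompressor are not reproduced.

```python
from itertools import groupby

def alternating_runs(bits):
    # Split a test vector into maximal runs of identical bits. Since run
    # values alternate, the first bit plus the run lengths reconstruct it.
    return bits[0], [len(list(g)) for _, g in groupby(bits)]

first, lengths = alternating_runs('0000111001111110')
print(first, lengths)  # '0' [4, 3, 2, 6, 1]
```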

12.
Mesh topology compression is a fundamental algorithm in computer graphics. This paper presents a single-resolution method for losslessly compressing the topology of non-triangular mesh models. The algorithm first traverses all polygons of the mesh to obtain a sequence of operations; it then Huffman-codes this sequence and finally applies context-based, variable-length arithmetic coding to the Huffman output to produce the compressed result. The method achieves better compression than existing high-ratio coders for non-triangular mesh topology. Another notable advantage is improved decoding time and space: the decoder can decode each polygon's code immediately upon receipt and then discard it, which makes the algorithm particularly suitable for real-time and interactive applications with online transmission and decoding. In addition, the algorithm can handle models with holes and handles.

13.
A method for compressing large binary images is proposed for applications where spatial access to the image is required. The proposed method is a two-stage combination of forward-adaptive modeling and backward-adaptive context-based compression with re-initialization of statistics. The method improves compression performance significantly in comparison with a straightforward combination of JBIG and tiling. Only minor modifications to the QM-coder are required, so existing software implementations can easily be reused; technical details of the modifications are provided. Copyright © 1999 John Wiley & Sons, Ltd.

14.
Semistatic byte-oriented word-based compression codes have been shown to be an attractive alternative for compressing natural-language text databases, because of the combination of speed, effectiveness and direct searchability they offer. In particular, our recently proposed family of dense compression codes has been shown to be superior to the more traditional byte-oriented word-based Huffman codes in most respects. In this paper, we focus on the problem of transmitting texts among peers that do not share the vocabulary, the typical scenario for adaptive compression methods. We design adaptive variants of our semistatic dense codes and show that they are much simpler and faster than dynamic Huffman codes while reaching almost the same compression effectiveness. Our variants offer a very compelling trade-off between compression/decompression speed, compression ratio and search speed compared with most state-of-the-art general compressors. Copyright © 2008 John Wiley & Sons, Ltd.
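A sketch of the encoder of one member of this family, assuming it works like the published End-Tagged Dense Code: words are ranked by frequency, continuation bytes carry values 0-127, and the final byte of each codeword is flagged by its high bit, which is what makes the codes dense and directly searchable.

```python
def etdc_encode(rank):
    # End-Tagged Dense Code: the most frequent word (rank 0) gets the
    # shortest codeword; the last byte has its high bit set (128..255),
    # earlier bytes are plain 0..127 digits of a base-128 number.
    out = [0x80 | (rank % 128)]
    rank = rank // 128 - 1
    while rank >= 0:
        out.append(rank % 128)
        rank = rank // 128 - 1
    return bytes(reversed(out))

for r in (0, 127, 128, 16511, 16512):
    print(r, etdc_encode(r).hex())  # 1-byte codes up to rank 127, then 2, ...
```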

15.
This paper describes the results of a general theory of matrix codes correcting a set of given types of multiple errors. A detailed study has been made of certain matrix classes of these systematic binary error-correcting codes that correct typical errors of some digital channels. The codes published by Elias,(2,3) Hobbs(5) and Voukalis(11) are covered by this theory, and new families of binary systematic matrix codes of arbitrary size, correcting random errors, bursts and clusters of errors, are given here, together with the basic ideas behind each of them. Practical decoding algorithms can easily be found for each of these codes. The calculation of the parity-check equations that the information matrix codebook has to satisfy is also shown. Further on, we deal with the optimal construction of these codes and show their use in certain applications, answering questions such as: “What is the optimum size of the code?”, “What is the best structure of the code?” and “What is the probability of error correction and the mean error-correction performance?”. Consequently, this paper also describes the results of an extensive search for optimum matrix codes designed to correct a given set of multiple errors, as well as their implementation.

16.
The Alliez-Desbrun (AD) coder has achieved the best compression ratios for multiresolution 2-manifold meshes in the last decade. This paper presents a Bayesian AD coder with better connectivity-coding compression ratios than the original coder, based on a mesh-aware valence coding scheme for multiresolution meshes. In contrast to the original AD coder, which directly encodes a valence for each decimated vertex, our coder encodes the valence indirectly, by its rank in a list sorted by the mesh-aware scores of the possible valences. Experimental results show that the Bayesian AD coder improves connectivity coding by 8.5-36.2% over the original AD coder, even though only a simple coarse-to-fine, mesh-aware valence coding step is plugged into the original algorithm.

17.
This paper introduces the meteor-burst channel and its characteristics, and the application of channel-coding techniques in meteor-burst communication systems. Based on an analysis of how low-density parity-check (LDPC) codes can be applied in such systems, several principles for their use are proposed. The study shows that, when transmit power (signal-to-noise ratio) is limited, a meteor-burst communication system can exploit the simultaneous error-correcting and error-detecting capabilities of LDPC codes to achieve a high rate of correct reception. Finally, to cope with the varying channel parameters of the meteor-burst channel, irregular short-length LDPC codes are designed that outperform regular LDPC codes.

18.
This paper discusses the optimal coding of uniformly quantized Laplacian sources. Techniques known for designing optimal codes for sources with infinite alphabets are applied to quantized Laplacian sources, whose probability mass functions have two geometrically decaying tails. Thanks to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically using the concept of a reduced source, and the final codes are obtained through a sequence of very simple arithmetic operations, avoiding the need to store coding tables. Comparing three uniform quantizers, we find one that consistently outperforms the others in the rate-distortion sense. We foresee an important area of application for the newly introduced codes in low-complexity lossy image coding, since similar codes, designed for two-sided geometric sources, became the basic tools of JPEG-LS lossless image compression.
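For illustration, here is the classical Golomb code for a one-sided geometric source, with the truncated-binary remainder that makes general (non-power-of-two) parameters work; the parameter m = 3 is an arbitrary choice, and the paper's analytic construction for two-sided tails is not reproduced.

```python
import math

def golomb_encode(n, m):
    # Golomb code with parameter m >= 2: quotient n // m in unary, then
    # the remainder in a truncated binary code (b-1 bits for the first
    # 2**b - m remainders, b bits for the rest, where b = ceil(log2 m)).
    q, r = divmod(n, m)
    b = math.ceil(math.log2(m))
    cutoff = (1 << b) - m
    rem = format(r, '0%db' % (b - 1)) if r < cutoff else format(r + cutoff, '0%db' % b)
    return '1' * q + '0' + rem

print([golomb_encode(n, 3) for n in range(6)])
# ['00', '010', '011', '100', '1010', '1011']
```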

19.
An improvement to a method of extracting line and region information from technical drawings is presented. A scheme for compressing this information as it is being found is also given. The encoding scheme is based on a combination of run-length codes, direction chain codes, and Huffman codes. It is efficient, requires minimal storage, and achieves favorable compression ratios. Experimental results from four typical line drawings are presented.
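A minimal sketch of the direction-chain-code stage mentioned above, assuming the common 8-direction Freeman convention; the contour points are an illustrative assumption, and the run-length and Huffman stages that follow are not reproduced.

```python
# 8-direction Freeman chain code: index i encodes the step (dx, dy) below.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1)]

def chain_code(points):
    # Replace each step between successive contour pixels with a 3-bit
    # direction symbol; run-length and Huffman stages compress these further.
    return [DIRS.index((x1 - x0, y1 - y0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

print(chain_code([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2)]))  # [0, 0, 6, 6, 4]
```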

20.
Efficient Lossless Image Contour Coding

