Similar Documents
20 similar documents found (search time: 15 ms)
1.
On entropy-constrained vector quantization using Gaussian mixture models
A flexible and low-complexity entropy-constrained vector quantizer (ECVQ) scheme based on Gaussian mixture models (GMMs), lattice quantization, and arithmetic coding is presented. The source is assumed to have a probability density function given by a GMM. An input vector is first classified to one of the mixture components, and the Karhunen-Loève transform of the selected mixture component is applied to the vector, followed by quantization using a lattice-structured codebook. Finally, the scalar elements of the quantized vector are entropy coded sequentially using a specially designed arithmetic coder. The computational complexity of the proposed scheme is low and independent of the coding rate in both the encoder and the decoder. The proposed scheme therefore serves as a lower-complexity alternative to the GMM-based ECVQ proposed by Gardner, Subramaniam, and Rao [1]. The performance of the proposed scheme is analyzed under a high-rate assumption and quantified for a given GMM. The practical performance of the scheme was evaluated through simulations on both synthetic and speech line spectral frequency (LSF) vectors. For LSF quantization, the proposed scheme performs comparably to [1] at rates relevant for speech coding (20-28 bits per vector) with lower computational complexity.
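Below is a minimal Python sketch of the classification and KLT front end this abstract describes, with an invented 3-component GMM; it illustrates the idea only, not the authors' codec (the lattice quantizer and arithmetic coder are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
d, M = 4, 3                                   # vector dimension, number of mixtures
weights = np.array([0.5, 0.3, 0.2])           # invented GMM parameters
means = rng.normal(size=(M, d))
covs = [np.eye(d) * s for s in (1.0, 0.5, 2.0)]

def log_gauss(x, mu, cov):
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))

x = rng.normal(size=d)                        # input vector
scores = [np.log(weights[m]) + log_gauss(x, means[m], covs[m]) for m in range(M)]
m_star = int(np.argmax(scores))               # classification step

# KLT of the selected component: rotate into its covariance eigenbasis
_, eigvecs = np.linalg.eigh(covs[m_star])
y = eigvecs.T @ (x - means[m_star])           # decorrelated vector, ready for
print(m_star, y)                              # lattice quantization + entropy coding
```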

2.
Recently, arithmetic coding has attracted the attention of many scholars because of its high compression capability. Accordingly, in this paper, a method that adds secrecy to this well-known source code is proposed. A finite-state arithmetic code is used as the source code, and its finite-state-machine character is exploited to insert random jumps during the source coding process. In addition, a Huffman code is designed for each state so that decoding remains possible even across jumps. Being prefix-free, the Huffman codes let an authorized user track the correct states when decoding with the correct symmetric pseudo-random key. The robustness of the proposed scheme is further reinforced by adding extra uncertainty: the outputs of the Huffman codes in each state are swapped. Several test images are used to inspect the validity of the proposed Huffman finite-state arithmetic coding (HFSAC). The results of several experimental key-space analyses, statistical analyses, and key and plaintext sensitivity tests show that HFSAC provides an efficient and secure method for real-time image encryption and transmission, with little effect on compression efficiency.
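A toy sketch of the keyed-jump idea, assuming invented per-state prefix-free tables and an arbitrary jump probability; the real HFSAC design (state transitions, output swapping) is not reproduced here.

```python
import random

# one prefix-free (Huffman-style) table per state -- invented for illustration
tables = {
    0: {'a': '0', 'b': '10', 'c': '11'},
    1: {'a': '11', 'b': '0', 'c': '10'},
}

def encode(symbols, key_seed, jump_prob=0.3):
    prng = random.Random(key_seed)           # shared secret key drives the jumps
    state, out = 0, []
    for s in symbols:
        out.append(tables[state][s])
        if prng.random() < jump_prob:        # keyed random jump to another state
            state = prng.randrange(len(tables))
    return ''.join(out)

print(encode('abcab', key_seed=42))          # a decoder with the same key stays in sync
```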

3.
The encoding schemes utilize first- and second-order Markov models to describe the source structure. Two coding techniques, Huffman encoding and arithmetic encoding, are used to achieve high coding efficiency. Universal code tables, matched to the statistics of contour line drawings obtained from 64 contour maps, are presented and can be applied to encode any contour line drawing with a chain-code representation. Experiments show about a 50% reduction in code amount over the conventional chain encoding scheme when arithmetic coding is used, and a compression rate comparable to that obtained by T. Kaneko and M. Okudaira (1985) when Huffman coding is used, while this implementation is substantially simpler.
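A small sketch of why a first-order Markov model pays off for chain codes: the conditional entropy of a synthetic 8-direction chain-code sequence falls well below the raw 3 bits per direction. The sequence and counts are invented.

```python
from collections import Counter
import math

chain = [0, 0, 1, 1, 1, 2, 2, 1, 0, 0, 7, 7, 0] * 20   # synthetic 8-direction chain code
trans = Counter(zip(chain, chain[1:]))       # first-order transition counts
prev = Counter(chain[:-1])
total = sum(prev.values())

h = 0.0
for c, n_c in prev.items():
    cond = {d: n for (a, d), n in trans.items() if a == c}
    t = sum(cond.values())
    h += (n_c / total) * -sum(n / t * math.log2(n / t) for n in cond.values())
print(h)    # bits/symbol under the first-order model (< 3 bits for raw directions)
```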

4.
Block cyclic redundancy check (CRC) codes are typically used to perform error detection in automatic repeat request (ARQ) protocols for data communications. Although efficient, CRCs can detect errors only after an entire block of data has been received and processed. We propose a new "continuous" error detection scheme using arithmetic coding that provides a novel tradeoff between the amount of added redundancy and the time needed to detect an error once it occurs. This method of error detection, first introduced by Bell, Witten, and Cleary (1990), is achieved through the use of an arithmetic codec and has the attractive feature that it can be combined physically with arithmetic source coding, which is widely used in state-of-the-art image coders. We analytically optimize the tradeoff between added redundancy and error-detection time, achieving significant gains in bit-rate throughput over conventional ARQ schemes on binary symmetric channel models for all probabilities of error.
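A minimal sketch of the forbidden-region mechanism behind such continuous error detection, with an assumed binary source model and an illustrative redundancy fraction EPS: a fraction of every coding interval is reserved for no valid symbol, so a decoder that ever lands there has detected an error.

```python
EPS = 0.05                                   # redundancy reserved per coded symbol
P0 = 0.6                                     # model probability of bit '0' (assumed)

def encode_interval(bits):
    lo, hi = 0.0, 1.0
    for b in bits:
        usable = (hi - lo) * (1.0 - EPS)     # top EPS fraction stays forbidden
        if b == '0':
            hi = lo + usable * P0
        else:
            lo, hi = lo + usable * P0, lo + usable
        # corrupted codewords eventually fall into a reserved slice at the decoder
    return lo, hi                            # any value in [lo, hi) encodes the bits

print(encode_interval('0110'))
```

Smaller EPS means less redundancy but a longer expected wait before an error strays into a forbidden slice, which is exactly the tradeoff the paper optimizes.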

5.
For efficient, reliable, and secure transmission of information over resource-constrained networks such as deep-space and mobile communications, this paper proposes a joint source-channel and secure arithmetic coding/decoding algorithm controlled by chaotic keys. At the encoder, chaotic map 1 controls the embedding of multiple forbidden symbols in the arithmetic code, combining channel-coding error detection with cipher-stream scrambling; meanwhile, chaotic map 2 controls the arithmetic coding of the source symbols, combining source coding with information security, thus realizing joint source-channel and security coding/decoding. Experimental results show that, compared with existing algorithms of the same kind, the proposed algorithm improves coding/decoding performance by 0.4 dB at a packet error rate of 10⁻³ while enhancing reliability and security.
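A hedged sketch of the chaotic key streams the abstract describes: two logistic maps (parameters, seeds, and thresholds are assumptions) generate keyed decisions, one selecting where forbidden symbols are embedded and one perturbing the source-symbol coding.

```python
def logistic_stream(x0, n, r=3.99):
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)                # chaotic iteration
        xs.append(x)
    return xs

# map 1: keyed positions at which forbidden symbols are embedded
positions = [i for i, v in enumerate(logistic_stream(0.31, 20)) if v > 0.8]
# map 2: a second keyed stream that perturbs the source-symbol coding
perm_bits = [int(v > 0.5) for v in logistic_stream(0.47, 20)]
print(positions, perm_bits)
```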

6.
We propose an optimal buffered compression algorithm for shape coding as defined in the forthcoming MPEG-4 international standard. The MPEG-4 shape coding scheme consists of two steps: first, distortion is introduced by down- and up-scaling; then, context-based arithmetic encoding is applied. Since arithmetic coding is lossless, the down/up scaling step can be regarded as a virtual quantizer. We first formulate the buffer-constrained adaptive quantization problem for shape coding and then propose an algorithm for the optimal solution under buffer constraints. It was previously reported for MPEG-4 shape coding that a conversion ratio (CR) of 1/4 makes a coded image irritating to human observers at QCIF size, so small images such as QCIF require careful handling to keep coded images acceptable. To this end, a low-bit-rate-tuned algorithm is also proposed in this paper. Experimental results are given using an MPEG-4 shape codec.
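A toy illustration of the "virtual quantizer" using nearest-neighbor down/up scaling of a random binary block; the resampling filter and block size are assumptions, not the MPEG-4 definition.

```python
import numpy as np

def down_up(block, cr):
    n = block.shape[0]
    m = max(1, int(n * cr))                  # assumes m divides n in this toy version
    idx = np.arange(m) * (n // m)
    small = block[np.ix_(idx, idx)]          # down-scale by subsampling
    return np.repeat(np.repeat(small, n // m, 0), n // m, 1)   # up-scale back

rng = np.random.default_rng(1)
block = (rng.random((16, 16)) > 0.5).astype(np.uint8)
for cr in (1.0, 0.5, 0.25):                  # distortion grows as CR shrinks
    print(cr, np.mean(block != down_up(block, cr)))
```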

7.
Arithmetic coding in lossless waveform compression
A method for applying arithmetic coding to lossless waveform compression is discussed. Arithmetic coding has been used widely in lossless text compression and is known to produce compression ratios that are nearly optimal when the symbol table consists of an ordinary alphabet. In lossless compression of digitized waveform data, however, if each possible sample value is viewed as a "symbol," the symbol table is typically very large and impractical. The authors therefore define a symbol to be a range of possible waveform values, rather than a single value, and develop a coding scheme on this basis. The coding scheme consists of two compression stages. The first stage is lossless linear prediction, which removes coherent components from a digitized waveform and produces a residue sequence assumed to have a white spectrum and a Gaussian amplitude distribution. The prediction is lossless in the sense that the original digitized waveform can be recovered by processing the residue sequence. The second stage, which is the subject of the present paper, is arithmetic coding used as just described. A formula for selecting ranges of waveform values is provided. Experiments with seismic and speech waveforms that produce near-optimal results are included.
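A sketch of the two-stage idea under stated assumptions: a first-order integer predictor (lossless, since it is exactly invertible) followed by mapping each residue to one of n equal-probability Gaussian ranges. A real codec would also have to code the residue's position inside its range to remain lossless; the predictor order and range count here are invented.

```python
import math

def residues(samples):
    # first-order integer predictor: residue = x[n] - x[n-1]; exactly invertible
    return [samples[0]] + [samples[i] - samples[i - 1] for i in range(1, len(samples))]

def range_symbol(r, sigma, n_ranges=8):
    # split the Gaussian CDF into n_ranges equal-probability bins
    u = 0.5 * (1.0 + math.erf(r / (sigma * math.sqrt(2.0))))
    return min(int(u * n_ranges), n_ranges - 1)

res = residues([10, 12, 11, 15, 14])
print(res, [range_symbol(r, sigma=2.0) for r in res])   # small symbol alphabet
```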

8.
We propose a novel adaptive arithmetic coding method that uses dual symbol sets: a primary symbol set containing all symbols that are likely to occur in the near future, and a secondary symbol set containing all other symbols. The simplest implementation of our method assumes that symbols that have appeared in the recent past are highly likely to appear in the near future; it therefore fills the primary set with symbols that have occurred previously. Symbols move dynamically between the two sets to adapt to the local statistics of the symbol source. The proposed method works well for sources, such as images, that are characterized by large alphabets and by alphabet distributions that are skewed and highly nonstationary. We analyze the performance of the proposed method and compare it to other arithmetic coding methods, both theoretically and experimentally. We show experimentally that in certain contexts, e.g., with a wavelet-based image coding scheme that has appeared in the literature, the compression performance of the proposed method is better than that of the conventional arithmetic coding method and the zero-frequency escape arithmetic coding method.
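A toy sketch of the dual-set bookkeeping, assuming a recency rule and an invented primary-set capacity; the actual probability models and escape coding are omitted.

```python
from collections import OrderedDict

class DualSet:
    def __init__(self, capacity=4):
        self.primary = OrderedDict()         # recency-ordered primary set
        self.capacity = capacity

    def code(self, sym):
        hit = sym in self.primary
        if hit:
            self.primary.move_to_end(sym)    # refresh recency
        else:
            if len(self.primary) >= self.capacity:
                self.primary.popitem(last=False)   # demote oldest to secondary
            self.primary[sym] = True         # promote the new symbol
        return 'primary' if hit else 'escape+secondary'

ds = DualSet()
print([ds.code(s) for s in 'abcabdaee'])     # local statistics drive set membership
```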

9.
In this paper, an innovative joint source-channel coding scheme is presented. The proposed approach enables iterative soft decoding of arithmetic codes by means of a soft-in soft-out decoder based on suboptimal search and pruning of a binary tree. An error-resilient arithmetic coder with a forbidden symbol is used to improve the performance of the joint source/channel scheme. Performance for transmission across the AWGN channel is evaluated in terms of word error probability and compared to a traditional separated approach. The interleaver gain, the convergence properties of the system, and the optimal source/channel rate allocation are investigated. Finally, the practical relevance of the proposed joint decoding approach is demonstrated within the JPEG 2000 coding standard: an iterative channel and JPEG 2000 decoder is designed and tested for image transmission across the AWGN channel.
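A hedged sketch of suboptimal tree search with pruning: an M-algorithm-style beam over bit paths, scored with a toy squared-distance metric for BPSK over AWGN. It illustrates the pruning idea only, not the paper's soft-in soft-out decoder or its source-model metric.

```python
def beam_decode(soft, M=4):
    paths = [([], 0.0)]                      # (decoded bits, path metric)
    for y in soft:                           # y: received soft value for one bit
        cand = []
        for bits, m in paths:
            for b in (0, 1):
                x = 1.0 - 2.0 * b            # BPSK: bit 0 -> +1, bit 1 -> -1
                cand.append((bits + [b], m - (y - x) ** 2))
        paths = sorted(cand, key=lambda t: -t[1])[:M]   # prune to the M best paths
    return paths[0][0]

print(beam_decode([0.9, -0.7, 0.2, -1.1]))   # -> [0, 1, 0, 1]
```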

10.
In this paper, a new still image coding scheme is presented. In contrast with standard tandem coding schemes, where redundancy is introduced after source coding, here it is introduced before source coding using real BCH codes. A joint channel model is first presented, corresponding to a memoryless mixture of Gaussian and Bernoulli-Gaussian noise; it can represent the source coder, the channel coder, the physical channel, and their corresponding decoders. Decoding algorithms are derived from this channel model and compared to a state-of-the-art real BCH decoding scheme. The proposed joint coding scheme is then compared with two reference tandem coding schemes for the robust transmission of still images. When the tandem scheme is not accurately tuned, the joint coding scheme outperforms it in all situations. Compared to a tandem scheme well tuned for a given channel situation, the joint coding scheme shows increased robustness as channel conditions worsen. The soft performance degradation observed as the channel worsens gives the joint source-channel coding scheme an additional advantage on fading channels, since a reconstruction of moderate quality may still be possible even when the channel is in a deep fade.
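A short sketch of sampling the stated channel model, a memoryless mixture of background Gaussian noise and Bernoulli-Gaussian impulses; the mixture parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_impulse = 10000, 0.05                   # impulse rate: an assumption
noise = rng.normal(0.0, 0.1, n)              # background Gaussian component
hits = rng.random(n) < p_impulse             # Bernoulli impulse occurrence
noise[hits] += rng.normal(0.0, 2.0, hits.sum())   # Gaussian impulse amplitude
print(noise.var())                           # ~ 0.1**2 + 0.05 * 2.0**2 = 0.21
```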

11.
JPEG2000 is a recently standardized image compression algorithm. The heart of this algorithm is the coding scheme known as embedded block coding with optimal truncation (EBCOT), which accounts for the majority of the compression algorithm's processing time. The EBCOT scheme consists of a bit-plane coder coupled to an MQ arithmetic coder. Recent bit-plane coder architectures can produce symbols at a higher rate than existing MQ arithmetic coders can absorb, so a high-throughput MQ arithmetic coder is required. We examine the existing MQ arithmetic coder architectures and develop novel techniques capable of absorbing the high symbol rate from high-performance bit-plane coders while providing flexible design choices.

12.
杨胜天, 《通信学报》 (Journal on Communications), 2004, 25(3): 119-125
By using table lookup, probability quantization, and a modified renormalization operation, a simple multiplication-free binary block arithmetic coding scheme is presented. Experimental results show that its average codeword length is closer to the theoretical lower bound of source coding theory than that of commonly used fast coders.
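A sketch of the multiplication-free idea in the spirit of QM/MQ-style coders: the least-probable-symbol width comes from a small quantized table instead of a multiply, and renormalization uses shifts only. The table values and state handling here are placeholders, not the paper's design.

```python
# quantized LPS interval widths; real coders derive these from a state table
Q_LPS = [0x5A1D, 0x2586, 0x1114, 0x080B]

def encode_bit(low, rng, bit, q):
    lps = Q_LPS[q]                           # table lookup replaces rng * p_lps
    mps_width = rng - lps
    if bit == 0:                             # bit 0 assumed to be the MPS
        rng = mps_width
    else:
        low, rng = low + mps_width, lps
    while rng < 0x8000:                      # renormalization: shifts only
        low, rng = low << 1, rng << 1
    return low, rng

low, rng = 0, 0x10000
for b in (0, 1, 0, 0):
    low, rng = encode_bit(low, rng, b, q=1)
print(hex(low), hex(rng))
```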

13.
Several compression techniques need to be integrated to achieve effective low-bit-rate coding of moving images. Image entropy codes are used in conjunction with either predictive or transform coding methods. In this paper, we investigate the possible advantages of using arithmetic codes for image entropy coding. A theory of source modeling is established based on the concepts of source parsing and conditioning trees. The key information-theoretic properties of conditioning trees are discussed, along with algorithms for the construction of optimal and suboptimal trees. The theory and algorithms are then applied to evaluating the performance of entropy coding for the discrete cosine transform coefficients of digital images from the "Walter Cronkite" video sequence. The performance of arithmetic codes is compared to that of a traditional combination of run-length and Huffman codes. The results indicate that binary arithmetic codes outperform run-length codes by 55 percent for low-rate coding of the zero-valued coefficients. Hexadecimal arithmetic codes provide a coding-rate improvement as high as 28 percent over truncated Huffman codes for the nonzero coefficients. The complexity of these arithmetic codes is suitable for practical implementation.

14.
Arithmetic coding algorithm with embedded channel coding
Elmasry, G.F., Electronics Letters, 1997, 33(20): 1687-1688
A joint lossless source and channel coding approach that incorporates error detection and correction capabilities into arithmetic coding is presented. The encoded binary data representation allows the source decoder to recover the source symbols even in the presence of channel errors. The self-synchronisation property of arithmetic coding, knowledge of the source statistics, and some added redundancy are used for error detection and correction.

15.
Research on source coding techniques based on IFS fractal theory
This paper presents the first systematic study of the relationship between source compression coding based on IFS fractal theory and traditional methods.

16.
Mutual information-based analysis of JPEG2000 contexts
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using mutual information, which is closely related to compression performance. We first show that when contexts are combined, the mutual information between the contexts and the encoded data decreases unless the combined contexts have the same conditional probability distributions. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes, where S(I, F) is the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well, and that the number of contexts used in the standard can be reduced without loss in coding performance.
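A small sketch of the mutual-information criterion: estimate I(context; bit) from joint counts and observe that merging two contexts with similar conditional distributions costs almost nothing. The counts are invented.

```python
import math

def mutual_info(joint):                      # joint[c][x]: count of (context c, bit x)
    total = sum(map(sum, joint))
    pc = [sum(row) / total for row in joint]
    px = [sum(row[x] for row in joint) / total for x in range(len(joint[0]))]
    mi = 0.0
    for c, row in enumerate(joint):
        for x, n in enumerate(row):
            if n:
                p = n / total
                mi += p * math.log2(p / (pc[c] * px[x]))
    return mi

counts = [[90, 10], [40, 60], [85, 15]]      # 3 contexts x 2 bit values (invented)
print(mutual_info(counts))
print(mutual_info([[175, 25], [40, 60]]))    # merging the two similar contexts
```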

17.
Arithmetic coding is a lossless coding method that effectively compresses source redundancy, driving the code rate toward the source entropy. Its excellent performance has led to increasingly wide use in multimedia: H.263, JBIG, and the latest image compression standard JPEG2000 all adopt it as a coding algorithm. This paper describes the basic arithmetic coding algorithm and a modified algorithm, gives concrete application examples of both, and analyzes the strengths and weaknesses of the coding as well as its future application prospects.
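A minimal floating-point arithmetic encoder illustrating the basic algorithm the article surveys: each symbol narrows [low, high) in proportion to its model probability, so the code length approaches the source entropy. Practical coders use integer arithmetic with renormalization; this toy version ignores precision issues.

```python
def arith_encode(msg, probs):
    low, high = 0.0, 1.0
    for s in msg:
        width = high - low
        cum = 0.0
        for sym, p in probs.items():         # find the symbol's cumulative slot
            if sym == s:
                low, high = low + cum * width, low + (cum + p) * width
                break
            cum += p
    return (low + high) / 2                  # any value in [low, high) decodes to msg

print(arith_encode('aab', {'a': 0.7, 'b': 0.3}))
```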

18.
In this paper, we propose an entropy minimization histogram mergence (EMHM) scheme that can significantly reduce the number of grayscales with nonzero pixel populations (GSNPP) without visible loss of image quality. We prove in theory that the entropy of an image is reduced after histogram mergence and that the reduction in entropy is maximized by our EMHM. The reduction in image entropy benefits entropy encoding, since by Shannon's first theorem the minimum average codeword length per source symbol is the entropy of the source. Extensive experimental results show that our EMHM can reduce the code length of entropy coding, such as Huffman, Shannon, and arithmetic coding, by over 20% while preserving subjective and objective image quality very well. Moreover, the performance of some classic lossy image compression techniques, such as the Joint Photographic Experts Group (JPEG), JPEG2000, and Better Portable Graphics (BPG), can be improved by preprocessing images with our EMHM.
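A tiny sketch of the entropy argument with an invented histogram: merging grayscale bins can only decrease (or preserve) the image entropy, which is what makes a greedy entropy-minimizing merge well defined.

```python
import math

def entropy(hist):
    total = sum(hist)
    return -sum(n / total * math.log2(n / total) for n in hist if n)

hist = [120, 5, 300, 8, 200]                 # pixel counts per grayscale (invented)
merged = [120 + 5, 300 + 8, 200]             # merge each sparse bin into a neighbor
print(entropy(hist), entropy(merged))        # merging never increases entropy
```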

19.
Video transmission over wireless or wired networks requires an error-resilience mechanism, since compressed video bitstreams are sensitive to transmission errors owing to the use of predictive coding and variable length coding. This paper investigates the performance of a simple, low-complexity error-resilient coding scheme that combines source and channel coding to protect the compressed bitstream of the wavelet-based Dirac video codec over a packet-erasure channel. By partitioning the wavelet transform coefficients of the motion-compensated residual frame into groups and independently processing each group with arithmetic and forward error correction (FEC) coding, Dirac achieves robustness to transmission errors, with video quality that degrades gracefully over a range of packet loss rates up to 30% when compared with conventional FEC-only methods. Simulation results also show that the proposed scheme with multiple partitions can achieve up to 10 dB PSNR gain over the existing un-partitioned format. The paper also compares the error-resilience performance of the proposed scheme with H.264 over the packet-erasure channel.
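A minimal sketch of the partitioning step, assuming contiguous equal-size groups (the paper's exact grouping may differ); each group would then be arithmetic-coded and FEC-protected independently.

```python
def partition(coeffs, n_groups=4):
    k = -(-len(coeffs) // n_groups)          # ceil(len / n_groups): group size
    return [coeffs[i:i + k] for i in range(0, len(coeffs), k)]

groups = partition(list(range(10)), n_groups=3)
print(groups)    # each group is coded and FEC-protected on its own, so an
                 # uncorrectable error stays confined to a single group
```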

20.
刘军清, 李天昊, 《通信学报》 (Journal on Communications), 2007, 28(9): 112-118
This paper studies adaptive joint source-channel coding and proposes a new joint source-channel coding/decoding system based on error-correcting arithmetic codes. At the encoder, the system embeds a forbidden symbol in the arithmetic code to realize integrated source-channel coding: using a Markov source model, it adapts to the channel by adjusting the probability of the forbidden symbol, and hence the coding rate, according to the channel state information. At the decoder, a MAP-based decoding metric is derived, and an improved stack sequence-estimation algorithm based on this metric is proposed. Unlike traditional channel-adaptive coding algorithms, the proposed algorithm needs to adjust only one parameter, the forbidden symbol, and can in theory achieve a continuously variable coding rate. Experimental results show that, compared with the classic Grangetto joint coding system and with separate source-channel coding, the proposed system achieves clearly improved performance gains.
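A sketch of the single-knob rate adaptation the abstract describes: a forbidden symbol of probability gamma shrinks every coding interval by a factor (1 - gamma), adding about -log2(1 - gamma) bits of redundancy per source symbol, so gamma can be tuned continuously to the channel state. The SNR-to-redundancy schedule below is purely an assumption for illustration.

```python
import math

def gamma_for_redundancy(r_bits):
    # invert r = -log2(1 - gamma) to hit a target redundancy in bits/symbol
    return 1.0 - 2.0 ** (-r_bits)

# hypothetical channel-state -> target-redundancy schedule
for snr_db, r in [(10, 0.05), (5, 0.2), (0, 0.5)]:
    g = gamma_for_redundancy(r)
    print(f"SNR {snr_db:2d} dB -> gamma = {g:.3f} ({r} bits/symbol added)")
```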
