Similar Documents — 20 records found
1.
Discusses various aspects of transform coding, including: source coding, constrained source coding, the standard theoretical model for transform coding, entropy codes, Huffman codes, quantizers, uniform quantization, bit allocation, optimal transforms, transform visualization, partition cell shapes, autoregressive sources, transform optimization, synthesis transform optimization, orthogonality and independence, and departures from the standard model.
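As a concrete illustration of two ingredients listed above — uniform quantization and entropy coding — the following toy sketch (my own example, not from the tutorial) quantizes a memoryless Gaussian source and compares empirical rate and distortion against the classical high-rate approximation MSE ≈ Δ²/12:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)            # memoryless Gaussian source

def uniform_quantize(x, step):
    """Midtread uniform quantizer: round to the nearest multiple of step."""
    return np.round(x / step).astype(int)

def empirical_entropy(symbols):
    """First-order entropy of the quantizer output, in bits/sample."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

step = 0.5
q = uniform_quantize(x, step)
rate = empirical_entropy(q)             # bits/sample needed by an ideal entropy code
mse = np.mean((x - q * step) ** 2)      # distortion; high-rate theory says ≈ step**2/12
```

For this step size the measured distortion lands very close to Δ²/12, and the measured rate close to the Gaussian differential entropy minus log2 Δ — the standard high-rate relations the tutorial's model builds on.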

2.
A bursty multiple-access communication channel with constrained total system bandwidth, total average power, and message error rates is considered. A stochastic model for the number of active transmitters is developed. Four schemes for the dynamic assignment of power and coding rate to active transmitters are considered and compared under the expected burst system time criterion. Necessary and sufficient conditions for system operation are given, and all schemes are shown to have the same saturation behavior. Adaptive coding rates are shown to enjoy substantial advantages over fixed coding; adaptive power assignment does not offer advantages over fixed power assignment.

3.
曹阳, 任发韬, 彭小峰, 张勋, 陈果. 《红外与激光工程》 (Infrared and Laser Engineering), 2018, 47(11): 1122003.
The LT code is a rateless linear block code with strong channel adaptability and low complexity. This paper proposes a coding scheme that concatenates a CRC code with an LT code (the CRC-LT code). By adjusting the decoding overhead, the CRC-LT code guarantees the availability of the free-space optical (FSO) system while preserving the coding gain the LT code achieves in the FSO system. An analytical model is established to derive the data recovery rate under given channel conditions. Finally, the encoding and decoding of the CRC-LT code are simulated under the Gamma-Gamma channel model, giving the relationship of the data recovery rate to the channel conditions, signal-to-noise ratio, and decoding overhead, as well as the bit error rate of the FSO system. Simulation results show that, at the cost of some decoding overhead, the CRC-LT code achieves a higher coding gain and effectively guarantees the availability of the FSO system.
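To illustrate the LT component alone (a minimal toy sketch with an ideal-soliton degree distribution and a peeling decoder; the parameters and byte-valued symbols are my assumptions, and this is not the paper's CRC-LT scheme):

```python
import random

def ideal_soliton(k):
    """Ideal soliton degree distribution over degrees 1..k."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source, rng):
    """One rateless encoded symbol: XOR of a random subset of source symbols."""
    k = len(source)
    d = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    idx = rng.sample(range(k), d)
    val = 0
    for i in idx:
        val ^= source[i]
    return set(idx), val

def lt_decode(symbols, k):
    """Peeling decoder: resolve degree-1 equations, substitute, repeat."""
    recovered = [None] * k
    eqs = [[set(s), v] for s, v in symbols]
    changed = True
    while changed:
        changed = False
        for s, v in eqs:
            if len(s) == 1:
                i = next(iter(s))
                if recovered[i] is None:
                    recovered[i] = v
                    changed = True
        for eq in eqs:                      # peel known symbols out of equations
            for i in list(eq[0]):
                if len(eq[0]) > 1 and recovered[i] is not None:
                    eq[0].discard(i)
                    eq[1] ^= recovered[i]
    return recovered

rng = random.Random(1)
k = 16
source = [rng.randrange(256) for _ in range(k)]
symbols = []
while True:                                 # keep collecting symbols until decodable
    symbols.append(lt_encode_symbol(source, rng))
    recovered = lt_decode(symbols, k)
    if None not in recovered:
        break
```

The ratio `len(symbols) / k` is the decoding overhead the abstract refers to: the CRC-LT scheme trades a larger overhead for higher coding gain.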

4.
The concept of adapted waveform analysis using a best-basis selection out of a predefined library of wavelet packet (WP) bases allows an efficient image representation for the purpose of compression. Image coding methods based on the best-basis WP representation have shown significant coding gains for some image classes compared with methods using a fixed dyadic structured wavelet basis, at the expense, however, of considerably higher computational complexity. A modification of the best-basis method, the so-called complexity constrained best-basis algorithm (CCBB), is proposed which parameterises the complexity gap between the fast (standard) wavelet transform and the best wavelet packet basis of a maximal WP library. This new approach allows a 'suboptimal' best basis to be found with respect to a given budget of computational complexity or, in other words, it offers an instrument to control the trade-off between compression speed and coding efficiency. Experimental results are presented for image coding applications showing a highly nonlinear relationship between the rate-distortion performance and the computational complexity, in such a way that a relatively small increase in complexity with respect to the standard wavelet basis results in a relatively high rate-distortion gain.
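The underlying best-basis search can be sketched as a recursive prune of the wavelet-packet tree (a generic Coifman-Wickerhauser-style search with Haar filters and a per-node entropy cost — my simplification; the CCBB algorithm of the abstract additionally caps the number of filtering operations, which is not modeled here):

```python
import numpy as np

def haar_split(x):
    """One orthonormal Haar analysis step: sum and difference channels."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def entropy_cost(x):
    """Entropy-style sparsity cost (normalized per node, a simplification
    of the strictly additive Coifman-Wickerhauser cost)."""
    e = x ** 2
    p = e / (e.sum() + 1e-300) + 1e-300
    return float(-(p * np.log(p)).sum())

def best_basis(x, depth):
    """Keep a node if it is cheaper than the best bases of its children."""
    if depth == 0 or len(x) < 2:
        return entropy_cost(x), [x]
    a, d = haar_split(x)
    ca, la = best_basis(a, depth - 1)
    cd, ld = best_basis(d, depth - 1)
    c_here = entropy_cost(x)
    if ca + cd < c_here:
        return ca + cd, la + ld
    return c_here, [x]

rng = np.random.default_rng(0)
x = np.sign(np.sin(2 * np.pi * 7 * np.arange(64) / 64)) + 0.05 * rng.normal(size=64)
c, leaves = best_basis(x, depth=4)      # cost and leaf subbands of the chosen basis
```

Limiting `depth` (or charging each `haar_split` against a budget) is the simplest way to see the speed/efficiency trade-off the CCBB parameterises.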

5.
A universal representation for the perceptual weighted zerotree coding algorithm is developed, in which the perceptual weighted zerotree coding is decomposed into two separate parts, i.e. visual weighting and zerotree representation, which can be realised independently. Prior to zerotree processing, the extracted full-tree is weighted by using a visual weighting matrix. Any zerotree algorithm like EZW, SPIHT and zerotree space-frequency quantisation can be used to encode the weighted coefficients of the wavelet transform. In other words, any previous algorithm without perceptual weighting can be easily extended to form a new perceptual coder using the proposed framework. Several examples of visual weighting matrices are given to show the effect of the new method.

6.
In this paper, we present the design of directional lapped transforms for image coding. A lapped transform, which can be implemented by a prefilter followed by a discrete cosine transform (DCT), can be factorized into elementary operators. The corresponding directional lapped transform is generated by applying each elementary operator along a given direction. The proposed directional lapped transforms are not only nonredundant and perfectly reconstructed, but they can also provide a basis along an arbitrary direction. These properties, along with the advantages of lapped transforms, make the proposed transforms appealing for image coding. A block-based directional transform scheme is also presented and integrated into HD Photo, one of the state-of-the-art image coding systems, to verify the effectiveness of the proposed transforms.

7.
New methods are presented to protect maximum runlength-limited sequences against random and burst errors and to avoid error propagation. The methods employ parallel conversion techniques and enumerative coding algorithms that transform binary user information into constrained codewords. The new schemes have a low complexity and are very efficient. The approach can be used for modulation coding in recording systems and for synchronization and line coding in communication systems. The schemes enable the usage of high-rate constrained codes, as error control can be provided with similar capabilities as for unconstrained sequences.
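The enumerative-coding idea — mapping user data to an index into the lexicographically ordered set of constrained words — can be sketched for a simple constraint (binary strings with no two adjacent 1s, counted by Fibonacci numbers; a generic textbook example, not the paper's specific runlength constraint):

```python
from functools import lru_cache

@lru_cache(None)
def count(n):
    """Number of length-n binary strings containing no '11' substring."""
    if n <= 1:
        return n + 1            # count(0) = 1, count(1) = 2
    return count(n - 1) + count(n - 2)

def unrank(i, n):
    """Enumerative decoding: index i in [0, count(n)) -> constrained word."""
    bits = []
    while n > 0:
        if i < count(n - 1):    # words starting with 0 come first
            bits.append(0)
            n -= 1
        else:                   # a 1 must be followed by a 0 (if room)
            i -= count(n - 1)
            bits.append(1)
            if n >= 2:
                bits.append(0)
            n -= 2
    return bits

def rank(bits):
    """Enumerative encoding: constrained word -> its lexicographic index."""
    i, n, j = 0, len(bits), 0
    while j < n:
        if bits[j] == 1:
            i += count(n - j - 1)
            j += 2
        else:
            j += 1
    return i
```

Since `count(n)` grows like the golden ratio to the n-th power, this mapping achieves rate close to the constraint's capacity (≈ 0.694 bits/symbol) while the index arithmetic, not a lookup table, does all the work — the property that keeps enumerative schemes low-complexity.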

8.
Transform coding with integer-to-integer transforms
A new interpretation of transform coding is developed that downplays quantization and emphasizes entropy coding, allowing a comparison of entropy coding methods with different memory requirements. With conventional transform coding, based on computing Karhunen-Loeve transform coefficients and then quantizing them, vector entropy coding can be replaced by scalar entropy coding without an increase in rate. Thus the transform coding advantage is a reduction in memory requirements for entropy coding. This paper develops a transform coding technique where the source samples are first scalar-quantized and then transformed with an integer-to-integer approximation to a nonorthogonal linear transform. Among the possible advantages is to reduce the memory requirement further than conventional transform coding by using a single common scalar entropy codebook for all components. The analysis shows that for high-rate coding of a Gaussian source, this reduction in memory requirements comes without any degradation of rate-distortion performance.
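A standard way to build an integer-to-integer approximation of a linear transform is lifting with rounded steps; each shear is exactly invertible no matter how the rounding goes. A generic sketch for a plane rotation (my illustration of the technique, not the paper's specific transform):

```python
import math
import random

def lift_rotate(x, y, theta):
    """Integer-to-integer approximation of rotation by theta via three
    lifting (shear) steps with rounding."""
    p = (math.cos(theta) - 1) / math.sin(theta)
    u = math.sin(theta)
    x = x + round(p * y)
    y = y + round(u * x)
    x = x + round(p * y)
    return x, y

def lift_rotate_inv(x, y, theta):
    """Exact inverse: undo the shears in reverse order with the same rounding."""
    p = (math.cos(theta) - 1) / math.sin(theta)
    u = math.sin(theta)
    x = x - round(p * y)
    y = y - round(u * x)
    x = x - round(p * y)
    return x, y

rng = random.Random(0)
theta = math.pi / 3
pairs = [(rng.randint(-100, 100), rng.randint(-100, 100)) for _ in range(50)]
ok = all(lift_rotate_inv(*lift_rotate(a, b, theta), theta) == (a, b)
         for a, b in pairs)
```

Because the same rounded value is subtracted that was added, the map is a bijection on integer pairs — the property that lets quantization precede the transform without any reconstruction mismatch.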

9.
A common theory of lapped orthogonal transforms (LOTs) and critically sampled filter banks, called L into N coding (LINC), is presented. The theory includes a unified analysis of both coding methods and identity relations between the transform, inverse transform, analysis filter bank, and synthesis filter bank. A design procedure for LINC analysis/synthesis systems, which satisfy the conditions for perfect reconstruction, is developed. The common LINC theory is used to define an ideal LINC system which is used, together with the power spectral density of the input signal, to calculate theoretical bounds for the coding gain. A generalized overlapping block transform (OBT) with time domain aliasing cancellation (TDAC) is used to approximate the ideal LINC. A generalization of the OBT includes multiple block overlap and additional windowing. A recursive design procedure for windows of arbitrary lengths is presented. The coding gain of the generalized OBT is higher than that of the Karhunen-Loeve transform (KLT) and close to the theoretical bounds for LINC. In the case of image coding, the generalized OBT reduces the blocking effects when compared with the DCT.

10.
We address the well-known problem of determining the capacity of constrained coding systems. While the one-dimensional case is well understood to the extent that there are techniques for rigorously deriving the exact capacity, in contrast, computing the exact capacity of a two-dimensional constrained coding system is still an elusive research challenge. The only known exception in the two-dimensional case is an exact (however, not rigorous) solution to the -run-length limited (RLL) system on the hexagonal lattice. Furthermore, only exponential-time algorithms are known for the related problem of counting the exact number of constrained two-dimensional information arrays. We present the first known rigorous technique that yields an exact capacity of a two-dimensional constrained coding system. In addition, we devise an efficient (polynomial time) algorithm for counting the exact number of constrained arrays of any given size. Our approach is a composition of a number of ideas and techniques: describing the capacity problem as a solution to a counting problem in networks of relations, graph-theoretic tools originally developed in the field of statistical mechanics, techniques for efficiently simulating quantum circuits, as well as ideas from the theory related to the spectral distribution of Toeplitz matrices. Using our technique, we derive a closed-form solution to the capacity related to the Path-Cover constraint in a two-dimensional triangular array (the resulting calculated capacity is ). Path-Cover is a generalization of the well known one-dimensional -RLL constraint for which the capacity is known to be .
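For contrast with the hard 2-D problem, the well-understood 1-D capacity is computed directly from the spectral radius of the constraint's transfer matrix (Shannon's method; a generic (d,k)-RLL sketch, not the paper's 2-D technique):

```python
import numpy as np

def rll_capacity(d, k):
    """Capacity in bits/symbol of the one-dimensional (d,k)-RLL constraint
    (runs of 0s between consecutive 1s have length in [d, k]), computed as
    log2 of the largest eigenvalue of the state-transition matrix."""
    n = k + 1                      # state = number of 0s since the last 1
    A = np.zeros((n, n))
    for s in range(n):
        if s < k:
            A[s, s + 1] = 1        # emit a 0 (run may still grow)
        if s >= d:
            A[s, 0] = 1            # emit a 1 (run is long enough)
    return float(np.log2(max(abs(np.linalg.eigvals(A)))))

cap_01 = rll_capacity(0, 1)        # golden-ratio constraint, ≈ 0.6942
cap_13 = rll_capacity(1, 3)        # classic (1,3)-RLL, ≈ 0.5515
```

The (0,1) case reproduces log2 of the golden ratio, the capacity of the no-two-adjacent-like-symbols constraint; no comparably simple machinery exists in two dimensions, which is what makes the paper's rigorous 2-D result notable.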

11.
In this paper, we establish a probabilistic framework for adaptive transform coding that leads to a generalized Lloyd type algorithm for transform coder design. Transform coders are often constructed by concatenating an ad hoc choice of transform with suboptimal bit allocation and quantizer design. Instead, we start from a probabilistic latent variable model in the form of a mixture of constrained Gaussian mixtures. From this model, we derive a transform coder design algorithm, which integrates optimization of all transform coder parameters. An essential part of this algorithm is our introduction of a new transform basis — the coding optimal transform — which, unlike commonly used transforms, minimizes compression distortion. Adaptive transform coders can be effective for compressing databases of related imagery since the high overhead associated with these coders can be amortized over the entire database. For this work, we performed compression experiments on a database of synthetic aperture radar images. Our results show that adaptive coders improve compressed signal-to-noise ratio (SNR) by approximately 0.5 dB compared with global coders. Coders that incorporated the coding optimal transform had the best SNRs on the images used to develop the coder. However, coders that incorporated the discrete cosine transform generalized better to new images.

12.
A Hi-Fi audio codec with an improved adaptive transform coding (ATC) algorithm is presented using digital signal processors (DSPs). An audio signal with a 20 kHz bandwidth sampled at 48 kHz is coded at a rate of 128 kb/s. The algorithm utilizes adaptive block size selection, which is effective for preecho suppression. A modified discrete cosine transform (MDCT) with a simple window set is employed to reduce block boundary noise without decreasing the performance of transform coding. In addition, a fast MDCT calculation algorithm, based on a fast Fourier transform, is adopted. Weighted bit allocation is employed to quantize the transformed coefficients. The codec was realized by a multiprocessor system composed of newly developed DSP boards. Subjective tests with the codec show that the coding quality is comparable to that of compact disc signals.
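The MDCT's boundary-noise behavior comes from time-domain aliasing cancellation: each block alone does not reconstruct, but 50%-overlapped blocks overlap-add to perfect reconstruction. A direct O(N²) sketch with the Princen-Bradley sine window (the codec in the abstract uses an FFT-based fast algorithm and its own window set; this is just the textbook definition):

```python
import numpy as np

def mdct(block, win):
    """Direct MDCT: 2N windowed time samples -> N coefficients."""
    N = len(block) // 2
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return C @ (win * block)

def imdct(coef, win):
    """Inverse MDCT with synthesis windowing; reconstruction only becomes
    exact after overlap-adding adjacent half-overlapped blocks (TDAC)."""
    N = len(coef)
    n = np.arange(2 * N)
    k = np.arange(N)[:, None]
    C = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return (2 / N) * win * (C.T @ coef)

N = 8
win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))  # Princen-Bradley sine window
rng = np.random.default_rng(0)
x = rng.normal(size=3 * N)
# two 50%-overlapping blocks; their IMDCTs overlap-add to recover the middle N samples
recon = imdct(mdct(x[:2 * N], win), win)[N:] + imdct(mdct(x[N:], win), win)[:N]
```

Only N coefficients are kept per N new samples (critical sampling), which is why the MDCT avoids the rate penalty of ordinary windowed overlapping transforms.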

13.
A codebook sharing technique, called constrained storage vector quantization (CSVQ), is introduced. This technique offers a convenient and optimal way of trading off performance against storage. The technique can be used in conjunction with tree-structured vector quantization (VQ) and other structured VQ techniques that alleviate the search complexity obstacle. The effectiveness of CSVQ is illustrated for coding transform coefficients of audio signals with multistage VQ.

14.
The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.

15.
The numerical techniques of transform image coding are well known in the image bandwidth compression literature. This concise paper presents a new transform method in which the singular values and singular vectors of an image are computed and transmitted instead of transform coefficients. The singular value decomposition (SVD) method is known to be the deterministically optimal transform for energy compaction [2]. A systems implementation is hypothesized, and a variety of coding strategies is developed. Statistical properties of the SVD are discussed, and a self-adaptive set of experimental results is presented. Imagery compressed to 1, 1.5, and 2.5 bits per pixel with less than 1.6, 1, and 1/3 percent mean-square error, respectively, is displayed. Finally, additional image coding scenarios are postulated for further consideration.
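The rank-truncation at the core of SVD coding is easy to demonstrate (a generic sketch on a synthetic stand-in image, not the paper's coder: transmit k singular triplets instead of the pixel array):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in "image": a smooth, nearly low-rank ramp plus mild noise
img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = img + 0.01 * rng.normal(size=img.shape)

U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 4                                   # transmit k singular values/vectors
approx = (U[:, :k] * s[:k]) @ Vt[:k]    # best rank-k approximation (Eckart-Young)

rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
stored = k * (2 * 64 + 1)               # numbers transmitted vs 64*64 raw pixels
```

By the Eckart-Young theorem the truncated SVD is the minimum-Frobenius-error rank-k representation, which is the "deterministically optimal energy compaction" property the abstract cites; the price is that the basis itself (the singular vectors) must be transmitted per image.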

16.
A new unitary transform called the slant transform, specifically designed for image coding, has been developed. The transformation possesses a discrete sawtoothlike basis vector which efficiently represents linear brightness variations along an image line. A fast computational algorithm has been found for the transformation. The slant transformation has been utilized in several transform image-coding systems for monochrome and color images. Computer simulation results indicate that good quality coding can be accomplished with about 1 to 2 bits/pixel for monochrome images and 2 to 3 bits/pixel for color images.
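The 4-point slant matrix makes the sawtooth-basis property concrete: applied to a linear brightness ramp, all the energy lands in the constant and slant rows (row ordering here is one common convention; treat it as illustrative):

```python
import numpy as np

a = 1 / np.sqrt(5)
# 4-point slant transform: row 1 is constant, row 2 the sawtooth (slant) vector
S4 = 0.5 * np.array([
    [1.0,     1.0,    1.0,     1.0],
    [3 * a,   a,     -a,     -3 * a],
    [1.0,    -1.0,   -1.0,     1.0],
    [a,      -3 * a,  3 * a,  -a],
])

ramp = np.array([0.0, 1.0, 2.0, 3.0])   # linear brightness variation along a line
y = S4 @ ramp                            # energy compacts into the first two rows
```

The matrix is orthonormal, and for the ramp the last two coefficients vanish exactly, so a scalar quantizer can spend its bits on just two coefficients — the compaction behavior that motivated the transform.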

17.
Adaptive vector transform quantization (AVTQ) as a coding system is discussed. The optimal bit assignment is derived based on vector quantization asymptotic theory for different PDFs (probability density functions) of the transform coefficients. Strategies for shaping the quantization noise spectrum and for adapting the bit assignment to the changes in the speech statistics are discussed. A good estimate of the efficiency of any coding system is given by the system coding gain over scalar PCM (pulse code modulation). Based on the optimal bit allocation, the coding gain of the vector transform quantization (VTQ) system operating on a stationary input signal is derived. The VTQ coding gain demonstrates a significant advantage of vector quantization over scalar quantization within the framework of transform coding. System simulation results are presented for a first-order Gauss-Markov process and for typical speech waveforms. The results of fixed and adaptive systems are compared for speech input. Also, the AVTQ results are compared to known scalar speech coding systems.

18.
We consider a feedback communication system in which the forward and feedback channels are disturbed by additive noise and constrained in average power. Two block coding schemes are proposed in which the signals are orthogonal waveforms. A finite number of forward and feedback transmissions per message is made. Information received over the feedback channel is used to reduce the expected value of forward signal energy on all iterations after the first. Similarly, the expected value of feedback signal energy is reduced on all iterations after the first. These schemes, which are modifications of a feedback coding scheme due to Kramer, obtain improved error performance over one-way coding at all rates up to the forward channel capacity, provided only that the feedback channel capacity is greater than the forward channel capacity. They require less feedback power than existing feedback coding schemes to achieve a given error performance.

19.
In transform-domain communication systems (TDCS), the conventional amplitude spectrum uses binary coding, which neither fully exploits the interference spectrum information nor allows flexible structural design. To address these shortcomings, a new multilevel amplitude-spectrum coding algorithm that minimizes the bit error rate is proposed. The algorithm theoretically derives the dynamic relation among the TDCS bit error rate, the amplitude spectrum, and the interference spectrum information; casts amplitude-spectrum coding as a constrained multidimensional optimization problem; builds an augmented objective function using a Lagrangian; and solves iteratively for the optimal multilevel amplitude spectrum. Simulation results show that the multilevel algorithm fully exploits the interference spectrum information without affecting the correlation properties of the basis function. Compared with conventional binary amplitude-spectrum coding, the anti-jamming performance improves by about 3 dB and the system bit error rate is effectively reduced; moreover, at a jamming-to-signal ratio of 35 dB, the bit-error-rate floor is noticeably alleviated, improving the system's anti-jamming performance.

20.
A parallel constrained coding scheme is considered where p-blocks of raw data are encoded simultaneously into q tracks such that the contents of each track belong to a given constraint S. It is shown that as q increases, there are parallel block-decodable encoders for S whose coding ratio p/q converges to the capacity of S. Examples are provided where parallel coding allows block-decodable encoders, while conventional coding, at the same rate, does not. Parallel encoders are then applied as building blocks in the construction of block-decodable encoders for certain families of two-dimensional constraints.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号