20 related publications found
1.
We propose a distortion-optimal rate allocation algorithm for robust transmission of embedded bitstreams over noisy channels. The algorithm is based on the backward application of a Viterbi-like algorithm to a search trellis, and can be applied to both the fixed and the variable channel packet length problems, referred to as FPP and VPP, respectively. For the VPP, the complexity of the algorithm is comparable to the well-known dynamic programming approach of Chande and Farvardin. For the FPP, where no low-complexity algorithm is known, the complexity of the proposed algorithm is O(N^2), where N is the number of transmitted packets.
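To make the optimization concrete, here is a generic statement of the fixed packet length problem (FPP) that such a trellis search addresses; the notation (L, m_n, P_n, D(·)) is illustrative and not taken from the paper. With N packets of L bytes each, packet n carrying m_n source bytes and L - m_n parity bytes, and an embedded source with operational distortion-rate function D(·), the expected end-to-end distortion is

E[D] = \sum_{n=0}^{N} P_n(m_1,\dots,m_N)\, D\!\left(\sum_{i=1}^{n} m_i\right),

where P_n is the probability that the decoder recovers exactly the first n packets. A Viterbi-like search runs over a trellis whose stages are the packets and whose states are cumulative decoded source bytes, selecting the allocation (m_1, ..., m_N) that minimizes E[D].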
2.
Wang Tao 《智能计算机与应用》2017,7(3)
Image compression is a crucial part of digital image processing and plays an irreplaceable role in image storage and transmission. Vector quantization is an important component of image compression technology: by transforming the pixels of an image effectively, it compresses the image efficiently and provides a reliable basis for image storage and transmission. This paper studies the application of rate-distortion models to image compression. By combining quantization distortion and coding rate, a relatively higher-quality image can be obtained at a given rate. Based on a study of rate allocation strategies, a rate-distortion model for a vector-quantized wavelet image coder is derived from the relationship between rate and distortion, and the model is combined with the Lagrangian extremum method to obtain the optimal rate allocation among the different subbands of the image. Experimental results show that, by exploiting the characteristics of the different subbands and applying the proposed rate-distortion model, reconstructed images of higher quality can be obtained; equivalently, a lower compression bit rate can be reached without degrading the reconstruction quality.
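As a point of reference, the classical Lagrangian subband bit-allocation problem such a coder builds on can be written as follows (generic notation, not the paper's own):

\min_{\{R_i\}} \sum_i w_i D_i(R_i) \quad \text{subject to} \quad \sum_i w_i R_i \le R_T,

which is solved through the unconstrained cost J(\lambda) = \sum_i w_i \left[ D_i(R_i) + \lambda R_i \right]. At the optimum every subband operates where \partial D_i / \partial R_i = -\lambda, and \lambda is adjusted until the total rate meets the budget R_T. Here D_i(R_i) is the rate-distortion model of subband i and w_i its relative size.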
3.
Chii-Jen Chung Sin-Horng Chen 《IEEE Transactions on Communications》1994,42(6):2215-2218
A variable frame rate (VFR) LPC vocoder using optimal interpolation is presented in this paper. In the encoder, some representative frames of an utterance are selected for transmission. In the decoder, the LPC parameters of all untransmitted frames are restored by optimal interpolation. Simulation results show that this coding scheme outperforms the conventional VFR vocoder using linear interpolation. By incorporating contour quantization of gain and pitch information, a low variable-rate LPC vocoder is realized. An informal listening test shows that highly intelligible reconstructed speech is obtained at an average data rate of 300 bps.
4.
《Electronics Letters》2008,44(25):1458-1459
Rate distortion optimisation (RDO), which is the best performing mode decision method for H.264, does not consider dependence between neighbouring blocks, because calculating rate distortion (RD) costs for all possible mode sets requires too many computations. However, in H.264 intra coding, the RD performance of the current block is affected by reference pixels in neighbouring blocks. Thus, considering dependence between neighbouring blocks can enhance RD performance in mode decision. Proposed is an adaptive RDO for H.264 intra coding. To consider block dependence with only a few more computations, a mode decision criterion is proposed for the I4×4 mode using the partial boundary sum of the squared differences. Experimental results show that the proposed method improves RD performance compared to RDO.
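For context, the conventional RDO mode decision that the letter starts from selects, for each block, the intra mode m minimizing the standard Lagrangian cost (standard H.264 formulation, shown here for reference):

J(m) = \mathrm{SSD}\big(s, c(m)\big) + \lambda_{\text{mode}} \cdot R(m),

where s is the original block, c(m) its reconstruction under mode m, R(m) the bits needed to code the mode and residual, and \lambda_{\text{mode}} the Lagrangian multiplier derived from the quantization parameter. The adaptive criterion proposed in the letter additionally evaluates a partial boundary SSD over the pixels that will serve as references for neighbouring blocks; the exact form is given in the letter.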
5.
《Signal Processing: Image Communication》2009,24(5):368-383
The problem of enabling robust video transmission over lossy networks has become increasingly important because of the growing interest in video delivery over unreliable channels such as wireless networks. The more the coding process relies on an intensive use of prediction to improve the coding gain, the more the reconstructed sequence proves to be sensitive to information losses. Consequently, it is necessary to introduce some redundant data in order to increase the robustness of the coded bit stream. A possible solution is to fill a matrix structure with RTP packets and apply a Forward Error Correction (FEC) code to its rows. However, the matrix size and the chosen FEC code affect the performance of the coding system. The paper proposes a novel adaptation technique that tunes the amount of redundant information included in the packet stream and differs from previously proposed solutions in that it relies on the percentage of null quantized transform coefficients in place of the activity or the Mean Square Error (MSE). This strategy is then integrated in a joint source-channel rate allocation algorithm that shares the available bits between the H.264/AVC coder and the channel coder according to the significance of the frame in the decoding process. Experimental results show that the presented approach significantly improves the quality of the reconstructed sequences at the decoder with respect to activity-based strategies and requires a low computational complexity.
6.
Aisheng Yang Huanqiang Zeng Jing Chen Jianqing Zhu Canhui Cai 《Multidimensional Systems and Signal Processing》2017,28(4):1249-1266
With the advances in understanding perceptual properties of the human visual system, perceptual video coding, which aims to incorporate human perceptual mechanisms into video coding to maximize the perceptual coding efficiency, has become an essential research topic. Since the latest video coding standard, High Efficiency Video Coding (HEVC), does not fully consider the perceptual characteristics of the input video, a perceptual feature guided rate distortion optimization (RDO) method is presented in this paper to improve its perceptual coding performance. In the proposed method, for each coding tree unit, the spatial perceptual feature (i.e., gradient magnitude ratio) and the temporal perceptual feature (i.e., gradient magnitude similarity deviation ratio) are extracted by considering the spatial and temporal perceptual correlations. These perceptual features are then utilized to guide the RDO process by perceptually adjusting the corresponding Lagrangian multiplier. Extensive simulations incorporating the proposed method into HEVC demonstrate that the proposed approach significantly improves the perceptual coding performance and obtains better visual quality of the reconstructed video compared with the original RDO in HEVC.
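A common way to realize such perceptually guided RDO, consistent with the abstract above but written in illustrative notation, is to weight the Lagrangian multiplier per coding tree unit:

\lambda_{\text{CTU}} = w_{\text{CTU}} \cdot \lambda, \qquad w_{\text{CTU}} = f\big(r_{G},\, r_{GMSD}\big),

where r_G is the spatial feature (gradient magnitude ratio), r_{GMSD} the temporal feature (gradient magnitude similarity deviation ratio), and f(·) a mapping normalized so that the average weight over a frame is close to one and the overall bit budget is preserved. Perceptually less sensitive CTUs receive a larger \lambda (coarser coding) and sensitive ones a smaller \lambda.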
7.
Yonghong Hou Pichao Wang Wei Xiang Zhimin Gao Chunping Hou 《Signal, Image and Video Processing》2015,9(4):875-884
Rate control algorithms (RCAs) aim to achieve the best visual quality under a minimum bit rate and a limited buffer size. A self-parameter-tuning fuzzy-PID controller is proposed to reduce the deviation between the target buffer level and the current buffer fullness. Fuzzy logic is used to tune each parameter of the proportional-integral-derivative controller by selecting appropriate fuzzy rules through simulation in H.264/Advanced Video Coding (AVC). To control the quality fluctuation between consecutive frames, a quality controller is adopted. The proposed RCA has been implemented in an H.264/AVC video codec, and our experimental results show that the proposed algorithm smoothly tracks the target bits while enabling better buffer control and visual quality.
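The control loop behind such a scheme can be sketched as follows. This is a minimal discrete PID example with fixed, hypothetical gains and a hypothetical bit-budget mapping; in the paper the three gains are tuned on-line by fuzzy rules, which is not reproduced here.

# Minimal discrete PID sketch for steering encoder buffer fullness toward a
# target level. kp, ki, kd and the bit-budget mapping below are hypothetical.
class PIDController:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured, dt=1.0):
        error = target - measured          # positive if the buffer is below target
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PIDController(kp=0.5, ki=0.05, kd=0.1)
target_fullness = 0.5      # desired buffer occupancy (fraction of buffer size)
current_fullness = 0.8     # occupancy reported by the encoder after the last frame
base_budget = 40000.0      # nominal bits per frame at the target bit rate

correction = pid.update(target_fullness, current_fullness)
frame_bit_budget = max(0.0, base_budget * (1.0 + correction))
print(frame_bit_budget)    # a fuller buffer yields a smaller budget for the next frame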
8.
A modified SPIHT algorithm for image coding with a joint MSE and classification distortion measure
The set partitioning in hierarchical trees (SPIHT) algorithm is an efficient wavelet-based progressive image-compression technique, designed to minimize the mean-squared error (MSE) between the original and decoded imagery. However, the MSE-based distortion measure is not in general well correlated with image-recognition quality, especially at low bit rates. Specifically, low-amplitude wavelet coefficients that may be important for classification are given low priority by conventional SPIHT. In this paper, we use the kernel matching pursuits (KMP) method to autonomously estimate the importance of each wavelet subband for distinguishing between different textures, with textural segmentation first performed via a hidden Markov tree. Based on subband importance determined via KMP, we scale the wavelet coefficients prior to SPIHT coding, with the goal of minimizing a Lagrangian distortion based jointly on the MSE and classification error. For comparison we consider Bayes tree-structured vector quantization (B-TSVQ), also designed to obtain a tradeoff between MSE and classification error. The performances of the original SPIHT, the modified SPIHT, and B-TSVQ are compared.
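In generic terms (the exact weighting used in the paper may differ), the coefficient scaling and the joint distortion the abstract refers to can be written as

\tilde{c}_{j,k} = w_j \, c_{j,k}, \qquad D_{\text{joint}} = (1-\alpha)\, D_{\text{MSE}} + \alpha\, D_{\text{class}},

where c_{j,k} is a wavelet coefficient in subband j, w_j the importance of that subband for classification as estimated by kernel matching pursuits, and \alpha \in [0,1] trades mean-squared error against classification error. Scaling by w_j before SPIHT coding raises the priority of low-amplitude but class-discriminative coefficients in the bit-plane ordering.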
9.
Yeung R.W. 《IEEE Transactions on Information Theory》1995,41(2):412-422
In a diversity coding system, an information source is encoded by a number of encoders. There are a number of decoders, each of which can access a certain subset of the encoders. We study a diversity coding problem in which there are two levels of decoders. The reconstructions of the source by decoders within the same level are identical, and are subject to the same distortion criterion. Our results imply a principle of superposition when the source consists of two independent data streams. Practical codes achieving zero error can easily be constructed for this special case. A class of open problems on this topic is also suggested.
10.
In practice, the Huffman code suffers from two problems: prior knowledge of the probability distribution of the data source to be encoded is necessary, and errors propagate through the encoded data. The first problem can be solved by adaptive coding, while the second can be partly solved by splitting the data into segments. However, the adaptive Huffman code performs poorly on relatively small segments because of its slow adaptability. A fast-adaptive coding algorithm is given which tracks the local data statistics more quickly and thus yields better compression efficiency.
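A minimal sketch of segment-wise adaptive Huffman coding is given below: the code table is rebuilt from running symbol counts before each segment, so no probability table needs to be transmitted and an error is confined to its segment. The segment size and the count-update rule are illustrative; the fast-adaptation rule of the paper (quicker tracking of local statistics) is not reproduced.

import heapq

def build_huffman_code(counts):
    # Return {symbol: bitstring} for the given positive symbol counts.
    heap = [[cnt, [sym, ""]] for sym, cnt in sorted(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def adaptive_encode(data: bytes, segment_size: int = 256):
    counts = {b: 1 for b in range(256)}    # start uniform so every byte has a code
    segments = []
    for start in range(0, len(data), segment_size):
        segment = data[start:start + segment_size]
        code = build_huffman_code(counts)  # built only from past statistics
        segments.append("".join(code[b] for b in segment))
        for b in segment:                  # update counts after the segment
            counts[b] += 1
    return segments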
11.
《现代电子技术》2019,(11):32-35
The 2-D PV coding algorithm is a speech coding algorithm that combines linear predictive coding, 2-D vector quantization based on an SOM neural network, and Huffman coding. To further reduce the coded bit rate, and hence the storage space required for speech signals, while keeping the quality of the decoded speech good, a higher-dimensional PV coding algorithm is proposed that increases the vector quantization dimension used in 2-D PV coding. Speech coding and decoding experiments with the 2-D, 4-D and 8-D PV algorithms were implemented in Matlab. The experimental results show that, with good decoded speech quality maintained, increasing the quantization vector dimension of the 2-D PV coding algorithm reduces the bit rate; the 8-D PV coding algorithm achieves the lowest rate, 5.94 kb/s, which is lower than the 32 kb/s of the ADPCM-based waveform coding standard G.721 (the lowest rate among waveform coders) and even lower than the 16 kb/s of the LD-CELP-based hybrid coding standard G.728. The proposed coding algorithm therefore has considerable research value and good application prospects for speech compression coding.
12.
F. Pan Z. G. Li K. P. Lim D. J. Wu R. S. Yu 《Multidimensional Systems and Signal Processing》2007,18(1):5-15
With the recent development of third-generation communication technologies, low-power video coding systems (such as PDAs, mobile phones, or systems on chip) have found wide application, for example capturing live video on a PDA and sharing it with friends. However, video coding in a low-power system faces two major hurdles: (1) it must meet rigorous constraints on the available memory and computational capacity; (2) the computational power allocated to video coding may vary drastically (in bursts). In this paper, a new adaptive rate control algorithm is proposed for low-power video coding systems. This adaptive rate control scheme takes into account the time constraint of a low-power system, and its bit allocation depends not only on the available data bits but, more importantly, on the available coding time. Experimental results show that, compared to the existing rate control scheme, the new algorithm can always achieve the maximum frame rate, maximize the utilization of the available bandwidth and computing power, increase the average PSNR, and improve the subjective perceptual quality of the reconstructed video.
13.
View-dependent 3-D mesh coding by rate allocation with the image rendering-based distortion measures
Low-bandwidth transmission of synthetic digital content to the end user device in the form of a scene of 3-D meshes requires efficient compression of the mesh geometry. For applications in which the meshes are observed from a single viewpoint, this work explores the use of image rendering-based distortion measures in rate allocation to their surface regions for view-dependent mesh geometry compression. It is experimentally demonstrated that the image rendering-based distortion measures yield far superior performance (the quality of the rendered image of the reconstructed scene from a viewpoint at a given rate) in optimal rate allocation to other previously proposed distortion measures. A fast rate allocation method is also proposed for use with the image rendering-based measures for real-time or interactive applications. Not only does this method have significantly lower complexity than the optimal rate allocation method, since it renders the images of the reconstructed meshes at only judiciously selected rate-distortion operating points, but its coding performance also remains just as competitive. Further complexity reduction in rate allocation, through rendering of only the coded regions of the meshes, is also investigated.
14.
An efficient coding algorithm for the compression of ECG signals using the wavelet transform
Rajoub BA 《IEEE Transactions on Biomedical Engineering》2002,49(4):355-362
A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed, and the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant and a binary zero if it is insignificant. Compression is achieved by 1) using a variable length code based on run length encoding to compress the significance map and 2) using a direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated by compressing and decompressing the test signals. The proposed algorithm is compared with direct and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
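The main stages of such a coder can be sketched as follows. This is an illustration under assumed parameter choices (wavelet, decomposition level, per-group energy packing efficiency), not the paper's implementation; the preprocessing, run-length and binary packing stages are only indicated in comments.

import numpy as np
import pywt

def threshold_by_epe(coeffs, epe=0.99):
    # Keep the largest-magnitude coefficients holding `epe` of the group's energy.
    flat = np.abs(coeffs).ravel()
    order = np.argsort(flat)[::-1]
    cum_energy = np.cumsum(flat[order] ** 2)
    k = int(np.searchsorted(cum_energy, epe * cum_energy[-1])) + 1
    thr = flat[order[k - 1]]
    return np.where(np.abs(coeffs) >= thr, coeffs, 0.0)

def compress_ecg(signal, wavelet="db4", level=5, epe=0.99):
    coeffs = pywt.wavedec(signal, wavelet, level=level)     # DWT of the preprocessed signal
    kept = [threshold_by_epe(c, epe) for c in coeffs]
    significance_map = [np.abs(c) > 0 for c in kept]        # 1 = significant, 0 = insignificant
    significant_values = [c[m] for c, m in zip(kept, significance_map)]
    # The significance map would then be run-length encoded and the significant
    # values written in a direct binary representation; decoding inverts the map
    # and reconstructs the signal with pywt.waverec.
    return significance_map, significant_values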
15.
16.
ECG coding by wavelet-based linear prediction
This paper presents a novel coding scheme for the electrocardiogram (ECG). Following beat delineation, the periods of the beats are normalized by multirate processing. After amplitude normalization, a discrete wavelet transform is applied to each beat. Due to the period and amplitude normalization, the wavelet transform coefficients bear a high correlation across beats at identical locations. To increase the compression ratio, the residual sequence obtained after linear prediction of the significant wavelet coefficients is transmitted to the decoder. The difference between the actual period and the mean beat period, and that between the actual scale factor and the average amplitude scale factor, are also transmitted for each beat. At the decoder, the inverse wavelet transform is computed from the reconstructed wavelet transform coefficients. The original amplitude and period of each beat are then recovered. The approximation achieved, at an average rate of 180 b/s, is of high quality. The authors have evaluated the normalized maximum amplitude error and its position in each cycle, in addition to the normalized root mean square error. The significant feature of the proposed technique is that, while the error is nearly uniform throughout the cycle, the diagnostically crucial QRS region is kept free of maximal reconstruction error.
17.
《IEEE Transactions on Information Theory》1980,26(5):518-521
The output of a source, the first of the two arguments of a distortion function, is seen by an encoder through a noisy channel. A decoder sees the encoder's signal through the usual communication channel. The second argument of the distortion function is obtained from the decoder's output via another noisy channel. The Dobrushin-Tsybakov reduction of this problem to a direct one is shown to follow at once from a "disconnection principle" for conditional expectations. The same principle applies to more general situations such as i) dependence between the noise variables acting in the input and output channels and ii) side information available only at the decoder.
18.
Adaptive image coding with perceptual distortion control
This paper presents a discrete cosine transform (DCT)-based locally adaptive perceptual image coder, which discriminates between image components based on their perceptual relevance for achieving increased performance in terms of quality and bit rate. The new coder uses a locally adaptive perceptual quantization scheme based on a tractable perceptual distortion metric. Our strategy is to exploit human visual masking properties by deriving visual masking thresholds in a locally adaptive fashion. The derived masking thresholds are used in controlling the quantization stage by adapting the quantizer reconstruction levels in order to meet the desired target perceptual distortion. The proposed coding scheme is flexible in that it can be easily extended to work with any subband-based decomposition in addition to block-based transform methods. Compared to existing perceptual coding methods, the proposed perceptual coding method exhibits superior performance in terms of bit rate and distortion control. Coding results are presented to illustrate the performance of the presented coding scheme.
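A generic form of the threshold-based control described above (illustrative notation; the paper's metric may differ in detail) is

d_{\text{block}} = \big\| \, e_i / t_i \, \big\|_p,

where e_i is the quantization error of DCT coefficient i, t_i the locally derived visual masking threshold, and \|\cdot\|_p a pooling norm over the block. The quantizer reconstruction levels are adapted so that d_block stays at or below the target perceptual distortion; errors equal to the masking thresholds correspond to d_block = 1, i.e., just-noticeable distortion.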
19.
Hans Georg Musmann 《Signal Processing: Image Communication》1995,7(4-6):267-278
Known coding techniques for transmitting moving images at very low bit rates are explained by the source models on which these coding techniques are based. It is shown that with motion-compensated hybrid coding, object-based analysis-synthesis coding, knowledge-based coding and semantic coding, there is a consistent development of source models. Consequently, these coding techniques can be combined in a layered coding system. From experimental results obtained for object-based analysis-synthesis coding, estimates for the coding efficiency of such a layered coding system are derived using head-and-shoulders videotelephone test sequences. It is shown that an additional compression factor of about 3 can be expected with such a complex layered coding system when compared to block-based hybrid coding.
20.
A wavelet-domain fractal image coding algorithm based on smooth biorthogonal wavelets and an adaptive partitioning algorithm is proposed. Under a discrete finite variance (DFV) optimality criterion, a new smooth biorthogonal wavelet suitable for image coding is obtained, which alleviates blocking artifacts. For fractal coding in the wavelet domain, an adaptive partitioning algorithm based on the distribution of image information is proposed. Experiments show that, at the same compression ratio, both the subjective visual quality and the peak signal-to-noise ratio of the decoded images are clearly better than those of the SQS method, the basic fractal image coding method, and the SPIHT method.