Similar Documents
Found 20 similar documents (search time: 437 ms)
1.
Lossless compression of hyperspectral images using two-step bidirectional lookup-table prediction   Cited by: 1 (self-citations: 1, other citations: 0)
A lossless compression algorithm for hyperspectral images based on two-step bidirectional lookup-table prediction is proposed. It effectively combines intraband and interband prediction to remove the strong interband correlation of hyperspectral images. Exploiting the characteristics of hyperspectral data, the first band of each spectral line is coded with intraband prediction using the JPEG-LS median predictor, while all other bands use interband prediction. Interband prediction follows a two-step bidirectional scheme: in the first step, a bidirectional fourth-order predictor produces a reference prediction value; in the second step, an 8-level lookup-table (LUT) search produces eight LUT prediction values. The reference value is then compared against these to obtain the final prediction, and the prediction residuals are entropy coded. Experimental results show that the proposed algorithm achieves an average rate of 3.05 bpp (bits per pixel), an improvement of 0.14-2.91 bpp over traditional lossless hyperspectral compression algorithms, effectively mitigating the low compression ratios typical of lossless hyperspectral image coding.
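The JPEG-LS median predictor (MED) used for the first band is a standard component and can be sketched as below; the paper's two-step bidirectional LUT stage is specific to that work and is not reproduced here.

```python
def med_predict(west, north, northwest):
    """JPEG-LS median (MED) predictor: picks the west or north neighbour
    near a vertical/horizontal edge, otherwise a planar estimate."""
    if northwest >= max(west, north):
        return min(west, north)
    if northwest <= min(west, north):
        return max(west, north)
    return west + north - northwest
```

Equivalently, MED returns the median of west, north, and west + north - northwest, which is why it adapts well at step edges.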

2.
Edge-directed prediction for lossless compression of natural images   Cited by: 13 (self-citations: 0, other citations: 13)
This paper sheds light on the least-squares (LS)-based adaptive prediction schemes for lossless compression of natural images. Our analysis shows that the superiority of the LS-based adaptation is due to its edge-directed property, which enables the predictor to adapt reasonably well from smooth regions to edge areas. Recognizing that LS-based adaptation improves the prediction mainly around the edge areas, we propose a novel approach to reduce its computational complexity with negligible performance sacrifice. The lossless image coder built upon the new prediction scheme has achieved noticeably better performance than the state-of-the-art coder CALIC with moderately increased computational complexity.
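The edge-directed behaviour of LS-based adaptation comes from refitting the predictor coefficients to a causal training window around each pixel. A minimal Python sketch, assuming (illustratively) a two-tap west/north predictor, a small ridge term for numerical stability, and r, c >= 1; the paper's actual predictor order and window are not specified here:

```python
def ls_predictor(img, r, c, win=3, lam=1e-4):
    """Predict img[r][c] as a0*west + a1*north, with (a0, a1) fitted by
    ridge-regularised least squares over a causal training window.
    Assumes r >= 1 and c >= 1 so both neighbours exist."""
    sxx = [[lam, 0.0], [0.0, lam]]   # normal matrix X^T X + lam*I
    sxy = [0.0, 0.0]                 # X^T y
    width = len(img[0])
    for i in range(max(1, r - win), r + 1):
        for j in range(max(1, c - win), min(width, c + win + 1)):
            if i == r and j >= c:    # use strictly causal samples only
                break
            w, n = img[i][j - 1], img[i - 1][j]   # west, north features
            y = img[i][j]
            sxx[0][0] += w * w; sxx[0][1] += w * n
            sxx[1][0] += n * w; sxx[1][1] += n * n
            sxy[0] += w * y;    sxy[1] += n * y
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    a0 = (sxy[0] * sxx[1][1] - sxy[1] * sxx[0][1]) / det
    a1 = (sxy[1] * sxx[0][0] - sxy[0] * sxx[1][0]) / det
    return a0 * img[r][c - 1] + a1 * img[r - 1][c]
```

Solving these small normal equations at every pixel is exactly the per-pixel cost the paper seeks to avoid away from edges.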

3.
The goal of lossless image compression is to represent an image with as few bits as possible without discarding any information, which is important for reducing data storage requirements. The lossless laser-speckle image encoder designed here consists of laser-speckle displacement estimation, pixel prediction, and Golomb coding. First, the speckle displacement is estimated; next, a prediction model is designed from the dynamic laser-speckle correlation function and used for pixel prediction; finally, the pre...
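Golomb coding of prediction residuals is a standard final stage. A minimal sketch, restricted for illustration to the Rice case (Golomb parameter m = 2**k) with the usual zigzag mapping of signed residuals to non-negative integers; the paper's parameter selection is not reproduced:

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Rice code (Golomb code with m = 2**k): unary quotient
    followed by a k-bit binary remainder."""
    q = n >> k
    bits = '1' * q + '0'
    if k:
        bits += format(n & ((1 << k) - 1), '0{}b'.format(k))
    return bits
```

Small residuals map to short codewords, so the code is efficient when the predictor leaves geometrically distributed errors.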

4.
In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression.

5.
This paper presents a study of lossless image compression of fullband and subband images using predictive coding. The performance of a number of different fixed and adaptive predictors is evaluated to establish the relative performance of different predictors at various resolutions and to give an indication of the achievable image resolution for given bit rates. In particular, the median adaptive predictor is compared with two new classes of predictors proposed in this paper. One is based on the weighted median filter, while the other uses context modelling to select the optimum from a set of predictors. A graphical tool is also proposed to analyse the prediction methods. Simulations of the different predictors for a variety of real world and medical images, evaluated both numerically and graphically, show the superiority of median based prediction over this proposed implementation of context model based prediction, for all resolutions. The effects of different subband decomposition techniques are also explored.

6.
Wavelet-based lossless compression of coronary angiographic images   Cited by: 6 (self-citations: 0, other citations: 6)
The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in the lossless mode. The quantization module implements a new way of coding of the wavelet coefficients that is more effective than the classical zerotree coding. The experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder, combined with the integer wavelet transform, by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communications systems (PACS).

7.
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which combines an integer wavelet transform with the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule strategy is applied to the high- and low-frequency sub-bands: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of the approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it can adapt to spaceborne lossless compression applications.

8.
Context-based, adaptive, lossless image coding   Cited by: 3 (self-citations: 0, other citations: 3)
We propose a context-based, adaptive, lossless image codec (CALIC). The codec obtains higher lossless compression of continuous-tone images than other lossless image coding techniques in the literature. This high coding efficiency is accomplished with relatively low time and space complexities. The CALIC puts heavy emphasis on image data modeling. A unique feature of the CALIC is the use of a large number of modeling contexts (states) to condition a nonlinear predictor and adapt the predictor to varying source statistics. The nonlinear predictor can correct itself via an error feedback mechanism by learning from its mistakes under a given context in the past. In this learning process, the CALIC estimates only the expectation of prediction errors conditioned on a large number of different contexts rather than estimating a large number of conditional error probabilities. The former estimation technique can afford a large number of modeling contexts without suffering from the context dilution problem of insufficient counting statistics as in the latter approach, nor from excessive memory use. The low time and space complexities are also attributed to efficient techniques for forming and quantizing modeling contexts.
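The error-feedback mechanism described above, learning the conditional mean of past prediction errors per context and adding it back to the raw prediction, can be sketched as follows. This is a simplified illustration of the bias-cancellation idea; CALIC's actual context formation and quantization are not reproduced:

```python
class ContextBiasCorrector:
    """Per-context feedback of the running mean prediction error,
    the bias-cancellation idea used in context-based coders."""
    def __init__(self, n_contexts):
        self.err_sum = [0.0] * n_contexts
        self.count = [0] * n_contexts

    def correct(self, ctx, raw_pred):
        # Add the learned conditional mean error for this context.
        if self.count[ctx] == 0:
            return raw_pred
        return raw_pred + self.err_sum[ctx] / self.count[ctx]

    def update(self, ctx, actual, raw_pred):
        # Learn from the mistake made under this context.
        self.err_sum[ctx] += actual - raw_pred
        self.count[ctx] += 1
```

Note that only one mean per context is stored, which is why many contexts can be afforded without context dilution.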

9.
At low bit rates, better coding quality can be achieved by downsampling the image prior to compression and estimating the missing portion after decompression. This paper presents a new algorithm in such a paradigm, based on the adaptive decision of appropriate downsampling directions/ratios and quantization steps, in order to achieve higher coding quality at low bit rates while accounting for local visual significance. The full-resolution image can be restored from the DCT coefficients of the downsampled pixels, so the spatial interpolation required otherwise is avoided. The proposed algorithm significantly raises the critical bit rate to approximately 1.2 bpp, from 0.15-0.41 bpp in the existing downsample-prior-to-JPEG schemes and, therefore, outperforms the standard JPEG method in a much wider bit-rate scope. The experiments have demonstrated better PSNR improvement over the existing techniques before the critical bit rate. In addition, the adaptive mode decision not only makes the critical bit rate less image-dependent, but also automates coder switching in variable bit-rate applications, since the algorithm falls back to the standard JPEG method whenever necessary at higher bit rates.

10.
A new model regarding the difference values of a DPCM coder as a composite source is presented. It can provide a decrease in the entropy of prediction errors and is especially useful in lossless coding and distortion limited compression. Application to image coding demonstrates its effectiveness.
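Why DPCM differences are worth modeling at all can be illustrated with a first-order DPCM and an empirical entropy measure: on smooth data the residual alphabet collapses and the zeroth-order entropy drops. This is a generic sketch, not the paper's composite-source model:

```python
import math
from collections import Counter

def dpcm_residuals(samples):
    """First-order DPCM: each sample predicted by the previous one."""
    return [samples[0]] + [samples[i] - samples[i - 1]
                           for i in range(1, len(samples))]

def entropy(values):
    """Empirical zeroth-order entropy in bits per symbol."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

On a ramp signal the residuals are almost all equal, so their entropy is far below that of the raw samples.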

11.
Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.

12.
We study high-fidelity image compression with a given tight L(infinity) bound. We propose some practical adaptive context modeling techniques to correct prediction biases caused by quantizing prediction residues, a problem common to the existing DPCM-type predictive near-lossless image coders. By incorporating the proposed techniques into the near-lossless version of CALIC, which is considered by many to be the state-of-the-art algorithm, we were able to increase its PSNR by 1 dB or more and/or reduce its bit rate by 10% or more. More encouragingly, at bit rates around 1.25 bpp or higher, our method obtained competitive PSNR results against the best L(2)-based wavelet coders, while obtaining a much smaller L(infinity) bound.
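The residual quantization that such DPCM-type near-lossless coders build on can be sketched as a uniform quantizer with a guaranteed L-infinity bound delta; every reconstructed pixel differs from the original by at most delta. A minimal sketch of this standard scheme (the paper's bias-correction contexts are not reproduced):

```python
def quantize_near_lossless(e, delta):
    """Quantise a prediction residual e so that the reconstruction
    error is guaranteed to satisfy |e - reconstruct(q)| <= delta."""
    s = 1 if e >= 0 else -1
    return s * ((abs(e) + delta) // (2 * delta + 1))

def reconstruct(q, delta):
    """Dequantise: each bin has width 2*delta + 1."""
    return q * (2 * delta + 1)
```

Because the decoder predicts from reconstructed (not original) pixels, the quantization perturbs later predictions, which is the bias source the paper's context modeling corrects.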

13.
In predictive image coding, the least-squares (LS)-based adaptive predictor is noted as an efficient method to improve prediction results around edges. However, pixel-by-pixel optimization of the predictor coefficients leads to high coding complexity. To reduce computational complexity, we activate the LS optimization process only when the coding pixel is around an edge or when the prediction error is large. We propose a simple yet effective edge detector using only causal pixels. The system can look ahead to determine whether the coding pixel is around an edge and initiate the LS adaptation to prevent the occurrence of a large prediction error. Our experiments show that the proposed approach achieves a noticeable reduction in complexity with only minor degradation in the prediction results.
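A causal edge test of the kind described, using only already-decoded neighbours so that encoder and decoder reach the same decision without side information, might look like the following. The neighbour set and threshold are illustrative assumptions, not the paper's detector:

```python
def near_edge(img, r, c, thresh=8):
    """Flag img[r][c] as edge-adjacent from causal neighbours only:
    a large gradient among west/north/northwest suggests an edge."""
    w, n, nw = img[r][c - 1], img[r - 1][c], img[r - 1][c - 1]
    return max(abs(w - nw), abs(n - nw), abs(w - n)) > thresh
```

The costly LS refit would then run only when this test fires (or when the previous prediction error was large).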

14.
Context-based lossless interband compression-extending CALIC   Cited by: 14 (self-citations: 0, other citations: 14)
This paper proposes an interband version of CALIC (context-based, adaptive, lossless image codec) which represents one of the best performing, practical and general purpose lossless image coding techniques known today. Interband coding techniques are needed for effective compression of multispectral images like color images and remotely sensed images. It is demonstrated that CALIC's techniques of context modeling of DPCM errors lend themselves easily to modeling of higher-order interband correlations that cannot be exploited by simple interband linear predictors alone. The proposed interband CALIC exploits both interband and intraband statistical redundancies, and obtains significant compression gains over its intraband counterpart. On some types of multispectral images, interband CALIC can lead to a reduction in bit rate of more than 20% as compared to intraband CALIC. Interband CALIC only incurs a modest increase in computational cost as compared to intraband CALIC.

15.
A new predictive coder, based on an estimation method which adapts to line and edge features in images, is described. Quantization of the prediction error is performed by a two-level adaptive scheme: an adaptive transform coder, and threshold coding in both transform and spatial domains. Control information, which determines the behavior of the predictor, is quantized using a simple variable rate technique. The results are improved by pre- and postfiltering using a related noncausal form of the estimator. Acceptable images have been produced in this way at bit rates of less than 0.5 bit/pixel.

16.
In this paper, we propose an improved DC prediction method for high-efficiency video coding (HEVC) intra coding. The technique applies pixel-wise predictors rather than a block-based predictor in the DC mode. In HEVC, the pixels of neighboring reconstructed blocks are used to support multiple directional spatial prediction modes and reduce spatial redundancy. For lossless coding, pixel-by-pixel differential pulse code modulation (DPCM) is applied to the directional predictions; consequently, residuals are reduced and coding efficiency is improved. Since DC prediction still employs block-based coding, pixel-wise DC prediction is needed for better coding efficiency. When compared to HEVC lossless intra coding, the proposed algorithm reduces the bit rate by 5.95%. Furthermore, combined with pixel-by-pixel DPCM, the average bit-rate reduction is 10.64% compared to HEVC lossless intra coding.
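The gap between block-based DC prediction and a pixel-wise variant can be illustrated as follows. This is a simplified sketch under stated assumptions (a square block with known left-column and top-row neighbours, lossless coding so reconstructed pixels equal originals); HEVC's actual reference-sample handling and filtering are omitted:

```python
def block_dc_residuals(block, left_col, top_row):
    """Block-based DC mode: every pixel predicted by one DC value,
    the rounded mean of the left and top reference samples."""
    dc = round((sum(left_col) + sum(top_row)) /
               (len(left_col) + len(top_row)))
    return [[p - dc for p in row] for row in block]

def pixelwise_dc_residuals(block, left_col, top_row):
    """Pixel-wise variant: each pixel predicted by the rounded average
    of its reconstructed left and top neighbours."""
    h, w = len(block), len(block[0])
    res = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            left = block[i][j - 1] if j else left_col[i]
            top = block[i - 1][j] if i else top_row[j]
            res[i][j] = block[i][j] - (left + top + 1) // 2
    return res
```

On a gradient block the pixel-wise residuals stay small and uniform while the block-DC residuals grow with distance from the references, which is the intuition behind the reported bit-rate reduction.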

17.
Nonexpansive pyramid for image coding using a nonlinear filterbank   Cited by: 1 (self-citations: 0, other citations: 1)
A nonexpansive pyramidal decomposition is proposed for low-complexity image coding. The image is decomposed through a nonlinear filterbank into low- and highpass signals and the recursion of the filterbank over the lowpass signal generates a pyramid resembling that of the octave wavelet transform. The structure itself guarantees perfect reconstruction and we have chosen nonlinear filters for performance reasons. The transformed samples are grouped into square blocks and used to replace the discrete cosine transform (DCT) in the Joint Photographic Expert Group (JPEG) coder. The proposed coder has some advantages over the DCT-based JPEG: computation is greatly reduced, image edges are better encoded, blocking is eliminated, and it allows lossless coding.

18.
In this paper, a new wavelet transform image coding algorithm is presented. The discrete wavelet transform (DWT) is applied to the original image. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer. The quantized coefficients are then decomposed into four symbol streams: a binary significance-map symbol stream, a binary sign stream, a position-of-the-most-significant-bit (PMSB) symbol stream, and a residual bit stream. An adaptive arithmetic coder with different context models is employed for the entropy coding of these symbol streams. Experimental results show that the compression performance of the proposed coding algorithm is competitive with other wavelet-based image coding algorithms reported in the literature.
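The uniform scalar dead-zone quantizer in the first stage, whose zero bin is twice as wide as the other bins, can be sketched as below. The reconstruction offset is an illustrative choice, not the paper's:

```python
def deadzone_quantize(x, step):
    """Uniform scalar quantiser with a double-width zero bin:
    everything in (-step, step) maps to 0."""
    s = 1 if x >= 0 else -1
    return s * int(abs(x) // step)

def deadzone_dequantize(q, step, recon_offset=0.5):
    """Reconstruct at a point inside the bin (midpoint by default)."""
    if q == 0:
        return 0.0
    s = 1 if q > 0 else -1
    return s * (abs(q) + recon_offset) * step
```

The wide zero bin zeroes out the many near-zero wavelet coefficients, which is what makes the subsequent significance-map stream cheap to code.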

19.
Low resolution region discriminator for wavelet coding   Cited by: 1 (self-citations: 0, other citations: 1)
Syed Y.F., Rao K.R. Electronics Letters, 2001, 37(12): 748-749
A wavelet block chain (WBC) method is used in the initial coding of the low-low subband created by a wavelet transform to separate and label homogeneous regions of the image, requiring no additional overhead in the bitstream. This information is then used to enhance the coding performance in a modified wavelet-based coder. The method uses a two-stage ZTE/SPIHT entropy coder (called a homogeneous connected-region interested ordered transmission coder) to create a bitstream with properties of progressive transmission, scalability, and perceptual optimisation after a minimum bit rate is reached. Simulation results show good scalable low-bit-rate (0.04-0.4 bpp) compression, comparable to a SPIHT coder, but with better perceptual quality due to the region-based information acquired by the WBC method.

20.
This paper presents a wavelet-based image coder that is optimized for transmission over the binary symmetric channel (BSC). The proposed coder uses a robust channel-optimized trellis-coded quantization (COTCQ) stage that is designed to optimize the image coding based on the channel characteristics. A phase scrambling stage is also used to further increase the coding performance and robustness to nonstationary signals and channels. The resilience to channel errors is obtained by optimizing the coder performance only at the level of the source encoder, with no explicit channel coding for error protection. For the considered TCQ trellis structure, a general expression is derived for the transition probability matrix in terms of the TCQ encoding rate and the channel bit error rate, and is used to design the COTCQ stage of the image coder. The robust nature of the coder also increases the security level of the encoded bit stream and provides a much more visually pleasing rendition of the decoded image. Examples are presented to illustrate the performance of the proposed robust image coder.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号