Similar Literature
20 similar documents found.
1.
Wavelet-based lossless compression of coronary angiographic images   (Cited: 6, self-citations: 0, other citations: 6)
The final diagnosis in coronary angiography has to be performed on a large set of original images. Therefore, lossless compression schemes play a key role in medical database management and telediagnosis applications. This paper proposes a wavelet-based compression scheme that is able to operate in lossless mode. The quantization module implements a new way of coding the wavelet coefficients that is more effective than classical zerotree coding. Experimental results obtained on a set of 20 angiograms show that the algorithm outperforms the embedded zerotree coder combined with the integer wavelet transform by 0.38 bpp, the set partitioning coder by 0.21 bpp, and the lossless JPEG coder by 0.71 bpp. The scheme is a good candidate for radiological applications such as teleradiology and picture archiving and communication systems (PACS).
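The abstract does not detail the integer wavelet transform that makes lossless operation possible. As a hedged illustration (the paper's actual filter bank is not specified here), the sketch below shows one level of the reversible LeGall 5/3 lifting transform, a standard integer-to-integer wavelet; because every step uses floor division, the inverse reconstructs the input exactly.

```python
import numpy as np

def legall53_forward(x):
    """One level of the reversible LeGall 5/3 lifting transform (1-D).
    Integer-to-integer, so the inverse is exact -- the property a
    lossless wavelet coder relies on. Assumes len(x) is even;
    boundaries use simple symmetric extension."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])
    d = odd - ((even + right) >> 1)
    # Update step: approx = even + floor((d_left + d_right + 2) / 4)
    left_d = np.insert(d[:-1], 0, d[0])
    s = even + ((left_d + d + 2) >> 2)
    return s, d

def legall53_inverse(s, d):
    """Exact inverse: undo the update step, then the predict step."""
    left_d = np.insert(d[:-1], 0, d[0])
    even = s - ((left_d + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Applying the transform to rows and then columns gives a 2-D decomposition of the kind whose coefficients a quantization/coding module such as the paper's would process.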

2.
Differentiation applied to lossless compression of medical images   (Cited: 1, self-citations: 0, other citations: 1)
Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach, for the lossless compression of magnetic resonance (MR) images and ultrasonic images, is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, with 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known by the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive in real-time processing for compressing and decompressing medical images.
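As a rough sketch of the weighted-difference idea (the abstract does not give the paper's actual weights, so the coefficients below are illustrative), the key point is that fixed weights are known to both encoder and decoder, so only the residuals need to be transmitted:

```python
import numpy as np

def weighted_difference_residuals(img, w_left=0.5, w_up=0.5):
    """Prediction residuals from a fixed weighted difference of the
    left and upper neighbors. The weights are constants shared by
    encoder and decoder, so no coefficients need to be computed or
    transmitted -- the advantage the abstract cites over an adaptive
    linear predictor. Residuals then go to an arithmetic coder."""
    img = img.astype(np.int64)
    pred = np.zeros_like(img)
    pred[1:, 1:] = np.rint(w_left * img[1:, :-1] +
                           w_up * img[:-1, 1:]).astype(np.int64)
    pred[0, 1:] = img[0, :-1]   # first row: left neighbor only
    pred[1:, 0] = img[:-1, 0]   # first column: upper neighbor only
    return img - pred           # pixel (0, 0) is sent verbatim
```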

3.
The lossless compression of AVIRIS images by vector quantization   (Cited: 15, self-citations: 0, other citations: 15)
The structure of hyperspectral images reveals spectral responses that would seem ideal candidates for compression by vector quantization. This paper outlines the results of an investigation of lossless vector quantization of 224-band Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) images. Various vector formation techniques are identified and suitable quantization parameters are investigated. A new technique, mean-normalized vector quantization (M-NVQ), is proposed which produces compression performances approaching the theoretical minimum compressed image entropy of 5 bits/pixel. Images are compressed from original image entropies of between 8.28 and 10.89 bits/pixel to between 4.83 and 5.90 bits/pixel.
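The abstract leaves the exact normalization open (it could be subtractive or divisive); the sketch below assumes a subtractive mean removal and a given codebook, and shows the pieces a lossless M-NVQ-style coder has to transmit: the codeword index, the per-vector mean, and the integer residual.

```python
import numpy as np

def mnvq_encode(vectors, codebook):
    """Mean-normalized VQ, sketched: remove each spectral vector's
    mean so the codebook captures only spectral shape, pick the
    nearest codeword, and keep the integer residual so the decoder
    can reconstruct losslessly. `codebook` is assumed to be trained
    offline (e.g., by a LBG-style algorithm); its design is not
    shown here."""
    means = vectors.mean(axis=1, keepdims=True)
    shape = vectors - means
    # Nearest codeword per vector (squared Euclidean distance).
    d2 = ((shape[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    residual = np.rint(shape - codebook[idx]).astype(np.int64)
    return idx, means, residual   # all three streams are entropy coded
```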

4.
Multidimensional Systems and Signal Processing - Filtering-based compression methods have become a popular research topic in lossless compression of hyperspectral images. Recursive least squares...

5.
Several efficient context-based arithmetic coding algorithms have recently been developed for lossless compression of error-diffused images. In this paper, we first present a novel block- and texture-based approach that trains multiple templates according to the most representative texture features. Based on the trained templates, we then present an efficient texture- and multiple-template-based (TM-based) algorithm for lossless compression of error-diffused images: the input image is divided into blocks, and for each block the best template is adaptively selected from the trained set according to that block's texture feature. On 20 test error-diffused images and a personal computer with an Intel Celeron 2.8-GHz CPU, experimental results demonstrate that, at the cost of a small increase in encoding time (0.365 s on average versus JBIG, 0.901 s versus BACIC), the proposed TM-based algorithm improves compression by 24% over the Joint Bilevel Image Experts Group (JBIG) standard and by 19.4% over the previous block arithmetic coding for image compression (BACIC) algorithm of Reavy and Boncelet. Under the same conditions, the proposed algorithm improves compression by 17.6% over the previous algorithm by Lee and Park, again with only a small encoding-time penalty (0.775 s on average). In addition, the previous free-tree-based algorithm requires 109.131 s of encoding time on average whereas the proposed algorithm takes 0.995 s, and the average compression ratio of the proposed TM-based algorithm, 1.60, is competitive with that of the free-tree-based algorithm, 1.62.

6.
A lossless compression scheme for Bayer color filter array images   (Cited: 1, self-citations: 0, other citations: 1)
In most digital cameras, Bayer color filter array (CFA) images are captured, and demosaicing is generally carried out before compression. Recently, it was found that compression-first schemes outperform the conventional demosaicing-first schemes in terms of output image quality. An efficient prediction-based lossless compression scheme for Bayer CFA images is proposed in this paper. It exploits a context matching technique to rank the neighboring pixels when predicting a pixel, an adaptive color difference estimation scheme to remove the color spectral redundancy when handling red and blue samples, and an adaptive codeword generation technique to adjust the divisor of the Rice code used for encoding the prediction residues. Simulation results show that the proposed compression scheme achieves better compression performance than conventional lossless CFA image coding schemes.
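The Rice code used for the residues is standard; the sketch below shows plain Rice coding with a fixed parameter k (the paper's adaptive rule for choosing the divisor, and the context-matching predictor itself, are its contributions and are not reproduced here). Signed residues are first zigzag-mapped to non-negative integers.

```python
def zigzag(e):
    """Map a signed prediction residue to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value, k):
    """Rice code with divisor 2**k: unary-coded quotient, then k
    remainder bits. Returned as a bit string for clarity."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

# Example: rice_encode(zigzag(-3), k=2) -> "10" + "01" = "1001"
```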

7.
Liu Yan; Wang Ling. Information Technology (信息技术), 2004, 28(4): 46-48.
A lossless compression solution for fingerprint images is presented, starting from an analysis of the fundamentals of fingerprint data compression: the captured fingerprint data undergo DPCM linear prediction followed by Huffman coding, and after transmission over the channel, Huffman decoding and DPCM prediction decoding are performed. Within this codec, the conventional DPCM linear prediction method is improved upon: exploiting the texture characteristics of fingerprint images, an orientation map specific to the fingerprint image is constructed, and linear prediction is carried out using the orientation vector of each pixel. Experimental results show that introducing the orientation vectors strengthens the prediction and substantially increases the compression ratio. The data and images demonstrate the practicality and soundness of this method.
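A hedged sketch of the direction-driven DPCM the abstract describes (the paper's orientation-map construction and its direction quantization are not given in the abstract, so the four causal directions below are an assumed discretization):

```python
import numpy as np

def directional_dpcm_residuals(img, orientation):
    """DPCM in which each pixel is predicted from the causal neighbor
    lying along the local ridge orientation, rather than from a fixed
    neighbor. `orientation` holds a per-pixel direction index,
    quantized here to four causal directions (an assumption)."""
    img = img.astype(np.int64)
    res = img.copy()   # border pixels are left as raw values
    offsets = {0: (0, -1), 1: (-1, -1), 2: (-1, 0), 3: (-1, 1)}
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1] - 1):
            di, dj = offsets[int(orientation[i, j]) % 4]
            res[i, j] = img[i, j] - img[i + di, j + dj]
    return res   # residuals are then Huffman coded, as in the paper
```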

8.
This paper presents a study of lossless image compression of fullband and subband images using predictive coding. The performance of a number of fixed and adaptive predictors is evaluated to establish their relative merits at various resolutions and to give an indication of the achievable image resolution for given bit rates. In particular, the median adaptive predictor is compared with two new classes of predictors proposed in this paper: one based on the weighted median filter, and one that uses context modelling to select the optimum from a set of predictors. A graphical tool is also proposed for analysing the prediction methods. Simulations of the different predictors on a variety of real-world and medical images, evaluated both numerically and graphically, show the superiority of median-based prediction over the proposed implementation of context-model-based prediction at all resolutions. The effects of different subband decomposition techniques are also explored.
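For reference, the median adaptive predictor at the center of the comparison is tiny; a sketch of the standard formulation (which also serves as the fixed predictor of JPEG-LS):

```python
def median_adaptive_predict(w, n, nw):
    """Median of the left neighbor w, the upper neighbor n, and the
    planar estimate w + n - nw: follows an edge when one is present,
    otherwise behaves like a planar predictor."""
    return sorted((w, n, w + n - nw))[1]

# The residual x - median_adaptive_predict(w, n, nw) is entropy coded.
```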

9.
The gradient adjusted predictor (GAP) uses seven fixed slope quantization bins, with a predictor associated with each bin, for pixel prediction. Its slope bin boundaries are fixed without employing a criterion function. This paper presents a technique for slope classification that yields slope bins which are optimal for a given set of images. It also presents two techniques that find statistically optimal predictors for each slope bin. The slope classification and the predictors associated with the slope bins are obtained off-line. To find a representative predictor for a bin, a set of least-squares (LS) based predictors is obtained over all the pixels belonging to that bin, and the predictor from this set that yields the minimum prediction error energy is chosen to represent the bin. Alternatively, the predictor is chosen from the same set using minimum entropy as the criterion. Simulation results for the proposed method show a significant improvement in compression performance compared with GAP. The computational complexity of the proposed method, excluding the training process, is of the same order as that of GAP.
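For context, a sketch of the GAP baseline as it is usually stated for CALIC, with its seven fixed bins (thresholds 8, 32, 80) on the horizontal/vertical gradient difference; the paper's contribution is to learn both the bin boundaries and the per-bin predictors from training images instead.

```python
def gap_predict(W, N, NE, NW, NN, WW, NNE):
    """Gradient adjusted predictor: quantize the local gradient
    difference into seven fixed slope bins and blend the neighbors
    accordingly. Uppercase names are the usual causal neighbors
    (W = left, N = above, NE = above-right, and so on)."""
    dh = abs(W - WW) + abs(N - NW) + abs(N - NE)    # horizontal activity
    dv = abs(W - NW) + abs(N - NN) + abs(NE - NNE)  # vertical activity
    if dv - dh > 80:
        return W                                    # sharp horizontal edge
    if dh - dv > 80:
        return N                                    # sharp vertical edge
    pred = (W + N) / 2 + (NE - NW) / 4
    if dv - dh > 32:
        pred = (pred + W) / 2
    elif dv - dh > 8:
        pred = (3 * pred + W) / 4
    elif dh - dv > 32:
        pred = (pred + N) / 2
    elif dh - dv > 8:
        pred = (3 * pred + N) / 4
    return pred
```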

10.
The goal of lossless image compression is to represent an image with as few bits as possible while guaranteeing that no information is lost; this is important for reducing data storage requirements. The lossless coder designed here for laser speckle images consists of laser speckle displacement estimation, pixel prediction, and Golomb coding. First, the speckle displacement is estimated; then a prediction model is designed from the correlation function of dynamic laser speckle and used as the basis for pixel prediction; finally, the pre...
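The Golomb coding stage named in the abstract is standard; a sketch with a general divisor m (the displacement estimation and the correlation-based prediction model are the paper's contributions and are not reproduced):

```python
import math

def golomb_encode(n, m):
    """Golomb code for a non-negative integer n with divisor m:
    unary-coded quotient, then the remainder in truncated binary.
    Rice coding is the special case m = 2**k."""
    q, r = divmod(n, m)
    b = max(1, math.ceil(math.log2(m)))
    cutoff = (1 << b) - m          # remainders below this use b-1 bits
    bits = "1" * q + "0"
    if r < cutoff:
        bits += format(r, f"0{b - 1}b") if b > 1 else ""
    else:
        bits += format(r + cutoff, f"0{b}b")
    return bits
```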

11.
The current international standard, the Joint Bilevel Image Experts Group (JBIG) standard, is a representative bilevel image compression algorithm. It compresses bilevel images with high performance, but it shows relatively low performance in compressing error-diffused halftone images. This paper proposes a new bilevel image compression for error-diffused images, which is based on Bayes' theorem. The proposed coding procedure consists of two passes. It groups 2 × 2 dots into a cell, where each cell is represented by the number of black dots and the locations of the black dots in the cell. The number of black dots in the cell is encoded in the first pass, and their locations are encoded in the second pass. The first pass performs a near-lossless compression, which can be refined to be lossless by the second pass. Experimental results show a high compression performance for the proposed method when it is applied to error-diffused images.
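The 2 × 2 cell decomposition that feeds the two passes is easy to picture; a sketch of just that grouping (the Bayes-derived context models that drive the actual coder go beyond what the abstract states):

```python
import numpy as np

def cell_decompose(img):
    """Group a bilevel image (0/1 dots) into 2x2 cells. Pass 1 codes
    each cell's black-dot count (0..4); pass 2 codes the dot
    locations within the cell. Odd borders are cropped for brevity."""
    h, w = img.shape[0] & ~1, img.shape[1] & ~1
    cells = img[:h, :w].reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    counts = cells.sum(axis=(2, 3))             # first pass (near-lossless)
    layouts = cells.reshape(h // 2, w // 2, 4)  # second pass (refinement)
    return counts, layouts
```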

12.
A two-stage method for compressing bilevel images is described that is particularly effective for images containing repeated subimages, notably text. In the first stage, connected groups of pixels, corresponding approximately to individual characters, are extracted from the image. These are matched against an adaptively constructed library of patterns seen so far, and the resulting sequence of symbol identification numbers is coded and transmitted. From this information, along with the library itself and the offset from one mark to the next, an approximate image can be reconstructed. The result is a lossy method of compression that outperforms other schemes. The second stage employs the reconstructed image as an aid for encoding the original image using a statistical context-based compression technique. This yields a total bandwidth for exact transmission appreciably undercutting that required by other lossless binary image compression methods. Taken together, the lossy and lossless methods provide an effective two-stage progressive transmission capability for textual images, with applications for legal, medical, and historical purposes, and to archiving in general.
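A sketch of the stage-one mark extraction (connected groups of black pixels, roughly characters); scipy's labelling is used here purely for convenience, and the adaptive pattern library, the matching, and the context-based second stage are not shown:

```python
import numpy as np
from scipy import ndimage

def extract_marks(bilevel):
    """Return the pixel coordinates of each connected black component
    (8-connectivity) -- the 'marks' that are matched against the
    adaptively built pattern library."""
    labels, count = ndimage.label(bilevel, structure=np.ones((3, 3), int))
    return [np.argwhere(labels == i) for i in range(1, count + 1)]
```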

13.
We propose and evaluate a number of novel improvements to the mesh-based coding scheme for 3-D brain magnetic resonance images. These include: 1) elimination of the clinically irrelevant background, so that only the brain part of the image is meshed; 2) content-based (adaptive) mesh generation using spatial edges and the optical flow between two consecutive slices; 3) a simple solution for the aperture problem at edges, where an accurate estimation of motion vectors is not possible; and 4) context-based entropy coding of the residues after motion compensation using affine transformations. We address only lossless coding of the images and compare the performance of uniform and adaptive mesh-based schemes. The bit rates achieved by these schemes (about 2 bits per voxel) are comparable to those of state-of-the-art three-dimensional (3-D) wavelet-based schemes. The mesh-based schemes are also shown to be effective for the compression of 3-D brain computed tomography data. Adaptive mesh-based schemes perform marginally better than the uniform mesh-based methods, at the expense of increased complexity.

14.
We propose a new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not particularly efficient on smooth images, we also introduce new predictors that are dynamically optimized using a least-squares criterion. Lossless compression results obtained on a large-scale medical image database are more than 4% better on CT images and 9% better on MR images than resolution-scalable JPEG 2000 (J2K), and close to nonscalable CALIC. The HOP algorithm is also well suited to NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS, and an equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.

15.
The authors describe two chips which form the basis of a high-speed lossless image compression/decompression system. They present the transform and coding algorithms and the main architectural features of the chips, and outline some performance specifications. Lossless compression is achieved by a transformation process followed by entropy coding: the two application-specific integrated circuits (ASICs) perform S-transform image decomposition and Lempel-Ziv (L-Z) entropy coding. The S-transform, besides decorrelating the image, provides a convenient method of hierarchical image decomposition; the data compressor/decompressor IC is a fast and efficient implementation of the L-Z algorithm. The chips can be used independently or together for image compression.
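The S-transform the first ASIC implements is a two-point integer average/difference pair; a sketch (applying it to rows and then columns, and recursing on the averages, yields the hierarchical decomposition the abstract mentions):

```python
def s_transform(a, b):
    """Reversible S-transform of a pixel pair: integer mean and
    difference. The floor in the mean loses no information because
    the parity of the difference recovers it."""
    return (a + b) >> 1, a - b

def s_inverse(s, d):
    """Exact inverse of the S-transform pair."""
    a = s + ((d + 1) >> 1)
    return a, a - d
```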

16.
Based on the properties of binary arithmetic operations and the spatial probability distribution of the prediction error, a set of transformations is proposed for lossless image compression. Bit-plane coding is then applied to the transformed image. The transformations are easy to implement, and experimental results show that they are useful.

17.
In this paper, the multilevel pattern matching (MPM) code for compression of one-dimensional (1D) sequences is first generalized to compress two-dimensional (2D) images, resulting in a 2D multilevel pattern matching (MPM) code. It is shown that among all images of n pixels, the worst-case redundancy of the 2D MPM code against any finite-template-based arithmetic code is O(1/√log n). This result contrasts unfavorably with the fact that among all 1D sequences of length n, the MPM code has a worst-case redundancy of O(1/log n) against any finite-state arithmetic code; the gap is caused by the so-called 2D boundary effect. To alleviate the 2D boundary effect, we extend the 2D MPM code to the case of context modeling, yielding a context-dependent 2D MPM code. It is shown that among all images of n pixels, the context-dependent 2D MPM code has an O(1/log n) worst-case redundancy against any finite-template-based arithmetic code satisfying a mild condition; this redundancy is better than that of the 2D MPM code without context models. Experimental results demonstrate that the context-dependent 2D MPM code significantly outperforms the 2D MPM code without context models for bi-level images. It is also demonstrated that, in terms of compression rates, the context-dependent 2D MPM code performs significantly better than the progressive coding mode of JBIG1 for both textual and bi-level images, and better than or comparably to the sequential coding mode of JBIG1 and JBIG2. In addition to its excellent compression performance, the context-dependent 2D MPM code allows progressive transmission of images.

18.
Keating, S.; Pelly, J. Electronics Letters, 2000, 36(25): 2070-2072.
New algorithms for lossless compression of general data are presented. They are based on adaptive lossless data compression (ALDC) but offer improved compression, typically 24% better for image data. The algorithms are simple to implement and are capable of high data throughput, whilst maintaining compatibility with legacy ALDC bit streams.

19.
An efficient preprocessing technique that arranges an electroencephalogram (EEG) signal in matrix form is proposed for real-time lossless EEG compression. The compression algorithm consists of an integer lifting wavelet transform as the decorrelator, with set partitioning in hierarchical trees (SPIHT) as the source coder. Experimental results show that the preprocessed EEG signal gives improved rate-distortion performance, especially at low bit rates, and less encoding delay compared with the conventional one-dimensional compression scheme.
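The preprocessing itself is a fold of the 1-D record into a matrix; a minimal sketch (the row width is an assumption here, since the abstract does not state the paper's choice), after which a 2-D integer lifting wavelet and SPIHT are applied:

```python
import numpy as np

def eeg_to_matrix(samples, width=64):
    """Arrange a 1-D EEG record into a 2-D matrix so that a 2-D
    integer wavelet transform can exploit the correlation between
    successive rows. `width` is an illustrative choice; trailing
    samples that do not fill a row are handled separately."""
    n = (len(samples) // width) * width
    return np.asarray(samples[:n]).reshape(-1, width)
```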

20.
Analysis of commonly used lossless data compression algorithms   (Cited: 2, self-citations: 0, other citations: 2)
Data compression is widely used in data acquisition and data transmission systems, and is usually divided into lossless and lossy compression. Drawing on the principles of commonly used lossless data compression algorithms, this paper gives implementation flowcharts, discusses the strengths and weaknesses of each algorithm in detail, and finally analyzes the issues that require attention when implementing and optimizing these algorithms.
