Similar Documents
20 similar documents were found.
1.
The current international standard from the Joint Bi-level Image Experts Group (JBIG) is a representative bilevel image compression algorithm. It compresses bilevel images with high performance, but it performs relatively poorly on error-diffused halftone images. This paper proposes a new bilevel image compression method for error-diffused images, based on Bayes' theorem. The proposed coding procedure consists of two passes. It groups 2 × 2 dots into a cell, where each cell is represented by the number of black dots and the locations of those black dots in the cell. The number of black dots in the cell is encoded in the first pass, and their locations are encoded in the second pass. The first pass performs a near-lossless compression, which can be refined to lossless by the second pass. Experimental results show high compression performance for the proposed method when it is applied to error-diffused images.
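To make the cell representation concrete, the following is a minimal Python sketch (not the authors' coder) of grouping a bilevel image into 2 × 2 cells and extracting, for each cell, the black-dot count coded in the first pass and the in-cell dot positions coded in the second pass; the function name and the 4-bit position index are illustrative assumptions, and no entropy coding is included.

```python
import numpy as np

def cell_decompose(bitmap):
    """Split a 0/1 bilevel image into 2x2 cells and return, per cell,
    the number of black dots (pass 1) and a 4-bit index of their
    positions inside the cell (pass 2).  Illustrative sketch only;
    the paper's two-pass entropy coding is not reproduced."""
    h, w = bitmap.shape
    counts = np.zeros((h // 2, w // 2), dtype=np.uint8)
    patterns = np.zeros((h // 2, w // 2), dtype=np.uint8)
    for i in range(0, h - 1, 2):
        for j in range(0, w - 1, 2):
            cell = bitmap[i:i + 2, j:j + 2].astype(np.uint8)
            counts[i // 2, j // 2] = cell.sum()      # pass 1: number of black dots
            # pass 2: which of the 4 positions are black, packed into 4 bits
            patterns[i // 2, j // 2] = (cell[0, 0] | (cell[0, 1] << 1) |
                                        (cell[1, 0] << 2) | (cell[1, 1] << 3))
    return counts, patterns
```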

2.
The authors consider 2-D predictive vector quantization (PVQ) of images subject to an entropy constraint and demonstrate the substantial performance improvements over existing unconstrained approaches. They describe a simple adaptive buffer-instrumented implementation of this 2-D entropy-coded PVQ scheme which can accommodate the associated variable-length entropy coding while completely eliminating buffer overflow/underflow problems at the expense of only a slight degradation in performance. This scheme, called 2-D PVQ/AECQ (adaptive entropy-coded quantization), is shown to result in excellent rate-distortion performance and impressive quality reconstructions of real-world images. Indeed, the real-world coding results shown demonstrate little distortion at rates as low as 0.5 b/pixel.

3.
A near-lossless image compression scheme is presented. It is essentially a differential pulse code modulation (DPCM) system with a mechanism incorporated to minimize the entropy of the quantized prediction error sequence. With a "near-lossless" criterion of no more than a d gray-level error for each pixel, where d is a small nonnegative integer, trellises describing all allowable quantized prediction error sequences are constructed. A set of "contexts" is defined for the conditioning prediction error model and an algorithm that produces minimum entropy conditioned on the contexts is presented. Finally, experimental results are given.
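As a rough illustration of the near-lossless criterion, the sketch below implements the generic DPCM quantization rule that bounds the per-pixel error by d (step size 2d + 1); the previous-pixel predictor and function name are assumptions, and the paper's context modeling and trellis construction are not reproduced.

```python
import numpy as np

def near_lossless_dpcm(row, d):
    """Near-lossless DPCM along one image row with a +/- d error bound.
    The prediction error e is mapped to q = round(e / (2d+1)), which
    guarantees |reconstruction - original| <= d.  Sketch only."""
    step = 2 * d + 1
    prev = 0                             # simple previous-pixel predictor (assumption)
    labels, recon = [], []
    for x in row:
        e = int(x) - prev
        q = int(np.round(e / step))      # quantizer label to be entropy coded
        r = prev + q * step              # reconstruction seen by the decoder
        labels.append(q)
        recon.append(r)
        prev = r                         # predict from reconstructed values
    return labels, recon
```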

4.
Reversible intraframe compression of medical images
The performance of several reversible, intraframe compression methods is compared by applying them to angiographic and magnetic resonance (MR) images. Reversible data compression involves two consecutive steps: decorrelation and coding. The result of the decorrelation step is presented in terms of entropy. Because Huffman coding generally approximates these entropy measures within a few percent, coding has not been investigated separately. It appears that a hierarchical decorrelation method based on interpolation (HINT) outperforms all other methods considered. The compression ratio is around 3 for angiographic images of 8-9 b/pixel, but is considerably less for MR images whose noise level is substantially higher.

5.
An adaptive image-coding algorithm for compression of medical ultrasound (US) images in the wavelet domain is presented. First, it is shown that the histograms of the wavelet coefficients of the subbands in US images are heavy-tailed and can be better modelled using the generalised Student's t-distribution. Then, by exploiting these statistics, an adaptive image coder named JTQVS-WV is designed, which unifies the two approaches to image-adaptive coding, rate-distortion (R-D) optimised quantiser selection and R-D optimal thresholding, and is based on a varying-slope quantisation strategy. The use of a varying-slope quantisation strategy (instead of a fixed R-D slope) allows the wavelet coefficients across the various scales to be coded according to their importance for the quality of the reconstructed image. The experimental results show that the varying-slope quantisation strategy leads to a significant improvement in the compression performance of JTQVS-WV over the state-of-the-art coders SPIHT and JPEG2000 and over the fixed-slope variant of JTQVS-WV, named JTQ-WV. For example, coding US images at 0.5 bpp yields a peak signal-to-noise ratio gain of >0.6, 3.86, and 0.3 dB over the benchmark SPIHT, JPEG2000, and JTQ-WV, respectively.

6.
Simpler versions of a previously introduced adaptive entropy-coded predictive vector quantization (PVQ) scheme where the embedded entropy constrained vector quantizer (ECVQ) is replaced by a pruned tree-structured VQ (PTSVQ) are described. The resulting encoding scheme is shown to result in drastically reduced complexity at only a small cost in performance. Coding results for selected real-world images are given.

7.
Differentiation applied to lossless compression of medical images
Lossless compression of medical images using a proposed differentiation technique is explored. This scheme is based on computing weighted differences between neighboring pixel values. The performance of the proposed approach for the lossless compression of magnetic resonance (MR) and ultrasonic images is evaluated and compared with the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. The residue sequence of these techniques is coded using arithmetic coding. The proposed scheme yields compression measures, in terms of bits per pixel, that are comparable with or lower than those obtained using the linear predictor and the lossless JPEG standard, respectively, for 8-bit medical images. The advantages of the differentiation technique presented here over the linear predictor are: 1) the coefficients of the differentiator are known to both the encoder and the decoder, which eliminates the need to compute or encode these coefficients, and 2) the computational complexity is greatly reduced. These advantages are particularly attractive for real-time compression and decompression of medical images.
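The following hedged sketch illustrates the general idea of a fixed weighted-difference predictor whose coefficients are known to both encoder and decoder; the specific W + N − NW combination is a common illustrative choice, not necessarily the paper's coefficients, and the arithmetic coding stage is omitted.

```python
import numpy as np

def differentiation_residual(img):
    """Residual image from a fixed weighted-difference predictor.
    Each pixel is predicted by a fixed combination of its causal
    neighbours (here W + N - NW, an illustrative assumption), so no
    predictor coefficients need to be computed or transmitted."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    pred[1:, 1:] = img[1:, :-1] + img[:-1, 1:] - img[:-1, :-1]  # W + N - NW
    pred[0, 1:] = img[0, :-1]      # first row: predict from the left pixel
    pred[1:, 0] = img[:-1, 0]      # first column: predict from the pixel above
    return img - pred              # residuals that would be arithmetic coded
```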

8.
Differential pulse-code modulation (DPCM) encoding of Gaussian autoregressive (AR) sequences is considered. It is pointed out that DPCM is rate-distortion inefficient at low bit rates. Simple filtering modifications are proposed and incorporated into DPCM. A rate-distortion optimization framework that results in optimal filters is presented. It is shown that the designed filters take advantage of “less significant” process spectral components in order to achieve superior rate-distortion performance. Design equations are derived, and issues related to optimization and complexity are addressed. It is shown that simple DPCM systems with the proposed modifications significantly outperform their standard counterparts.
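For reference, the sketch below shows the standard closed-loop DPCM structure that such filtering modifications build on; the first-order predictor coefficient, quantizer step, and function name are illustrative assumptions, and the paper's rate-distortion-optimized filters are not included.

```python
import numpy as np

def dpcm_encode(x, a=0.95, step=0.5):
    """Baseline closed-loop DPCM of a first-order AR sequence.
    A first-order linear predictor with coefficient `a` and a uniform
    quantizer of step `step` (both illustrative values) are used; the
    optimized pre/post filters of the paper are not reproduced."""
    x = np.asarray(x, dtype=float)
    recon_prev = 0.0
    indices, recon = [], []
    for sample in x:
        pred = a * recon_prev
        e = sample - pred
        q = int(np.round(e / step))   # quantizer index to be entropy coded
        r = pred + q * step           # decoder-side reconstruction
        indices.append(q)
        recon.append(r)
        recon_prev = r
    return indices, np.array(recon)
```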

9.
Presents a fast DPCM coder built around a ROM of feasible size, in which an elementary form of tree encoding is integrated, to improve the quality of the decoded picture. Simulations illustrate the improvements of this technique in comparison with normal DPCM encoding  相似文献   

10.
A novel technique is presented to compress medical data employing two or more mutually nonorthogonal transforms. Both lossy and lossless compression implementations are considered. The signal is first resolved into subsignals such that each subsignal is compactly represented in a particular transform domain. An efficient lossy representation of the signal is achieved by superimposing the dominant coefficients corresponding to each subsignal. The residual error, which is the difference between the original signal and the reconstructed signal, is properly formulated. Adaptive algorithms in conjunction with an optimization strategy are developed to minimize this error. Both two-dimensional (2-D) and three-dimensional (3-D) approaches for the technique are developed. It is shown that, for a given number of retained coefficients, the discrete cosine transform (DCT)-Walsh mixed transform yields a more compact representation than using the DCT or Walsh transform alone. This lossy technique is further extended to the lossless case. The coefficients are quantized and the signal is reconstructed. The resulting reconstructed signal samples are rounded to the nearest integer and the modified residual error is computed. This error is transmitted using a lossless technique such as Huffman coding. It is shown that, for a given number of retained coefficients, the mixed transforms again produce a smaller rms modified residual error. The first-order entropy of the error is also smaller for the mixed-transforms technique than for the DCT, thus resulting in shorter Huffman codes.
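A toy sketch of the mixed-transform idea is given below: the dominant DCT coefficients are kept first, and the remaining residual is then approximated with the dominant Walsh-Hadamard coefficients. The greedy two-stage selection, coefficient counts, and function name are assumptions for illustration; the paper's adaptive optimization strategy is not reproduced.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import hadamard

def mixed_dct_walsh(x, k_dct=8, k_wht=8):
    """Greedy mixed-transform approximation of a 1-D signal whose length
    is a power of two.  Keep the k_dct largest DCT coefficients, then
    approximate the residual with the k_wht largest Walsh-Hadamard
    coefficients.  Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Stage 1: dominant DCT coefficients
    c = dct(x, norm='ortho')
    keep = np.argsort(np.abs(c))[-k_dct:]
    c_kept = np.zeros(n)
    c_kept[keep] = c[keep]
    approx = idct(c_kept, norm='ortho')
    # Stage 2: dominant Walsh-Hadamard coefficients of the residual
    H = hadamard(n) / np.sqrt(n)            # orthonormal, symmetric WHT matrix
    w = H @ (x - approx)
    keep = np.argsort(np.abs(w))[-k_wht:]
    w_kept = np.zeros(n)
    w_kept[keep] = w[keep]
    approx += H @ w_kept                    # inverse WHT (H is its own inverse here)
    return approx                           # lossy reconstruction; x - round(approx) could be coded losslessly
```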

11.
Presents new methods for lossless predictive coding of medical images using two-dimensional multiplicative autoregressive models. Both single-resolution and multi-resolution schemes are presented. The performance of the proposed schemes is compared with that of four existing techniques. The experimental results clearly indicate that the proposed schemes achieve higher compression than the lossless image coding techniques considered.

12.
Compression of multispectral remote-sensing images must exploit both the intra-band and inter-band correlation of the imagery. Based on an analysis of the intra-band and inter-band correlation characteristics of multispectral images, this paper proposes a hybrid compression algorithm for multispectral remote-sensing images that combines piecewise DPCM with SPIHT: a piecewise DPCM algorithm is first used to remove inter-band redundancy, and the prediction-error images are then encoded with the efficient SPIHT wavelet compression algorithm. Experiments yield satisfactory results, demonstrating the effectiveness of the algorithm.
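A minimal sketch of the inter-band decorrelation step might look as follows, assuming each band is predicted from the previous band with a per-block gain as a stand-in for the piecewise DPCM; the block size, gain estimate, and function name are assumptions, and the SPIHT coding of the error images is not shown.

```python
import numpy as np

def interband_prediction_errors(cube, gain_blocks=8):
    """Inter-band DPCM on a multispectral cube shaped (bands, rows, cols).
    Each band is predicted from the previous band with a per-block
    least-squares gain (side information in a real coder); the error
    images would then be handed to a wavelet coder such as SPIHT."""
    cube = cube.astype(np.float64)
    errors = np.empty_like(cube)
    errors[0] = cube[0]                      # first band coded directly
    b, h, w = cube.shape
    bh, bw = max(1, h // gain_blocks), max(1, w // gain_blocks)
    for k in range(1, b):
        for i in range(0, h, bh):
            for j in range(0, w, bw):
                ref = cube[k - 1, i:i + bh, j:j + bw]
                cur = cube[k, i:i + bh, j:j + bw]
                g = (ref * cur).sum() / max((ref * ref).sum(), 1e-12)
                errors[k, i:i + bh, j:j + bw] = cur - g * ref
    return errors
```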

13.
The cloud radio access network (C-RAN) is not only a very important deployment solution for the future RAN but also a core platform for network-centric advanced transmission techniques such as coordinated multi-point transmission and reception and the distributed antenna system. One of the main issues in implementing C-RAN at low cost and high efficiency is the need to reduce the implementation cost of the fronthaul and improve its usage efficiency. To achieve this, this paper proposes near-lossless compression and decompression algorithms for the digital data transported over the fronthaul in C-RAN, where compression is achieved mainly by removing various redundancies in wireless communication signals. Since the proposed algorithms significantly reduce the amount of data that must be transmitted over the fronthaul while maintaining negligible in-band distortion in terms of error vector magnitude (EVM), the number of transmission lines can be reduced or their utilization enhanced. In addition, the algorithms can operate with a minimum compression ratio as well as a constant compression ratio, so real-time processing and fronthaul data multiplexing can be performed easily. Simulations and comparisons have been carried out based on the 3rd Generation Partnership Project Long-Term Evolution system and the Common Public Radio Interface, a publicly available specification widely used to implement the fronthaul. The simulation results confirm that the proposed schemes provide remarkable compression performance with a zero uncoded bit error rate and negligible signal distortion. Finally, the proposed schemes have various parameters that can be adjusted to meet given requirements such as latency, compression ratio, EVM, and complexity, so a smooth trade-off among them can be achieved.

14.
This paper addresses the problem of electrocardiogram (ECG) signal compression, with the goal of providing a simple compression method that outperforms previously proposed methods. Starting from a study of the nature of the ECG signal, a way has been found to optimize the rate-quality trade-off of the ECG signal by means of differential pulse code modulation (DPCM) and subframe-by-subframe processing. In particular, the proposed method includes two kinds of adaptation: short-time and long-time. The switched quantization, i.e., the short-time adaptation of the DPCM quantizer range, is performed according to the statistics of the ECG signal within particular subframes. It is ascertained that the short-time adaptation enables fine compression control as well as constant quality of the ECG signal in segments of both low-amplitude and high-amplitude dynamics. In addition, by grouping the subframes of a particular frame into two groups according to their dynamics and performing a long-time adaptation of the DPCM quantizer range based on the statistics of the groups, it is shown that an important quality gain is achieved with an insignificant rate increase. Moreover, the two iterative approaches proposed in the paper differ mainly in whether the long-time range adaptation of the DPCM quantizers is performed according to the maximum amplitudes or according to the average powers of the signal difference determined over all subframes within a given group. The benefits of both approaches are shown and discussed in the paper.
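The short-time adaptation can be sketched roughly as below, where the DPCM quantizer range is switched per subframe according to the local difference statistics; the subframe length, level count, range statistic, and function name are illustrative assumptions, and the long-time grouping of subframes is omitted.

```python
import numpy as np

def subframe_adaptive_dpcm(ecg, subframe_len=64, levels=32):
    """DPCM of an ECG signal with per-subframe quantizer range switching.
    Within each subframe the uniform quantizer range is set from the
    largest first-difference magnitude (side information per subframe),
    so low- and high-dynamics segments keep comparable relative quality."""
    ecg = np.asarray(ecg, dtype=float)
    coded, ranges = [], []
    prev = ecg[0]
    for s in range(0, len(ecg), subframe_len):
        frame = ecg[s:s + subframe_len]
        rng = max(np.max(np.abs(np.diff(np.concatenate(([prev], frame))))), 1e-9)
        step = 2 * rng / levels
        ranges.append(rng)                   # per-subframe range, sent as side information
        for x in frame:
            q = int(np.clip(np.round((x - prev) / step), -levels // 2, levels // 2))
            coded.append(q)
            prev = prev + q * step           # decoder-side reconstruction
    return coded, ranges
```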

15.
A new efficient method, based on a differential PCM technique, for encoding the leaves of a pruned quadtree is presented. Solutions to the problems of causality maintenance during tree scanning-encoding and of fast neighbour finding are described. The result is a constant wordlength code with a remarkable rate-distortion performance.

16.
We investigate the ability to derive meaningful information from decompressed imaging spectrometer data. Hyperspectral images are compressed with near-lossless and lossy coding methods. Linear prediction between the bands is used in both cases: each band is predicted from a previously transmitted band. The residual is formed by subtracting the prediction from the original data and is then compressed either with a near-lossless bit-plane coder or with the lossy JPEG2000 algorithm. We study the effects of these two types of compression on hyperspectral image processing tasks such as mineral and vegetation content classification using whole- and mixed-pixel analysis techniques. The results presented in this paper indicate that an efficient lossy coder outperforms the near-lossless method in terms of its impact on final hyperspectral data applications.

17.
The authors investigate whether data representing medical image sequences can be compressed more efficiently by taking into account the temporal correlation between the sequence frames. The standard of comparison is intraframe HINT, the best-known reversible decorrelation method for 2-D images. In interframe decorrelation, a distinction is made between extrapolation- and interpolation-based methods, and between methods based on local motion estimation, block motion estimation, and unregistered decorrelation. These distinctions give six classes of interframe decorrelation methods, all of which are described. The methods are evaluated by applying them to sequences of coronary X-ray angiograms, ventricle angiograms, and liver scintigrams, as well as to a (nonmedical) videoconferencing image sequence. For the medical image sequences: (1) interpolation-based methods are superior to extrapolation-based methods; (2) estimation of interframe motion is not advantageous for image compression; (3) interframe compression yields entropies comparable to intraframe HINT at higher computational cost; and (4) two methods, unregistered extrapolation and interpolation, are nonetheless possibly interesting alternatives to intraframe HINT.

18.
The authors investigate the use of conditioning events (or contexts) to improve the performance of known compression methods by building a source model with multiple contexts to code the decorrelated pixels. Three methods for reversible compression, namely DPCM (differential pulse code modulation), WHT (Walsh-Hadamard transform), and HINT (hierarchical interpolation), employing predictive, transform, and multiresolution decorrelation, respectively, are considered. It is shown that the performance of these methods can be enhanced significantly, sometimes by up to 40%, by using contexts. The enhanced DPCM method is found to perform best for MR and UT (ultrasound) medical images; the enhanced WHT method is the best for X-ray images. The source models used in the enhanced methods employ several hundred contexts.
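To illustrate why conditioning on contexts can lower the coding cost, the sketch below compares the unconditioned first-order entropy of prediction residuals with the entropy conditioned on a simple activity-based context; the activity measure, quantile-based context formation, and function name are assumptions and are much cruder than the several-hundred-context models used in the paper.

```python
import numpy as np

def contextual_entropy(residuals, activity, n_contexts=8):
    """Compare unconditioned and context-conditioned first-order entropy.
    `residuals` and `activity` are same-length arrays; contexts are formed
    from quantiles of the activity measure (illustrative assumption)."""
    def entropy(vals):
        _, counts = np.unique(vals, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    residuals = np.asarray(residuals).ravel()
    activity = np.asarray(activity).ravel()
    edges = np.quantile(activity, np.linspace(0, 1, n_contexts + 1)[1:-1])
    ctx = np.digitize(activity, edges)
    cond = sum(entropy(residuals[ctx == c]) * np.mean(ctx == c)
               for c in np.unique(ctx))
    return entropy(residuals), cond          # unconditioned vs. conditional entropy (bits/sample)
```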

19.
A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, a comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of the signal-to-noise ratio and the visual quality.
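A minimal sketch of the quantizer adaptation is shown below: the zero zone of a uniform threshold quantizer is widened in proportion to an estimated speckle noise level, so small (mostly noise) coefficients are suppressed while larger ones are quantized. The proportionality factor, mid-point reconstruction, and function name are assumptions; the paper derives the zone and reconstruction levels from the generalized Laplacian model and the target rate.

```python
import numpy as np

def deadzone_quantize(coeffs, step, noise_sigma, zone_scale=3.0):
    """Uniform threshold (dead-zone) quantization of wavelet coefficients.
    Coefficients inside the widened zero zone are set to zero (despeckling);
    larger coefficients are uniformly quantized and reconstructed at the
    bin mid-point.  Illustrative sketch only."""
    coeffs = np.asarray(coeffs, dtype=float)
    zero_zone = zone_scale * noise_sigma                      # zone widened with the noise level
    indices = np.where(np.abs(coeffs) <= zero_zone, 0,
                       np.sign(coeffs) * np.floor((np.abs(coeffs) - zero_zone) / step + 1))
    recon = np.where(indices == 0, 0.0,
                     np.sign(indices) * (zero_zone + (np.abs(indices) - 0.5) * step))
    return indices.astype(int), recon
```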

20.
An algorithm is described for filtering streak noise in corrupted DPCM (differential pulse code modulated) images. In this scheme, the corrupted line is detected using the Wilcoxon-Mann-Whitney nonparametric test. This test mechanism requires no PCM-update multiplexing. The corrupted line is then filtered using a likelihood ratio test and data from the previous line. Typical experimental results are provided.
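The detection step can be sketched as follows, assuming each image line is compared with the preceding line using SciPy's two-sided Mann-Whitney U test and flagged when the p-value falls below a threshold; the threshold and function name are assumptions, and the likelihood-ratio filtering stage is not shown.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def detect_streak_lines(img, alpha=0.01):
    """Flag image lines whose samples differ significantly from the
    previous line, as candidate streak-corrupted lines.  Detection
    sketch only; the subsequent filtering step is not implemented."""
    flagged = []
    for i in range(1, img.shape[0]):
        _, p = mannwhitneyu(img[i], img[i - 1], alternative='two-sided')
        if p < alpha:
            flagged.append(i)
    return flagged
```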
