Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Image compression systems that exploit the properties of the human visual system have been studied extensively over the past few decades. For the JPEG2000 image compression standard, all previous methods that aim to optimize perceptual quality have considered the irreversible pipeline of the standard. In this work, we propose an approach for the reversible pipeline of the JPEG2000 standard. We introduce a new methodology for measuring the visibility of quantization errors when reversible color and wavelet transforms are employed. Incorporating the resulting visibility thresholds into a JPEG2000 encoder enables the creation of scalable codestreams that provide both near-threshold and numerically lossless representations, which is desirable in applications where restoration of the original image samples is required. Most importantly, this is the first work to quantify the bitrate penalty incurred by the reversible transforms, compared to the irreversible transforms, in near-threshold image compression.

2.
We propose a novel symmetry-based technique for scalable lossless compression of 3D medical image data. The proposed method employs the 2D integer wavelet transform to decorrelate the data and an intraband prediction method that reduces the energy of the sub-bands by exploiting the anatomical symmetries typically present in structural medical images. A modified version of the embedded block coder with optimized truncation (EBCOT), tailored to the characteristics of the data, encodes the residual data generated after prediction to provide resolution and quality scalability. Performance evaluations on a wide range of real 3D medical images show an average improvement of 15% in lossless compression ratios over other state-of-the-art lossless compression methods that also provide resolution and quality scalability, including 3D-JPEG2000, JPEG2000, and H.264/AVC intra-coding.
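The symmetry-driven intraband prediction described above can be illustrated with a small sketch: predict one half of a sub-band as the mirror image of the other half and keep only the residual. This is a minimal numpy illustration of the idea, not the paper's EBCOT-based codec; the synthetic sub-band and the vertical symmetry axis are assumptions.

```python
import numpy as np

# Toy array standing in for a wavelet sub-band of a medical slice
# (assumed roughly symmetric about the vertical midline, as in many
# structural medical images).
rng = np.random.default_rng(0)
left = rng.integers(-50, 50, size=(8, 4))
band = np.hstack([left, left[:, ::-1] + rng.integers(-2, 3, size=(8, 4))])

# Intraband prediction: predict the right half as the mirror of the left.
half = band.shape[1] // 2
predicted_right = band[:, :half][:, ::-1]
residual = band[:, half:] - predicted_right

# If the symmetry holds, the residual has far less energy than the
# original half, so it codes more cheaply.
print("energy of right half:", np.sum(band[:, half:] ** 2))
print("energy of residual:  ", np.sum(residual ** 2))

# Lossless reconstruction: mirror the left half and add the residual back.
reconstructed = predicted_right + residual
assert np.array_equal(reconstructed, band[:, half:])
```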

3.
Current gene-expression microarrays carry enormous amounts of information. Compression is necessary for efficient distribution and storage. This paper examines JPEG2000 compression of cDNA microarray images and addresses the accuracy of classification and feature selection based on decompressed images. Among other options, we choose JPEG2000 because it is the latest international standard for image compression and offers lossy-to-lossless compression while achieving high lossless compression ratios on microarray images. The performance of JPEG2000 has been tested on three real data sets at different compression ratios, ranging from lossless to 45:1. The effects of JPEG2000 compression/decompression on differential expression detection and phenotype classification have been examined. There is less than a 4% change in differential detection at compression rates as high as 20:1, with detection accuracy suffering less than 2% for moderate to high intensity genes, and there is no significant effect on classification at rates as high as 35:1. The supplementary material is available at .

4.
The use of microarray expression data in state-of-the-art biology is well established. The widespread adoption of this technology, coupled with the large volume of image data generated per experiment, has led to significant challenges in storage and retrieval. In this paper, we present a lossless bitplane-based method for efficient compression of microarray images. The method is based on arithmetic coding driven by image-dependent multi-bitplane finite-context models. It produces an embedded bitstream that allows progressive, lossy-to-lossless decoding. We compare the compression efficiency of the proposed method with three image compression standards (JPEG2000, JPEG-LS, and JBIG) and with the two most recent specialized methods for microarray image coding. The proposed method gives better results for all images of the test sets and confirms the effectiveness of bitplane-based methods and finite-context modeling for the lossless compression of microarray images.
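A minimal sketch of the bitplane view that drives such coders: split an 8-bit image into its bitplanes, most significant first, so each binary plane can be fed to an arithmetic coder with its own context model. The arithmetic coder itself is omitted, and the array below is a synthetic stand-in for a microarray image.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Decompose into bitplanes, most significant (bit 7) first.  Each plane is
# a binary image; a bitplane coder compresses them one by one, which is
# what makes progressive, lossy-to-lossless decoding possible.
planes = [(image >> bit) & 1 for bit in range(7, -1, -1)]

# Reassembling all planes recovers the image exactly (lossless);
# stopping early yields a coarser, lossy approximation.
recon = np.zeros_like(image)
for bit, plane in zip(range(7, -1, -1), planes):
    recon |= (plane << bit).astype(np.uint8)
assert np.array_equal(recon, image)
```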

5.
An image compression method based on context modeling of wavelet coefficients
A novel image compression method based on context modeling of wavelet coefficients is proposed. The method forms a context by quantizing a linear prediction of the current coefficient and then performs adaptive arithmetic coding. It also exploits the multiresolution property of the wavelet transform to compress the image in a resolution-progressive manner, providing resolution scalability. Experimental results show that the method achieves higher lossless compression ratios than SPIHT and the EBCOT used in JPEG2000, higher compression ratios than EBCOT at every resolution, and shorter compression times than EBCOT.
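A rough sketch of the context-formation step described above: linearly predict the current wavelet coefficient from causal neighbors, then quantize the prediction into a small context index that would select the probability model of an adaptive arithmetic coder. The neighborhood, equal weights, and quantizer thresholds are illustrative assumptions; the arithmetic coder is not shown.

```python
import numpy as np

def context_index(coeffs, i, j, thresholds=(2, 8, 32)):
    """Quantize a linear prediction of coefficient (i, j) into a context index.

    The causal neighbors (left, above, above-left) and the equal weights
    are assumptions for illustration only.
    """
    pred = (abs(coeffs[i, j - 1]) + abs(coeffs[i - 1, j])
            + abs(coeffs[i - 1, j - 1])) / 3.0
    # Quantize the predicted magnitude into one of len(thresholds)+1 bins;
    # each bin selects a separate adaptive probability model.
    return int(np.searchsorted(thresholds, pred))

rng = np.random.default_rng(2)
subband = rng.laplace(scale=4.0, size=(8, 8))  # wavelet coeffs are roughly Laplacian
for i in range(1, 4):
    for j in range(1, 4):
        print((i, j), "-> context", context_index(subband, i, j))
```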

6.
The interest in methods that can efficiently compress microarray images is relatively new. This is not surprising, since the appearance and fast growth of the technology responsible for producing these images is also quite recent. In this paper, we present a set of compression results obtained with 49 publicly available images, using three image coding standards: lossless JPEG2000, JBIG, and JPEG-LS. We conclude that the compression technology behind JBIG offers the best combination of compression efficiency and flexibility for microarray image compression.

7.
Region-based wavelet coding methods for digital mammography
Spatial resolution and contrast sensitivity requirements for some medical imaging techniques, including mammography, delay the implementation of new digital technologies, namely computer-aided diagnosis, picture archiving and communication systems, and teleradiology. To reduce transmission time and storage cost, an efficient data-compression scheme that reduces digital data without significant degradation of medical image quality is needed. In this study, we applied two region-based compression methods to digital mammograms. In both methods, after segmenting the breast region, a region-based discrete wavelet transform is applied, followed by an object-based extension of the set partitioning in hierarchical trees (OB-SPIHT) coding algorithm in one method and an object-based extension of the set partitioned embedded block (OB-SPECK) coding algorithm in the other. We compared these implementations against the original SPIHT and the new standard JPEG 2000, using both reversible and irreversible filters, on five digital mammograms compressed at rates ranging from 0.1 to 1.0 bit per pixel (bpp). Distortion was evaluated for all images and compression rates by the peak signal-to-noise ratio. For all images, OB-SPIHT and OB-SPECK performed substantially better than traditional SPIHT and JPEG 2000, with only a slight difference in performance between them. A comparison applying SPIHT and the standard JPEG 2000 to the same set of images with the background pixels set to zero was also carried out, obtaining performance similar to the region-based methods. For digital mammography, region-based compression methods improve compression efficiency over full-image methods, while also providing the possibility of encoding multiple regions of interest independently.

8.
Dictionary design for text image compression with JBIG2
The JBIG2 standard for lossy and lossless bilevel image coding is a very flexible encoding strategy based on pattern matching techniques. This paper addresses the problem of compressing text images with JBIG2. For text image compression, JBIG2 allows two encoding strategies: SPM and PM&S. We compare in detail the lossless and lossy coding performance of SPM-based and PM&S-based JBIG2, including coding efficiency, reconstructed image quality, and system complexity. For SPM-based JBIG2, we discuss the bit-rate tradeoff associated with symbol dictionary design and propose two symbol dictionary design techniques: class-based and tree-based. Experiments show that SPM-based JBIG2 is the more efficient lossless system, leading to compression ratios that are 8% higher on average. It also provides better control over the reconstructed image quality in lossy compression. However, SPM's advantages come at the price of higher encoder complexity. The proposed class-based and tree-based symbol dictionary designs outperform simpler dictionary formation techniques by 8% for lossless and 16-18% for lossy compression.

9.
Considering the particular requirements of medical image compression, several representative medical image compression methods are studied, including the lossy JPEG family, the lossless JPEG2000 family, and template-matching compression. Each method is simulated in MATLAB, and the experimental results are compared to identify the respective advantages and disadvantages of each method. Finally, drawing on these methods, a template-based compression method built around regions of interest (ROI) is proposed.

10.
JPEG2000: standard for interactive imaging
JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG), working under the auspices of the International Organization for Standardization. Although the new standard does offer superior compression performance to JPEG, its real novelty is a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet.

11.
The JPEG2000 still-image compression algorithm has many desirable properties, but its core algorithm, EBCOT, is structurally complex and difficult to implement in hardware. Image compression based on the SPIHT algorithm approaches the efficiency of EBCOT while being structurally simple and easy to implement in hardware. By combining SPIHT with JPEG2000, a still-image coding scheme is proposed that offers high compression ratios, lossy-to-lossless operation, and progressive bitstream transmission. The integration of the SPIHT algorithm with the (9,7) lifting wavelet transform is analyzed, and the resulting system performs close to JPEG2000, showing good application prospects.
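For a feel of the decomposition such a SPIHT-based coder operates on, this sketch computes a three-level 9/7 wavelet pyramid with PyWavelets, whose 'bior4.4' filters are commonly used as the CDF 9/7 pair; it stands in for the lifting implementation, and the SPIHT tree coding itself is omitted.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

# Three-level 9/7-style decomposition ('bior4.4' in PyWavelets); this
# filter-bank call stands in for the (9,7) lifting transform the paper
# combines with SPIHT.
coeffs = pywt.wavedec2(image, 'bior4.4', level=3)

# SPIHT scans this pyramid from the coarse approximation down the
# parent-child trees, emitting significance bits per bitplane.
approx, details = coeffs[0], coeffs[1:]
print("approximation:", approx.shape)
for lvl, (ch, cv, cd) in enumerate(details, start=1):
    # (horizontal, vertical, diagonal) detail sub-bands, coarsest first
    print(f"level {lvl} detail sub-bands:", ch.shape, cv.shape, cd.shape)
```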

12.
Down-scaling for better transform compression
The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, at low bit rates, which imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the full-resolution image compressed via JPEG with the same number of bits. Motivated by this observation, we show how down-sampling an image to a lower resolution, applying JPEG at that resolution, and then interpolating the result back to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression, and up-sampling process that makes explicit the possible quality/compression trade-offs. We show that the image autocorrelation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to obtain the best possible recovered image in terms of PSNR.
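The down-sample / JPEG / up-sample pipeline is easy to prototype. This sketch uses Pillow and numpy to compare the PSNR of direct JPEG coding against coding at half resolution followed by bicubic interpolation; the compressed file sizes stand in for the bit budget, and the quality settings and 2x scale factor are illustrative assumptions, not the paper's analytically derived optimum.

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def jpeg_roundtrip(img, quality):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    size = buf.tell()
    buf.seek(0)
    return Image.open(buf), size  # decoded image, compressed bytes

# Synthetic test image; substitute a real photograph for meaningful numbers.
rng = np.random.default_rng(4)
base = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
original = Image.fromarray(base, mode="L")

# Path A: JPEG at full resolution with a low quality setting.
direct, direct_bytes = jpeg_roundtrip(original, quality=10)

# Path B: down-sample 2x, JPEG at a higher quality, interpolate back up.
small = original.resize((128, 128), Image.LANCZOS)
coded_small, down_bytes = jpeg_roundtrip(small, quality=35)
upsampled = coded_small.resize((256, 256), Image.BICUBIC)

print(f"direct:     {direct_bytes} bytes, PSNR {psnr(base, np.asarray(direct)):.2f} dB")
print(f"down/up 2x: {down_bytes} bytes, PSNR {psnr(base, np.asarray(upsampled)):.2f} dB")
```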

13.
Most full-reference fidelity/quality metrics compare the original image to a distorted image at the same resolution, assuming a fixed viewing condition. However, in many applications, such as video streaming, the diversity of channel capacities and display devices means that the viewing distance and the spatiotemporal resolution of the displayed signal may be adapted in order to optimize the perceived signal quality. For example, in low-bitrate coding applications an observer may prefer to reduce the resolution or increase the viewing distance to reduce the visibility of the compression artifacts. The tradeoff between resolution/viewing conditions and visibility of compression artifacts requires new approaches to the evaluation of image quality that account for both image distortions and image size. In order to better understand such tradeoffs, we conducted subjective tests using two representative still-image coders, JPEG and JPEG 2000. Our results indicate that an observer would indeed prefer a lower spatial resolution (at a fixed viewing distance) in order to reduce the visibility of the compression artifacts, but not all the way to the point where the artifacts are completely invisible. Moreover, the observer is willing to accept more artifacts as the image size decreases. The subjective test results we report can be used to select viewing conditions for coding applications. They also set the stage for the development of novel fidelity metrics. The focus of this paper is on still images, but similar tradeoffs are expected to apply to video.

14.
Recent advances in imaging technology make it possible to obtain remotely sensed imagery of the Earth at high spatial, spectral, and radiometric resolutions. The rate at which data is collected by these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. The authors focus on the case where radiometric resolution is sacrificed by dropping a specified number of lower-order bits (LOBs) from each data pixel. To limit the number of LOBs dropped, they also compress the remaining bits using lossless compression. They call this approach "truncation followed by lossless compression," or TLLC. They then demonstrate the suboptimality of TLLC by comparing it with the direct application of a more effective lossy compression technique based on the JPEG algorithm. This comparison demonstrates that, for a given channel rate, the method based on JPEG lossy compression preserves radiometric resolution better than TLLC does.
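The TLLC baseline is simple to express: drop a fixed number of lower-order bits per pixel, then losslessly compress what remains. A sketch under stated assumptions: zlib stands in for the lossless coder, and the 12-bit "radiometric" data is synthetic.

```python
import zlib
import numpy as np

rng = np.random.default_rng(5)
# Smooth synthetic 12-bit "radiometric" data stored in uint16.
ramp = np.add.outer(np.arange(256), np.arange(256)) * 8
image = (ramp + rng.integers(0, 16, size=(256, 256))).astype(np.uint16)

for dropped_bits in range(0, 5):
    # Truncation: discard the lowest-order bits, sacrificing radiometric
    # resolution...
    truncated = image >> dropped_bits
    # ...followed by lossless compression of the remaining bits (TLLC).
    payload = zlib.compress(truncated.tobytes(), level=9)
    bpp = 8 * len(payload) / image.size
    print(f"drop {dropped_bits} LOBs -> {bpp:.2f} bits/pixel")
```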

15.
Reversible integer wavelet transforms are increasingly popular in lossless image compression, as evidenced by their use in the recently developed JPEG2000 image coding standard. In this paper, a projection-based technique is presented for decreasing the first-order entropy of transform coefficients and improving the lossless compression performance of reversible integer wavelet transforms. The projection technique is developed and used to predict a wavelet transform coefficient as a linear combination of other wavelet transform coefficients. It yields optimal fixed prediction steps for lifting-based wavelet transforms and unifies many wavelet-based lossless image compression results found in the literature. Additionally, the projection technique is used in an adaptive prediction scheme that varies the final prediction step of the lifting-based transform based on a modeling context. Compared to current fixed and adaptive lifting-based transforms, the projection technique produces improved reversible integer wavelet transforms with superior lossless compression performance. It also provides a generalized framework that explains and unifies many previous results in wavelet-based lossless image compression.
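The flavor of the prediction step can be seen in integer lifting itself: odd samples are predicted as a linear combination of even neighbors, and only the residual is kept. This sketch shows the fixed 5/3 integer lifting predict/update steps in 1D, the kind of fixed prediction step the paper's projection technique optimizes; it is not the projection method itself, and it uses wrap-around boundaries for brevity instead of symmetric extension.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible 5/3 integer lifting transform (1D).

    Predict: each odd sample is predicted from its even neighbors;
    Update: even samples absorb part of the detail so the low-pass
    keeps the signal's mean. Floor division keeps everything integer
    and exactly invertible.
    """
    even, odd = x[0::2].copy(), x[1::2].copy()
    detail = odd - ((even + np.roll(even, -1)) // 2)          # predict step
    approx = even + ((np.roll(detail, 1) + detail + 2) // 4)  # update step
    return approx, detail

def lifting_53_inverse(approx, detail):
    # Undo the steps in reverse order with the same expressions.
    even = approx - ((np.roll(detail, 1) + detail + 2) // 4)
    odd = detail + ((even + np.roll(even, -1)) // 2)
    out = np.empty(even.size + odd.size, dtype=even.dtype)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 12, 15, 13, 9, 8, 11, 14])
a, d = lifting_53_forward(x)
assert np.array_equal(lifting_53_inverse(a, d), x)  # perfectly reversible
print("approx:", a, "detail:", d)
```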

16.
纪强, 石文轩, 田茂, 常帅. 《红外与激光工程》 (Infrared and Laser Engineering), 2016, 45(2): 228004 (7 pages)
Since satellite remote sensing images have increasingly high spatial and spectral resolution, multispectral images often need to be compressed in some applications. To improve the compression quality of multispectral images, an image registration method combining phase correlation and affine transformation is proposed, which effectively increases the correlation between spectral bands. For multispectral image compression, a method is proposed that combines the Karhunen-Loève (KL) transform, to remove inter-band correlation, with embedded two-dimensional wavelet coding. Compared with independent JPEG2000 compression of each spectral band, the proposed method improves the peak signal-to-noise ratio (PSNR) of the decompressed images by 2.1 dB on average. Experimental results show that the proposed method achieves better image quality than independent JPEG2000 band-by-band compression at the same compression ratio.
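Inter-band decorrelation via the KL transform amounts to projecting the spectral vectors onto the eigenvectors of their covariance matrix; each decorrelated component would then go through the 2D wavelet coder. A numpy sketch with synthetic correlated bands (the band count and correlation model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
h, w, bands = 64, 64, 4

# Synthetic multispectral cube: four bands sharing one spatial structure,
# which makes them strongly correlated (as well-registered bands would be).
common = rng.normal(size=(h, w))
cube = np.stack([common * g + rng.normal(scale=0.1, size=(h, w))
                 for g in (1.0, 0.9, 0.8, 0.7)], axis=-1)

# KL transform across the spectral axis: the eigenvectors of the band
# covariance matrix give the decorrelating basis.
pixels = cube.reshape(-1, bands)
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
_, eigvecs = np.linalg.eigh(cov)
kl = (pixels - mean) @ eigvecs  # decorrelated spectral components

print("band covariance (off-diagonal energy):",
      np.sum(np.abs(cov - np.diag(np.diag(cov)))).round(3))
kl_cov = np.cov(kl, rowvar=False)
print("KL covariance (off-diagonal energy):  ",
      np.sum(np.abs(kl_cov - np.diag(np.diag(kl_cov)))).round(6))
# Each KL component image (kl[:, k].reshape(h, w)) would then be coded
# with the embedded 2D wavelet coder.
```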

17.
Spaceborne hyperspectral remote sensing image coding systems require a certain compression ratio, lossless operation, and low complexity. More importantly, the compression algorithm should minimize the error propagation caused by bit errors in the codestream. Based on these requirements, this paper proposes a variable-length run-length coding algorithm that is easy to implement in hardware, has low complexity, and is robust to bit errors. It meets the compression requirements of this class of images while confining the propagation of codestream bit errors to a limited number of columns, thereby greatly reducing the loss of ground-pixel information. Experimental results show that the algorithm has good error resilience, with compression efficiency slightly lower than JPEG-LS and JPEG2000. The algorithm is therefore well suited to spaceborne hyperspectral image compression and can also be used for other lossless image compression over wireless transmission.
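The error-confinement idea can be sketched as run-length coding each image column independently, so a corrupted codeword desynchronizes only its own column's decoder rather than the whole frame. A minimal pure-Python illustration; the (value, run-length) codeword format is an assumption, not the paper's actual variable-length code.

```python
import numpy as np

def rle_encode_column(col):
    """Run-length encode one column as (value, run_length) pairs."""
    runs, start = [], 0
    for i in range(1, len(col) + 1):
        if i == len(col) or col[i] != col[start]:
            runs.append((int(col[start]), i - start))
            start = i
    return runs

def rle_decode_column(runs):
    out = []
    for value, length in runs:
        out.extend([value] * length)
    return np.array(out)

rng = np.random.default_rng(7)
image = np.repeat(rng.integers(0, 4, size=(8, 6)), 2, axis=0)  # runs along columns

# Encoding columns independently confines any bit error to one column:
# damage to one column's stream cannot spread to its neighbors.
encoded = [rle_encode_column(image[:, j]) for j in range(image.shape[1])]
decoded = np.stack([rle_decode_column(r) for r in encoded], axis=1)
assert np.array_equal(decoded, image)
print("runs per column:", [len(r) for r in encoded])
```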

18.
Lossless image compression with multiscale segmentation

19.
JPEG2000 is known as an efficient standard for encoding images. However, at very low bit-rates, artifacts and distortions can be observed in decoded images. In order to improve the visual quality of decoded images and make them perceptually acceptable, we propose in this work a new preprocessing scheme. The scheme preprocesses the image to be encoded with a nonlinear filter, as a phase prior to JPEG2000 compression. More specifically, the input image is decomposed into low- and high-frequency sub-images using morphological filtering. Each sub-image is then compressed using JPEG2000, with a different bit-rate assigned to each sub-image. To evaluate the quality of the reconstructed image, two different metrics are used: (a) the peak signal-to-noise ratio, to evaluate the visual quality of the low-frequency sub-image, and (b) the structural similarity index measure, to evaluate the visual quality of the high-frequency sub-image. Experimental results on the reconstructed images show that, at low bit-rates, the proposed scheme provides better visual quality than a direct use of JPEG2000 (without any preprocessing).
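The preprocessing split can be sketched with a grayscale morphological filter: an opening-closing pair yields a smooth low-frequency sub-image, and the high-frequency sub-image is the remainder; each part would then be handed to JPEG2000 at its own bit-rate. This uses scipy.ndimage morphology as a stand-in; the filter choice and structuring-element size are assumptions, and the JPEG2000 encoding step is omitted.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
# Synthetic image: smooth background plus sparse bright detail.
x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
image = 120 * x + 60 * y + 40 * (rng.random((128, 128)) > 0.98)

# Nonlinear (morphological) filtering: an opening followed by a closing
# removes small bright and dark structures, leaving the low-frequency part.
size = (7, 7)  # structuring-element size is an illustrative assumption
low = ndimage.grey_closing(ndimage.grey_opening(image, size=size), size=size)
high = image - low  # high-frequency sub-image carries the residual detail

# Each sub-image would be encoded separately with JPEG2000 at its own
# bit-rate; summing the decoded parts reconstructs the image.
assert np.allclose(low + high, image)
print("low-freq range:", low.min().round(1), low.max().round(1))
print("high-freq energy fraction:",
      (np.sum(high ** 2) / np.sum(image ** 2)).round(4))
```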

20.
JPEG2000 is the new-generation still-image compression standard specified by the International Organization for Standardization (ISO). As an upgrade of JPEG, it is backward compatible, achieves higher compression ratios than JPEG, and introduces a number of new features. This article mainly introduces EBCOT (embedded block coding with optimized truncation), the core algorithm of JPEG2000, and uses it to illustrate the new features of the JPEG2000 compression standard and its advantages over existing compression standards.
