Similar Documents
19 similar documents found (search time: 93 ms)
1.
An OpenMP-Based Parallel JPEG2000 Image Encoding Algorithm   Cited 1 time (1 self-citation, 0 by others)
张娜  邓家先  黄艳 《通信技术》2011,44(4):21-24
JPEG2000 is the new-generation image compression standard, offering high coding efficiency and good performance. However, because it relies on the wavelet transform and bit-plane coding, its encoder is computationally complex and slow. To speed up JPEG2000 encoding, this paper proposes an OpenMP-based parallel JPEG2000 encoding algorithm that parallelizes the discrete wavelet transform and the EBCOT algorithm. Results show that the algorithm greatly improves encoding speed while preserving the desirable properties of JPEG2000, and that the larger the image, the greater the speedup, making JPEG2000 better suited to transmitting large volumes of image data.
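The parallelization described above works because EBCOT Tier-1 coding touches only the wavelet coefficients of its own code block, so blocks can be coded independently. A minimal language-neutral sketch of that idea (the paper uses OpenMP in C/C++; here Python's `concurrent.futures` thread pool stands in, and `encode_block` is a hypothetical placeholder, not a real bit-plane coder):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_block(block):
    # Placeholder for EBCOT Tier-1 bit-plane coding of one code block.
    # Real Tier-1 coding reads only this block's coefficients, which is
    # why code blocks can be processed in parallel with no locking.
    return sum(abs(c) for c in block)  # toy "cost", not a real codestream

def encode_blocks_parallel(blocks, workers=4):
    # Each code block is an independent task, mirroring an OpenMP
    # "parallel for" over the block list.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_block, blocks))

blocks = [[1, -2, 3], [4, 5, -6], [0, 7, 8]]
```

Because the tasks share no state, the result is identical to a sequential loop over `blocks`.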

2.
In the wavelet-domain JPEG2000 compression algorithm, the distortion introduced into reconstructed images is a loss of structural information, which degrades perceived image quality. To address this, the paper proposes a perception-prediction-based rate-control algorithm for JPEG2000 (SIRA). It first presents a parametric model for measuring perceptual quality degradation, then builds a single-ended model that predicts the perceptual quality degradation of a JPEG2000-compressed image before encoding, and finally performs JPEG2000 rate allocation based on this prediction model. Simulation results verify the correctness of the model and the effectiveness of the algorithm.

3.
To address the shortcomings of general-purpose lossy compression algorithms when applied to document screenshots, this paper proposes a practical, efficient compression algorithm tailored to document screen captures. Based on analysis of image content and extraction of image features, it combines a grayscale transform with run-length encoding (RLE), applying different compression strategies to pictures and text, thereby raising the compression ratio while preserving quality. Experimental results show that at the same compression ratio the algorithm performs far better than JPEG and JPEG2000.
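The abstract gives no implementation details; as a minimal sketch of just the RLE component (the grayscale transform and the text/picture segmentation are omitted), run-length coding collapses the long constant runs typical of text scanlines:

```python
def rle_encode(data):
    # Run-length encoding: collapse runs of equal values into (value, count).
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    # Inverse transform: expand each (value, count) pair back into a run.
    return [v for v, n in runs for _ in range(n)]

row = [255] * 6 + [0] * 3 + [255] * 2   # a scanline from a binarized text region
```

On text-like content with long runs the pair list is much shorter than the raw scanline, which is why the paper pairs RLE with a segmentation step that routes only text regions to it.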

4.
ROI Coding in the JPEG2000 Standard   Cited 1 time (0 self-citations, 1 by others)
JPEG2000 is an image and video compression standard recently developed by ISO/ITU-T; its Part I (the core coding system) formally became an international standard in December 2000. Further work is under way to tailor the standard to special applications such as medical imaging and video coding. JPEG2000 provides many features within a single codestream. This paper focuses on its region-of-interest (ROI) coding, that is, encoding an ROI within an image with more detail than the background. It then surveys the three ROI coding methods available in JPEG2000, which encode and decode images with differing spatial quality: tiling, code-block selection, and coefficient scaling, and studies their coding performance empirically.
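Coefficient scaling can be illustrated with integers: in the Maxshift variant standardized in JPEG2000 Part 1, ROI coefficients are shifted up past the largest background magnitude, so every ROI bit plane is coded before any background bit plane and the decoder needs only the shift value, not an ROI mask. A toy sketch (the coefficient values are illustrative):

```python
def maxshift_encode(coeffs, roi_mask):
    # Shift ROI coefficients above the largest background magnitude.
    max_bg = max((abs(c) for c, in_roi in zip(coeffs, roi_mask)
                  if not in_roi), default=0)
    s = max_bg.bit_length()   # smallest shift placing ROI above background
    shifted = [c << s if in_roi else c
               for c, in_roi in zip(coeffs, roi_mask)]
    return shifted, s

def maxshift_decode(shifted, s):
    # The decoder recognizes ROI coefficients purely by magnitude:
    # anything >= 2**s must have been shifted, so shift it back down.
    return [c >> s if abs(c) >= (1 << s) else c for c in shifted]

coeffs = [3, 40, 7, 2]              # quantized wavelet coefficient magnitudes
roi    = [False, True, True, False]
```

Note the decoder never sees `roi`; the shift alone separates the two coefficient populations, which is what makes Maxshift work for arbitrarily shaped regions.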

5.
The coding algorithm of "JPEG2000", the new-generation coding scheme for color still images, has been finalized. At the Tokyo meeting in March 2000, the final committee draft specifying the basic coding system was issued, and the final standard is expected in December 2000. As the successor to JPEG, JPEG2000 targets high compression (low bit rates); under comparable conditions its signal-to-noise ratio is expected to be roughly 30% higher than JPEG's. JPEG2000 also provides five …; where binary images have so far used JBIG and low-compression-ratio cases …, it unifies these approaches into a general-purpose coding scheme covering all kinds of images. The JPEG2000 algorithm is built on the discrete wavelet transform (DWT) and bit-plane arithmetic coding (the MQ coder), whereas JPEG uses the DCT and the QM coder. In addition, this …

6.
An Introduction to the EBCOT Algorithm in JPEG2000   Cited 9 times (0 self-citations, 9 by others)
杨方洲  刘锐 《信息技术》2004,28(5):38-41
EBCOT is the core algorithm of the JPEG2000 standard. It not only compresses images effectively but also produces a codestream with highly desirable properties: resolution scalability, SNR scalability, and random access and processing. This paper describes the main stages of the EBCOT coding algorithm.

7.
Based on the multi-pass scanning of the embedded block coding with optimized truncation (EBCOT) used in JPEG2000, a region-of-interest (ROI) coding algorithm with fractional bit-plane shifting is proposed. Compared with existing bit-plane-shift-based algorithms, the new algorithm controls ROI image quality more precisely, and it also supports multiple ROIs and interactive ROI coding. Simulation results show that the algorithm is markedly effective at controlling ROI image quality.

8.
To resolve the conflict between the data volume of space remote-sensing images and the available channel bandwidth, this paper proposes a region-of-interest (ROI) coding algorithm based on JPEG2000. Mainstream JPEG2000 ROI algorithms struggle to balance ROI quality against computational cost, and at low bit rates they risk losing the background entirely. The proposed algorithm precisely controls the precision of the background coefficients in each subband so that more of the bit budget is allocated to the ROI, and it incorporates human visual characteristics so that the reduced background bitstream yields the best possible visual effect. A very-large-scale integration (VLSI) design for rectangular ROIs is also presented; with minor adjustments it applies to mainstream ROI algorithms as well. Test results show that the algorithm performs well in both ROI quality and the visual quality of the reconstructed image, supports arbitrarily shaped ROIs, and remains compatible with the JPEG2000 standard. The VLSI design adds only one clock cycle to the JPEG2000 system's running time and achieves very high throughput, satisfying real-time processing requirements.

9.
陈鹏  刘晨 《通讯世界》2017,(15):129-130
Considering the characteristics of power transmission lines, this paper analyzes the key technologies in existing image/video line-monitoring research and proposes a unified coding scheme, based on scalable video coding (SVC), for high-definition images and low-resolution video. It reduces the complexity of the monitoring system by producing both low-resolution video and high-definition images from a single system. The paper describes the system design and the algorithm, and analyzes the algorithm's performance. The analysis shows that, at the same bit rate, image PSNR improves by 2-3 dB over the conventional JPEG compression scheme.

10.
韩超  吴乐华 《通信技术》2010,43(1):131-133
JPEG is currently the most widely used still-image compression standard, so diversifying its functionality is of practical significance. Exploiting the fact that the DCT is simpler and faster than the wavelet transform, and drawing on the refinement-pass principle of the SPIHT algorithm, this paper implements a JPEG-compatible ROI coding algorithm. It adds no encoding time, enables progressive refinement of the ROI during transmission, and is far better than region-adaptive quantization coding in the stability and controllability of reconstructed image quality. At low bit rates, only a small amount of extra bitstream is needed to noticeably improve subjective visual quality.

11.
In this paper, we present a comprehensive approach for investigating JPEG compressed test images suspected of being tampered either by splicing or copy-move forgery (CMF). In JPEG compression, the image plane is divided into non-overlapping blocks of size 8 × 8 pixels. A unified approach based on block-processing of the JPEG image is proposed to identify whether the image is authentic or forged, and subsequently to localize the tampered region in forged images. In the initial step, a doubly stochastic model (DSM) of block-wise quantized discrete cosine transform (DCT) coefficients is exploited to segregate authentic and forged JPEG images from a standard dataset (CASIA). The scheme is capable of identifying both types of forged images, spliced as well as copy-moved. Once the presence of tampering is detected, the next step is to localize the forged region according to the type of forgery. In the case of spliced JPEG images, the tampered region is localized using block-wise correlation maps between dequantized DCT coefficients and their recompressed versions at different quality factors. The scheme is able to identify the spliced region in images tampered by pasting an uncompressed or JPEG image patch onto a JPEG image or vice versa, with all possible combinations of quality factors. Alternatively, in the case of copy-move forgery, the duplicated regions are identified using highly localized phase congruency features of each block. Experimental results are presented to consolidate the theoretical concept of the proposed technique, and its performance is compared with existing state-of-the-art methods.
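The block partitioning this scheme analyzes is simply JPEG's own tiling of the image plane into non-overlapping 8 × 8 blocks. A minimal sketch of that step (pure Python, image as a list of rows; the statistical modeling built on top of the blocks is not shown):

```python
def blocks_8x8(img):
    # Split an image (list of rows) into non-overlapping 8x8 blocks,
    # the unit JPEG transforms with the DCT and the unit analyzed here.
    # Assumes height and width are multiples of 8, as JPEG pads to ensure.
    h, w = len(img), len(img[0])
    return [[row[x:x + 8] for row in img[y:y + 8]]
            for y in range(0, h, 8) for x in range(0, w, 8)]

# A 16x16 test image yields four blocks in raster order.
img = [[(x + y) % 256 for x in range(16)] for y in range(16)]
```

Each block can then be DCT-transformed and its quantized coefficients fed to a per-block statistical model, as the abstract describes.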

12.
In this paper, we propose an algorithm for evaluating the quality of JPEG compressed images, called the psychovisually based image quality evaluator (PIQE), which measures the severity of artifacts produced by JPEG compression. The PIQE evaluates image quality using two psychovisually based fidelity criteria: blockiness and similarity. Blockiness is an index that measures the patterned square artifact created as a by-product of the lossy DCT-based compression technique used by JPEG and MPEG. Similarity measures the perceivable detail remaining after compression. Blockiness and similarity are combined into a single PIQE index used to assess quality. The PIQE model is tuned using subjective assessment results from five subjects evaluating six sets of images. To demonstrate the robustness of the model, a set of validation experiments is conducted by repeating the subjective assessment procedure with four new subjects evaluating five new image sets. The PIQE model is most accurate when the JPEG quantization factor is in the range for which JPEG compression is most effective.
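The abstract does not give the PIQE formulas; a generic blockiness measure in the same spirit averages the luminance jumps across the 8-pixel block-boundary columns (purely illustrative, not the paper's index):

```python
def blockiness(img):
    # Mean absolute luminance jump across vertical 8-pixel block boundaries.
    # A smooth image has small jumps there; a blocky decoded JPEG has
    # boundary discontinuities that raise the mean.
    h, w = len(img), len(img[0])
    jumps = [abs(img[y][x] - img[y][x - 1])
             for y in range(h) for x in range(8, w, 8)]
    return sum(jumps) / len(jumps)

smooth = [[x for x in range(16)] for _ in range(8)]   # gradient, no seams
blocky = [[0] * 8 + [32] * 8 for _ in range(8)]       # hard seam at x = 8
```

A full metric would also scan horizontal boundaries and normalize against in-block activity, but even this sketch separates a seamed image from a smooth one.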

13.
In this paper, we address issues concerning bilevel image compression using JPEG2000. While JPEG2000 is designed to compress both bilevel and continuous tone image data using a single unified framework, there exist significant limitations with respect to its use in the lossless compression of bilevel imagery. In particular, substantial degradation in image quality at low resolutions severely limits the resolution scalable features of the JPEG2000 code-stream. We examine these effects and present two efficient methods to improve resolution scalability for bilevel imagery in JPEG2000. By analyzing the sequence of rounding operations performed in the JPEG2000 lossless compression pathway, we introduce a simple pixel assignment scheme that improves image quality for commonly occurring types of bilevel imagery. Additionally, we develop a more general strategy based on the JPIP protocol, which enables efficient interactive access of compressed bilevel imagery. It may be noted that both proposed methods are fully compliant with Part 1 of the JPEG2000 standard.

14.
JPEG2000: The New-Generation Still-Image Coding System   Cited 7 times (0 self-citations, 7 by others)
JPEG2000 is the newly established international standard for still-image coding. It not only delivers better rate-distortion performance and subjective image quality than the original JPEG standard, but also surpasses the traditional JPEG standard and other coding methods in its support for progressive image transmission, region-of-interest coding, and error resilience. This paper analyzes the structure, features, and coding algorithm of the JPEG2000 system, and compares the performance of JPEG2000 with that of the original JPEG standard.

15.
Down-scaling for better transform compression   Cited 1 time (0 self-citations, 1 by others)
The most popular lossy image compression method used on the Internet is the JPEG standard. JPEG's good compression performance and low computational and memory complexity make it an attractive method for natural image compression. Nevertheless, as we go to low bit rates that imply lower quality, JPEG introduces disturbing artifacts. It is known that, at low bit rates, a down-sampled image, when JPEG compressed, visually beats the high resolution image compressed via JPEG to be represented by the same number of bits. Motivated by this idea, we show how down-sampling an image to a low resolution, then using JPEG at the lower resolution, and subsequently interpolating the result to the original resolution can improve the overall PSNR performance of the compression process. We give an analytical model and a numerical analysis of the down-sampling, compression and up-sampling process, that makes explicit the possible quality/compression trade-offs. We show that the image auto-correlation can provide a good estimate for establishing the down-sampling factor that achieves optimal performance. Given a specific budget of bits, we determine the down-sampling factor necessary to get the best possible recovered image in terms of PSNR.
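The resampling half of this trade-off can be sketched without a JPEG codec: measure, via PSNR, the loss introduced by 2× down- then up-sampling alone (2×2 averaging down, nearest-neighbor up, both chosen for brevity; the paper's pipeline additionally JPEG-compresses in the low-resolution domain and studies the optimal factor analytically):

```python
import math

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio between two equally sized grayscale images.
    mse = sum((x - y) ** 2
              for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    mse /= len(a) * len(a[0])
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def down2(img):
    # 2x downsampling by 2x2 block averaging.
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

def up2(img):
    # Nearest-neighbor upsampling back to the original grid.
    return [[img[y // 2][x // 2] for x in range(2 * len(img[0]))]
            for y in range(2 * len(img))]

img = [[(x * 7 + y * 13) % 256 for x in range(16)] for y in range(16)]
```

At low bit budgets the bits saved by coding the quarter-size image can outweigh this resampling loss, which is the paper's central observation.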

16.
Fast JPEG 2000 decoder and its use in medical imaging   Cited 3 times (0 self-citations, 3 by others)
Over the last decade, a picture archiving and communications system (PACS) has been accepted by an increasing number of clinical organizations. Today, PACS is considered as an essential image management and productivity enhancement tool. Image compression could further increase the attractiveness of PACS by reducing the time and cost in image transmission and storage as long as 1) image quality is not degraded and 2) compression and decompression can be done fast and inexpensively. Compared to JPEG, JPEG 2000 is a new image compression standard that has been designed to provide improved image quality at the expense of increased computation. Typically, the decompression time has a direct impact on the overall response time taken to display images after they are requested by the radiologist or referring clinician. We present a fast JPEG 2000 decoder running on a low-cost programmable processor. It can decode a losslessly compressed 2048×2048 CR image in 1.51 s. Using this kind of a decoder, performing JPEG 2000 decompression at the PACS display workstation right before images are displayed becomes viable. A response time of 2 s can be met with an effective transmission throughput between the central short-term archive and the workstation of 4.48 Mb/s in case of CT studies and 20.2 Mb/s for CR studies. We have found that JPEG 2000 decompression at the workstation is advantageous in that the desired response time can be obtained with slower communication channels compared to transmission of uncompressed images.

17.
JPEG2000 is another high-performance image compression standard released by ISO/ITU-T following the JPEG still-image compression standard. Compared with the JPEG compression algorithm, JPEG2000 achieves a higher compression ratio and better compressed image quality, and its characteristic multi-resolution representation makes it suitable for a wide range of applications. After a brief introduction to the JPEG2000 compression standard, this paper presents a JPEG2000 compression/decompression implementation based on the ADV202 chip. The implementation features small size, low power consumption, low cost, and simple debugging.

18.
Sometimes image processing units inherit images in raster bitmap format only, so that processing must be carried out without knowledge of past operations that may compromise image quality (e.g., compression). To carry out further processing, it is useful not only to know whether the image has been previously JPEG compressed, but also to learn what quantization table was used. This is the case, for example, if one wants to remove JPEG artifacts or perform JPEG re-compression. In this paper, a fast and efficient method is provided to determine whether an image has been previously JPEG compressed. After detecting a compression signature, we estimate the compression parameters. Specifically, we developed a method for the maximum likelihood estimation of JPEG quantization steps. The quantizer estimation method is very robust, so that only sporadically is an estimated quantizer step size off, and when so, it is by one value.
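The abstract does not spell out the maximum-likelihood estimator; the observation underneath it is that the dequantized DCT coefficients of a previously compressed image cluster at integer multiples of the quantization step. A naive sketch that recovers the step by scoring candidate divisors (not the paper's ML formulation; the margin constant is an illustrative tie-breaker):

```python
def estimate_qstep(coeffs, max_q=64):
    # For each candidate step q, score how far the coefficients sit from
    # the nearest multiple of q (0 means perfect clustering). Scan from
    # large q down and only accept a clearly better score, so that q is
    # preferred over its divisors (which fit multiples of q trivially).
    best_q, best_score = 1, float("inf")
    for q in range(max_q, 1, -1):
        score = sum(min(c % q, q - c % q) for c in coeffs) / len(coeffs)
        if score < best_score - 0.25:
            best_q, best_score = q, score
    return best_q

coeffs = [14, 28, 0, 42, 70, 14, 56]   # coefficients dequantized with step 14
```

A real estimator works per DCT frequency on histograms of many blocks and tolerates rounding noise, but the divisor-preference issue handled by the margin is the same one the ML formulation must resolve.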

19.
The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. It first segments the JPEG encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding as well as to three other decoding schemes demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.
