Similar Documents
1.
ROI-Based Fractal Image Compression Coding
吴红梅  陈继荣 《计算机仿真》2006,23(10):206-208
Fractal image compression has attracted growing attention because of its very high compression ratios, but at high ratios the blocking artifacts become very pronounced, which largely degrades the quality of the decoded image. To address this problem, drawing on the region-of-interest (ROI) concept introduced in JPEG2000, this paper proposes an image coding method that combines ROI coding with fractal image compression, so that the ROI of the reconstructed image has higher fidelity than the background. The method resolves the conflict between compression ratio and reconstructed image quality. Experimental results show that, while achieving a high compression ratio, the method greatly reduces encoding time and noticeably improves decoded image quality, and its overall coding performance is better than that of JPEG.

2.
This paper introduces the basic principles and coding methods of JPEG2000 region-of-interest (ROI) coding, compares the general scaling method with the MaxShift method, and proposes GPBShift, a new JPEG2000-based coding method for arbitrarily shaped ROIs that combines the advantages of the two standard ROI coding algorithms.
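The two baseline JPEG2000 mechanisms compared above are well documented: general scaling shifts the ROI wavelet coefficients up by a chosen number of bitplanes s and requires the decoder to know the ROI shape, while MaxShift picks s large enough that every nonzero ROI coefficient ends up above every background coefficient, so no shape information is needed. A minimal NumPy sketch of the two scalings, with illustrative array names (not taken from GPBShift itself):

```python
import numpy as np

def general_scaling(coeffs, roi_mask, s):
    """General scaling: shift ROI coefficient magnitudes up by s bitplanes.
    The decoder needs the ROI shape to undo the shift."""
    mag = np.abs(coeffs).astype(np.int64)
    mag[roi_mask] <<= s
    return np.sign(coeffs) * mag

def max_shift(coeffs, roi_mask):
    """MaxShift: choose s so every background magnitude stays below 2**s while
    every nonzero shifted ROI magnitude is at least 2**s; the decoder then
    recognises ROI coefficients purely by magnitude, so no mask is sent."""
    bg_max = int(np.abs(coeffs[~roi_mask]).max())
    s = bg_max.bit_length()            # smallest s with 2**s > bg_max
    return general_scaling(coeffs, roi_mask, s), s

# Toy example: 4x4 quantized coefficients with a 2x2 ROI in the top-left corner.
coeffs = np.array([[3, -5,  2, 1],
                   [7,  9, -4, 2],
                   [1,  2,  3, 0],
                   [0, -1,  2, 1]])
roi = np.zeros_like(coeffs, dtype=bool)
roi[:2, :2] = True
shifted, s = max_shift(coeffs, roi)
print("s =", s)
print(shifted)
```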

3.
A JPEG2000-Based Region-of-Interest Coding Technique for Medical Images
Region-of-interest (ROI) coding is a very important feature supported by the still-image compression standard JPEG2000. The standard provides two ROI coding methods, but each has its own shortcomings when applied to medical images. Combining the advantages of the two methods, this paper proposes a stepwise-shift approach that not only supports coding of multiple ROIs but also allows the compression quality of the ROI and the background to be adjusted freely. Experimental results show a clear improvement in the compression performance for medical images.
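The abstract does not spell out the stepwise-shift rule itself; the sketch below only illustrates the general idea it builds on, namely giving each ROI its own, freely chosen scaling value (smaller than the MaxShift value) so that ROI and background quality can be traded off. The per-ROI shift values and masks are assumptions and would have to be signalled to the decoder:

```python
import numpy as np

def multi_roi_scaling(coeffs, roi_masks, shifts):
    """Scale each ROI's coefficient magnitudes by its own number of bitplanes.
    Unlike MaxShift, the shift values are free parameters, so the relative
    quality of each ROI versus the background is tunable; the decoder must
    know the masks (or shifts) to invert the scaling."""
    mag = np.abs(coeffs).astype(np.int64)
    for mask, s in zip(roi_masks, shifts):
        mag[mask] <<= s
    return np.sign(coeffs) * mag
```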

4.
Region-of-interest (ROI) coding and progressive decoding are both important features of JPEG2000. Existing ROI coding algorithms either cannot control the relative quality of progressively decoded images or are computationally too expensive. Building on the bitplane-group shift method and the idea of segmentation, this paper proposes a bitplane-cluster shift method that arranges bitplanes more flexibly and keeps the relative quality of images decoded at different bit rates more stable. The method adds only a small data overhead, does not noticeably increase computational complexity, and is fairly simple to implement.

5.
This paper proposes an adaptive watermarking algorithm based on the JPEG2000 region of interest (ROI). Combining the characteristics of ROI coding with properties of the human visual system (HVS), the algorithm selects high-frequency coefficients within the quantized ROI for watermark embedding, so that the watermarked image keeps good visual quality and the embedded watermark remains fairly robust after JPEG2000 compression.
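The abstract does not give the embedding rule; purely as an illustration, the sketch below writes watermark bits into the least-significant bit of quantized high-frequency coefficients that lie inside the ROI and exceed a magnitude threshold. The threshold and the LSB rule are assumptions, not the paper's algorithm:

```python
import numpy as np

def embed_lsb_in_roi(quantized_hf, roi_mask, watermark_bits, threshold=8):
    """Hypothetical embedding: overwrite the LSB of quantized high-frequency
    coefficients that are inside the ROI and large enough to hide one bit
    without a visible change."""
    coeffs = quantized_hf.copy()
    flat = coeffs.ravel()
    candidates = np.flatnonzero(roi_mask.ravel() & (np.abs(flat) >= threshold))
    for idx, bit in zip(candidates, watermark_bits):   # bits are 0/1 integers
        flat[idx] = (flat[idx] & ~1) | bit
    return coeffs
```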

6.
An EBCOT-Based Region-of-Interest Image Coding Algorithm
Embedded block coding with optimized truncation (EBCOT) is the core of JPEG 2000, and its code-block-based rate-distortion optimization provides a good foundation for region-of-interest (ROI) coding. This paper analyzes the representative implicit ROI coding algorithm and proposes an improved method. By constructing a weighting function that assigns reasonable weights to ROI code blocks, the method ensures that ROI information is coded first while reducing the influence of the background wavelet coefficients contained in ROI code blocks, thereby improving the quality of the reconstructed ROI. Experimental results show that the algorithm clearly improves reconstructed ROI quality at low bit rates while still preserving the quality of the reconstructed background at high bit rates.
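The weighting function itself is not given in the abstract; the sketch below shows one plausible form of implicit ROI weighting in EBCOT rate allocation, scaling a code block's distortion reduction by a weight that grows with the fraction of ROI samples the block contains. The linear weight formula and w_max are assumptions:

```python
def weighted_rd_slope(delta_distortion, delta_rate, roi_fraction, w_max=16.0):
    """Hypothetical implicit-ROI weighting for EBCOT rate allocation: a code
    block's distortion decrease is multiplied by a weight between 1 (pure
    background block) and w_max (pure ROI block), so truncation points of ROI
    blocks get steeper rate-distortion slopes and survive truncation first."""
    weight = 1.0 + (w_max - 1.0) * roi_fraction   # roi_fraction in [0, 1]
    return weight * delta_distortion / delta_rate
```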

7.
Region-of-interest (ROI) image coding is a hot research topic in digital image compression. This paper proposes an improved ROI coding method based on the JPEG2000 standard that combines rate-distortion slope optimization with coefficient shifting, exploiting the strengths of both approaches; in the rate-distortion slope optimization, the proportion of the ROI is fully taken into account and used to weight the rate-distortion function. The method improves the signal-to-noise ratio of the coded image, supports multiple ROIs, and places no restriction on ROI shape. Experiments show that it not only improves the SNR of the ROI at low bit rates but also improves the SNR of the whole image at high bit rates.

8.
Region-of-interest coding based on the lifting wavelet transform is a very important feature supported by the still-image compression standard JPEG2000. This paper proposes applying the 5/3 lifting wavelet transform to the lesion (ROI) region of a medical image and the 9/7 lifting wavelet transform to the background (non-ROI) region. Compared with conventional JPEG2000 compression of medical images with a single lifting wavelet transform, the proposed method achieves higher compression performance.
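As a rough sketch of the idea only (not the paper's implementation), the code below decomposes the ROI and the background separately with PyWavelets, using 'bior2.2' and 'bior4.4' as common stand-ins for the JPEG2000 LeGall 5/3 and CDF 9/7 lifting kernels; how the two decompositions would be merged into a single codestream is left out:

```python
import numpy as np
import pywt

def split_region_transform(image, roi_mask, levels=3):
    """Transform the lesion region with a 5/3-like wavelet (suited to
    near-lossless coding) and the background with a 9/7-like wavelet (better
    energy compaction for lossy coding). A real codec would use the integer
    lifting implementations defined by JPEG2000."""
    roi_part = np.where(roi_mask, image, 0).astype(float)
    bg_part = np.where(roi_mask, 0, image).astype(float)
    roi_coeffs = pywt.wavedec2(roi_part, 'bior2.2', level=levels)
    bg_coeffs = pywt.wavedec2(bg_part, 'bior4.4', level=levels)
    return roi_coeffs, bg_coeffs
```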

10.
Region-of-interest (ROI) coding has long been one of the hot topics in image compression. In the context of the new-generation still-image compression standard JPEG2000, this paper introduces the two classic ROI compression algorithms in JPEG2000, the MaxShift method and the general scaling method, analyzes their principles, advantages and disadvantages, and then proposes a new JPEG2000 ROI coding method. Exploiting the "distortion scalability" of JPEG2000, the method discards packets at the quality-layer level according to target qualities specified separately for the ROI and the background; because it involves none of the coefficient shifting used by classic ROI compression methods, its compression efficiency is slightly better and its compression time is shorter than those of the classic ROI methods. Experimental results show that, at the same compression bit rate, both the compression quality and the compression time are better than those of the classic methods in JPEG2000.
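The abstract only states that packets are discarded per quality layer according to separate target qualities for the ROI and the background; the toy sketch below illustrates that idea with a hypothetical packet record (the Packet structure and the layer thresholds are illustrative, not JPEG2000 codestream syntax):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    layer: int        # quality-layer index, 0 = most significant layer
    in_roi: bool      # whether the packet's precinct overlaps the ROI
    data: bytes

def drop_packets(packets, roi_layers, bg_layers):
    """Keep the first roi_layers quality layers for ROI packets and only the
    first bg_layers for background packets; everything else is dropped, so the
    ROI is reconstructed at higher quality without any coefficient shifting."""
    kept = []
    for p in packets:
        limit = roi_layers if p.in_roi else bg_layers
        if p.layer < limit:
            kept.append(p)
    return kept
```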

11.
For image quality assessment, a wavelet transform decomposes a natural image into ten subbands, and two parameters and two logistic functions are set for each subband. Two perception-based models are defined by relating the objective measures to the results of subjective evaluation tests, which were carried out on a database of 344 JPEG and JPEG2000 images. The linear prediction model reflects the true subjective scores of the images, while the correlation prediction model can be used to distinguish the relative quality between images. In both models, the relationship between each objective parameter and subjective perception is approximated by two logistic functions, giving a best impairment estimate for each parameter. The final image quality measure is obtained by combining the individual impairment estimates. Experiments show that both models perform well and correlate strongly with subjectively perceived image quality.
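The exact form of the logistic functions is not given in the abstract; the sketch below shows a generic way to fit a logistic mapping from one objective subband parameter to subjective scores with SciPy. The four-parameter logistic form, the initial guess, and the variable names are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, c, d):
    """Four-parameter logistic: a bounded, monotonic mapping from an objective
    parameter to a predicted subjective impairment or score."""
    return a + (b - a) / (1.0 + np.exp(-(x - c) / d))

def fit_parameter(objective_values, subjective_scores):
    """Fit the logistic mapping for one subband parameter against subjective
    scores (e.g. mean opinion scores collected on the 344-image test set)."""
    p0 = [subjective_scores.min(), subjective_scores.max(),
          np.median(objective_values), objective_values.std() or 1.0]
    params, _ = curve_fit(logistic, objective_values, subjective_scores,
                          p0=p0, maxfev=10000)
    return params
```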

12.
Effective compound image compression algorithms require compound images to be first segmented into regions such as text, pictures and background, to minimize the loss of visual quality of text during compression. In this paper, a new compound image segmentation algorithm based on the multilayer Mixed Raster Content (MRC) model (foreground/mask/background) is proposed. This algorithm first segments a compound image into different classes. Then each class is transformed to the three-layer MRC model differently according to the properties of that class. Finally, the foreground and background layers are compressed using JPEG 2000, and the mask layer is compressed using JBIG2. The proposed morphology-based segmentation algorithm designs a binary segmentation mask that accurately partitions a compound image into different layers, such as the background layer and the foreground layer. Experimental results show that it is more robust with respect to the font size, style, colour, orientation, and alignment of text on an uneven background. At similar bit rates, our MRC compression with the morphology-based segmentation achieves much higher subjective quality and coding efficiency than state-of-the-art compression algorithms such as JPEG, JPEG 2000 and H.264/AVC-I.
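The segmentation rules themselves are not spelled out in the abstract; as a rough illustration of the three-layer MRC split it describes, the sketch below builds a binary mask with simple morphological operations and separates a grayscale page into foreground, mask, and background layers. The threshold, structuring-element size, and fill value are assumptions:

```python
import numpy as np
from scipy import ndimage

def mrc_layers(gray, text_threshold=96, close_size=3):
    """Toy MRC decomposition: dark, morphologically cleaned pixels form the
    mask (assumed text); the foreground layer keeps their values and the
    background layer has the text holes filled. A real codec would then send
    the mask to JBIG2 and the two continuous-tone layers to JPEG 2000."""
    mask = gray < text_threshold                       # crude text detector
    mask = ndimage.binary_closing(mask, structure=np.ones((close_size, close_size)))
    foreground = np.where(mask, gray, 0)               # text pixels only
    fill = np.median(gray[~mask]).astype(gray.dtype)   # smooth value for text holes
    background = np.where(mask, fill, gray)
    return foreground, mask.astype(np.uint8), background
```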

13.
High-dynamic-range still-image encoding in JPEG 2000
The raw size of a high-dynamic-range (HDR) image brings about problems in storage and transmission. Many bytes are wasted in data redundancy and perceptually unimportant information. To address this problem, researchers have proposed some preliminary algorithms to compress the data, such as RGBE/XYZE, OpenEXR, and LogLuv. HDR images can have a dynamic range of more than four orders of magnitude, while conventional 8-bit images retain only two orders of magnitude of dynamic range. This distinction between an HDR image and a conventional image leads to difficulties in using most existing image compressors. JPEG 2000 supports up to 16-bit integer data, so it can already provide image compression for most HDR images. In this article, we propose a JPEG 2000-based lossy image compression scheme for HDR images of all dynamic ranges. We show how to fit HDR encoding into a JPEG 2000 encoder to meet the HDR encoding requirement. To achieve the goal of minimum error in the logarithm domain, we map the logarithm of each pixel value to integer values and then send the results to a JPEG 2000 encoder. Our approach is basically a wavelet-based HDR still-image encoding method.
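The core step the abstract describes, mapping the logarithm of each pixel value to an integer range that JPEG 2000's 16-bit path can carry, can be sketched as follows. The natural-log base, the per-image min/max normalization, and the function names are assumptions; the inverse map is applied after JPEG 2000 decoding:

```python
import numpy as np

def hdr_to_uint16(luminance, eps=1e-6):
    """Map HDR luminance onto 16-bit integers uniformly in the log domain, so
    that JPEG 2000 quantization error is roughly uniform in log luminance,
    i.e. approximately a constant relative error."""
    log_l = np.log(np.maximum(luminance, eps))
    lo, hi = float(log_l.min()), float(log_l.max())
    codes = np.round((log_l - lo) / (hi - lo) * 65535.0).astype(np.uint16)
    return codes, (lo, hi)          # (lo, hi) must be stored to invert the map

def uint16_to_hdr(codes, log_range):
    lo, hi = log_range
    return np.exp(codes.astype(np.float64) / 65535.0 * (hi - lo) + lo)
```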

14.
A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.

15.
JPEG2000 Steganalysis Based on the Hilbert-Huang Transform
This paper achieves reliable detection of the JPEG2000 lazy-mode steganography proposed by Su et al. Based on theoretical and experimental analysis, it shows that for stego images produced by lazy-mode steganography, the oscillation characteristics of the subband code-block noise-variance sequence differ from those of non-stego noisy images. The key of the proposed steganalysis algorithm is therefore to analyze these two kinds of subband code-block noise-variance sequences and extract their intrinsic differences in oscillation behavior. In the sequence analysis, the Hilbert-Huang transform is introduced to perform empirical mode decomposition on the noise-variance sequences, and a feature vector based on the Hilbert spectrum is constructed. Experiments show that a support vector machine (SVM) classifier based on this feature vector identifies stego images with an average accuracy of 90.6%. To the best of the authors' knowledge, no successful steganalysis of JPEG2000 lazy-mode steganography had been reported before, which makes this work significant.
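The full feature construction (empirical mode decomposition followed by Hilbert-spectrum features) is more involved than the abstract can convey; the sketch below only shows the two ends of such a pipeline, computing a code-block noise-variance sequence and training an SVM on oscillation features extracted from it. The block size and the simple first-difference features here are placeholders, not the paper's Hilbert-spectrum features:

```python
import numpy as np
from sklearn.svm import SVC

def codeblock_variance_sequence(subband, block=32):
    """Variance of each code-block-sized tile of a wavelet subband, scanned in
    raster order; the stego versus cover difference is claimed to show up in
    how this sequence oscillates."""
    h, w = subband.shape
    return np.array([subband[i:i + block, j:j + block].var()
                     for i in range(0, h - block + 1, block)
                     for j in range(0, w - block + 1, block)])

def toy_oscillation_features(var_seq):
    """Placeholder oscillation features (first-difference statistics), standing
    in for the Hilbert-spectrum features built from EMD in the paper."""
    d = np.diff(var_seq)
    sign_changes = (np.sign(d[:-1]) != np.sign(d[1:])).mean()
    return np.array([d.std(), np.abs(d).mean(), sign_changes])

# classifier = SVC(kernel='rbf').fit(feature_matrix, labels)  # stego vs. cover
```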

16.
In this paper, some fast feature extraction algorithms are addressed for joint retrieval of images compressed in JPEG and JPEG2000 formats. In order to avoid full decoding, three fast algorithms that convert block-based discrete cosine transform (BDCT) into wavelet transform are developed, so that wavelet-based features can be extracted from JPEG images as in JPEG2000 images. The first algorithm exploits the similarity between the BDCT and the wavelet packet transform. For the second and third algorithms, the first algorithm or an existing algorithm known as multiresolution reordering is first applied to obtain bandpass subbands at fine scales and the lowpass subband. Then for the subbands at the coarse scale, a new filter bank structure is developed to reduce the mismatch in low frequency features. Compared with the extraction based on full decoding, there is more than 72% reduction in computational complexity. Retrieval experiments also show that the three proposed algorithms can achieve higher precision and recall than the multiresolution reordering, especially around the typical range of compression ratio.

17.
To remain compatible with the JPEG2000 image compression standard, this paper proposes two methods for hiding information directly in the JPEG2000 compressed domain, implemented within the JPEG2000 encoding process and decoding process, respectively. They support both lossless and lossy compression and therefore have a wide range of applicability. Experimental results show that both implementations offer good imperceptibility and good compression performance.

18.
In this paper, we propose embedding a prediction mechanism into part of the coding structure of the JPEG2000 image compression standard, in order to reduce the number of bits sent to the arithmetic coder, without any significant changes to the standard architecture and without losing performance. The prediction is based on an innovative processing of the data structures used by standard JPEG2000 in progressive coding and the addition of a Prediction Matrix, whose computation does not add any overhead at the decoder side. Experiments are performed to test the efficacy of the prediction mechanism, and the results are compared to standard JPEG2000 and other similar approaches. Tests are documented over a set of well-known images from the literature, also against different kinds of added noise. Performance, in terms of saved bits, is reported, and a new figure of merit is defined to assess the efficiency of the prediction. The results show that the new proposal outperforms the standard and other related approaches over the entire set of reference images, with significant gains on synthetic images, also in the presence of noise.
