Similar Documents
Found 19 similar documents (search time: 250 ms)
1.
Architecture Design of an All-Pass-Parallel EBCOT Tier-1 Encoder for JPEG2000 (Cited by 3; self 0, others 3)
The new-generation still-image compression standard JPEG2000 adopts the EBCOT algorithm. During context formation, the Tier-1 part of this algorithm must scan each bit-plane in multiple coding passes, which is inefficient and makes it difficult to meet the demands of real-time compression of high-quality images. Various improvements have been proposed, mostly based on PS/GOCS or on multi-window pass-parallel scanning. This paper presents a single-window, all-pass-parallel coding architecture suited to hardware implementation, which has been verified on an FPGA. Experiments show that the Tier-1 coding speed of this architecture clearly exceeds that of several existing optimization schemes. In addition, the coding logic adopted in this design can also be reused during decoding, which facilitates a combined encoder/decoder design.
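The bottleneck described here comes from EBCOT Tier-1 scanning each bit-plane once per coding pass. As a rough illustration of what a pass-parallel design collapses into one scan, the sketch below (a software model, not the paper's hardware architecture) classifies every coefficient of one bit-plane into its coding pass in a single traversal:

```python
import numpy as np

# EBCOT Tier-1 pass membership: each coefficient of a bit-plane belongs to
# exactly one of three passes.
#   1 = significance propagation (not yet significant, but has a
#       significant 8-neighbour)
#   2 = magnitude refinement (already significant from higher planes)
#   3 = cleanup (everything else)
# A serial coder scans the bit-plane once per pass; a pass-parallel coder
# makes all three decisions in a single scan, as modelled here.

def classify_passes(significant):
    h, w = significant.shape
    passes = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            if significant[y, x]:
                passes[y, x] = 2
            else:
                nb = significant[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                passes[y, x] = 1 if nb.any() else 3
    return passes
```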

2.
An Image Compression Method Based on a Context Model of Wavelet Coefficients (Cited by 5; self 3, others 2)
This paper proposes a novel image compression method based on a context model of wavelet coefficients. The method forms contexts by quantizing a linear prediction of the current coefficient and then performs adaptive arithmetic coding; it also exploits the multiresolution property of the wavelet transform to compress the image in a resolution-progressive manner, providing resolution scalability. Experimental results show that the lossless compression ratio achieved by this method is higher than those of SPIHT and of the EBCOT algorithm used in JPEG2000, its compression ratio at each resolution is also higher than EBCOT's, and its compression time is lower than EBCOT's.
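A minimal sketch of the context-formation idea described above, assuming illustrative prediction weights and quantizer thresholds (the paper's actual values are not given in the abstract):

```python
import numpy as np

# Form an arithmetic-coding context by quantizing a linear prediction of
# the current wavelet coefficient from its causal neighbours. The weights
# and thresholds below are assumptions for illustration only.

THRESHOLDS = [1, 2, 4, 8, 16]   # assumed quantizer boundaries

def context_of(coeffs, y, x):
    w = coeffs[y, x - 1] if x > 0 else 0.0            # west neighbour
    n = coeffs[y - 1, x] if y > 0 else 0.0            # north neighbour
    nw = coeffs[y - 1, x - 1] if y > 0 and x > 0 else 0.0
    pred = 0.45 * abs(w) + 0.45 * abs(n) + 0.10 * abs(nw)
    return int(np.searchsorted(THRESHOLDS, pred))     # context index 0..5

# Each context index then selects its own adaptive probability model in
# the arithmetic coder.
```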

3.
Dual-Context-Window Parallel Coding for EBCOT and Its FPGA Implementation (Cited by 1; self 0, others 1)
In a JPEG2000 coding system, the coding speed of EBCOT has become the bottleneck for the coding efficiency of the whole system. Based on a study of the EBCOT coding principle and the coding flow of the pass-parallel algorithm, this paper proposes a dual-context-window, bit-parallel method for EBCOT coefficient bit modeling, and describes in detail the hardware architecture of a coefficient bit-modeling system that uses this algorithm. The system effectively reduces the number of coding clock cycles and has been functionally verified on an FPGA.

4.
This paper analyzes the algorithms of the forthcoming JPEG2000 standard and adaptive context-based predictive coding, and proposes an error-resilient lossless SAR image compression algorithm that combines lossy wavelet coding with adaptive context-based predictive lossless coding of the residual. The algorithm retains the progressive transmission of wavelet image coding and a degree of robustness to channel errors, while its compression ratio exceeds both the lossless mode of the forthcoming international standard JPEG2000 and the international standard algorithm JPEG-LS.
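The two-layer structure described above can be sketched as follows; all component coders are hypothetical placeholder arguments, since the abstract does not specify them:

```python
import numpy as np

# Two-layer lossless compression: a lossy wavelet codec provides a robust,
# progressive base layer, and the integer residual against the base
# reconstruction is compressed losslessly with a context-based predictor.
# lossy_encode, lossy_decode and entropy_code_residual are placeholders.

def compress(image, lossy_encode, lossy_decode, entropy_code_residual):
    base_stream = lossy_encode(image)               # robust, progressive layer
    recon = lossy_decode(base_stream)               # what the decoder will see
    residual = image.astype(np.int32) - recon.astype(np.int32)
    residual_stream = entropy_code_residual(residual)  # context-based coding
    return base_stream, residual_stream             # together: exact lossless
```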

5.
This paper proposes a rate-distortion optimization algorithm that makes truncation points feasible: following a steepest-descent criterion on the rate-distortion slope, it reorders the truncation points associated with the fractional coding passes. Compared with JPEG2000, the new algorithm eliminates more than 50% of the infeasible truncation points and yields a finer-grained embedded bitstream; at the same compression ratio, the PSNR of the reconstructed image is 0.1 dB to 0.2 dB higher than JPEG2000's, and the bit-plane coding time is lower than JPEG2000's.
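For reference, a truncation point is "feasible" in the JPEG2000 (PCRD) sense when it lies on the convex hull of a code-block's rate-distortion curve, i.e. when the R-D slopes are strictly decreasing. A generic sketch of that feasibility test (not the paper's reordering algorithm itself):

```python
# rates[i] / dists[i]: cumulative rate and distortion after candidate
# truncation point i; rates strictly increase, dists decrease. A point is
# feasible only if it sits on the lower convex hull of the (R, D) curve.

def feasible_points(rates, dists):
    hull = [0]
    for i in range(1, len(rates)):
        while len(hull) >= 2:
            j, k = hull[-1], hull[-2]
            slope_kj = (dists[k] - dists[j]) / (rates[j] - rates[k])
            slope_ji = (dists[j] - dists[i]) / (rates[i] - rates[j])
            if slope_ji >= slope_kj:   # slope not decreasing: j infeasible
                hull.pop()
            else:
                break
        hull.append(i)
    return hull   # indices of feasible truncation points
```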

6.
This paper proposes a significance coding algorithm based on a hierarchical model: without changing the JPEG2000 context model, it performs hierarchical neighborhood coding following the spatial clusters formed by significant coefficients. Experimental results show that the bitstream output by the new algorithm has better autocorrelation than the bitstreams produced by JPEG2000 and by Hilbert-curve scanning; its average bit-rate improves on JPEG2000's stripe scan and on Hilbert-curve scanning by 1.06% and 0.57% respectively, and improves on the average rate obtained by JPEG2000's context-quantization optimization algorithm by 1.08%.

7.
章春娥  裘正定 《信号处理》2004,20(4):331-335
This paper proposes a method that unifies digital watermarking with the JPEG2000 compression-coding process. JPEG2000, the new-generation still-image compression standard, is built on the EBCOT (embedded block coding with optimized truncation) algorithm. Following the formation of the JPEG2000 compressed codestream, the watermark is embedded at the bit-plane level and detected at the image decoder as required, unifying the watermarking technique with the coding algorithm. Compared with traditional methods, this saves the computational cost of re-compressing the watermarked image and makes watermark detection more efficient. It also fully exploits the strengths of JPEG2000 coding: the watermark can still be detected effectively under progressive lossy transmission or at high compression ratios, giving good robustness.

8.
JPEG2000 is the new-generation still-image compression standard designated by the International Organization for Standardization (ISO). As an upgrade of JPEG it is backward compatible, achieves higher compression ratios than JPEG, and adds several new features. This article mainly introduces the core algorithm of JPEG2000, EBCOT (embedded block coding with optimized truncation), and uses it to explain the new features of the JPEG2000 standard and the advantages it shows over existing compression standards.

9.
曾勇 《电子科技》2011,24(7):122-125
This paper proposes a new rate-control algorithm for JPEG2000 compression that substantially improves the coding efficiency of the JPEG2000 standard. Building on a progressive in-process truncation algorithm combined with layer-by-layer bit-plane truncation, it reduces both redundant coding work and algorithmic complexity; extensive tests show that its PSNR is only 0.05 dB to 0.1 dB lower than that of the standard JPEG2000 compression algorithm.

10.
A Study of the JPEG2000 Standard and a Comparison with JPEG (Cited by 1; self 0, others 1)
陈鑫 《信息技术》2015,(4):137-140,144
With the rapid development of multimedia and network technology, the JPEG standard can no longer meet users' requirements, which led to the new-generation still-image compression technology, JPEG2000. A JPEG2000 coding system consists of preprocessing, the discrete wavelet transform (DWT), quantization, and entropy coding; its use of the DWT and the embedded block coding with optimized truncation (EBCOT) algorithm gives it advantages that JPEG cannot match. This article introduces the components and advantages of the JPEG2000 coding system and compares it with JPEG. Experiments show that at the same compression ratio, JPEG2000 yields better image quality than JPEG.

11.
In this paper, a new binary arithmetic coding strategy with adaptive-weight context classification is introduced to solve the context dilution and context quantization problems in bit-plane coding. In our method, a weight obtained with a regressive-prediction algorithm represents the degree of importance of the current coefficient/block in the wavelet transform domain. Treating the weights as contexts, the coder reduces the number of contexts by classifying the weights with the Lloyd-Max algorithm, so that high-order context arithmetic coding is approximated by low-order coding. The experimental results show that our method effectively improves arithmetic coding performance and outperforms SPECK, SPIHT, and JPEG2000 in compression performance.
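A minimal sketch of the weight-classification step as described, using a one-dimensional Lloyd-Max (k-means style) quantizer; the weight computation itself is not reproduced, and the quantile initialization is an assumption:

```python
import numpy as np

# Classify per-coefficient weights into a small number of conditioning
# contexts via 1-D Lloyd-Max quantization, so many distinct weight values
# collapse into a few arithmetic-coding contexts.

def lloyd_max(weights, n_contexts, iters=50):
    w = np.asarray(weights, dtype=float)
    # initialize reconstruction levels at interior quantiles (assumption)
    centers = np.quantile(w, np.linspace(0, 1, n_contexts + 2)[1:-1])
    for _ in range(iters):
        labels = np.argmin(np.abs(w[:, None] - centers[None, :]), axis=1)
        for c in range(n_contexts):
            if np.any(labels == c):
                centers[c] = w[labels == c].mean()   # centroid update
    return centers, labels   # labels serve as the context indices
```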

12.
In this paper, multiwavelets are considered in the context of image compression based on the human visual system (HVS). First, selecting the BSA (4/4)* filters, a two-dimensional image is transformed with our proposed algorithm I. Second, we apply HVS coefficients to the subbands of the transformed image. Third, we split the coefficients into two parts, the significance map and the residue map, and propose a new modified set partitioning in hierarchical trees (SPIHT) algorithm to encode the significance map. Fourth, algorithm III is presented for coding the residue map. Finally, we adopt context-based adaptive arithmetic coding to encode the bit stream. We also provide experimental results showing that multiwavelets are worth studying, and compare our results with those of other multiwavelet algorithms and of JPEG2000.

13.
To maximize rate-distortion performance while remaining faithful to the JPEG syntax, the joint optimization of the Huffman tables, quantization step sizes, and DCT indices of a JPEG encoder is investigated. Given Huffman tables and quantization step sizes, an efficient graph-based algorithm is first proposed to find the optimal DCT indices in the form of run-size pairs. Based on this graph-based algorithm, an iterative algorithm is then presented to jointly optimize run-length coding, Huffman coding, and quantization table selection. The proposed iterative algorithm not only produces a compressed bitstream completely compatible with existing JPEG and MPEG decoders, but is also computationally efficient. Furthermore, when tested on standard test images, it achieves the best JPEG compression results, to the extent that its JPEG compression performance even exceeds the quoted PSNR results of some state-of-the-art wavelet-based image coders, such as Shapiro's embedded zerotree wavelet algorithm, at the bit rates under comparison. Both the graph-based algorithm and the iterative algorithm can be applied to areas such as web image acceleration, digital camera image compression, MPEG frame optimization, and transcoding.
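A structure-only sketch of the alternating optimization loop this abstract describes; every component function is a placeholder (the graph-based search and table-update steps are not reproduced here):

```python
# Alternate three steps until the Lagrangian cost J = D + lambda * R stops
# improving: (1) find optimal run-size pairs via the graph-based search
# given fixed tables, (2) rebuild Huffman tables from the new symbol
# statistics, (3) update the quantization step sizes.

def jpeg_joint_optimize(blocks, lam, graph_search, build_huffman,
                        update_qtable, cost, max_iters=20, tol=1e-6):
    qtable = [[16] * 8 for _ in range(8)]   # start from any legal table
    huffman = None
    prev_j = float("inf")
    for _ in range(max_iters):
        runsize = graph_search(blocks, qtable, huffman, lam)  # step (1)
        huffman = build_huffman(runsize)                      # step (2)
        qtable = update_qtable(blocks, runsize, lam)          # step (3)
        j = cost(blocks, runsize, huffman, qtable, lam)       # J = D + lam*R
        if prev_j - j < tol:
            break
        prev_j = j
    return runsize, huffman, qtable
```

Because each step can only lower J, the loop converges monotonically while the output stays a syntactically legal JPEG stream.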

14.
In image compression, context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates, which model high-order statistical dependencies among pixels, and the context dilution that results from insufficient sample statistics for a given input image. We consider the problem of finding the optimal quantizer Q that quantizes the K-dimensional causal context C_t = (X_{t-t_1}, X_{t-t_2}, ..., X_{t-t_K}) of a source symbol X_t into one of a set of conditioning states. The optimality of context quantization is defined as the minimum static or minimum adaptive code length for a given data set. For a binary source alphabet, an optimal context quantizer can be computed exactly by a fast dynamic programming algorithm; faster approximate solutions are also proposed. In the case of an m-ary source alphabet, a random variable can be decomposed into a sequence of binary decisions, each of which is coded using an optimal context quantizer designed for the corresponding binary random variable. This optimized coding scheme is applied to digital maps and alpha-plane sequences. The proposed optimal context quantization technique can also be used to establish a lower bound on the achievable code length, and hence is a useful tool for evaluating existing heuristic context quantizers.
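A sketch of the dynamic-programming idea for the binary case, under the standard observation that raw contexts sorted by their conditional probability of 0 can be optimally partitioned into contiguous groups; the adaptive code length below uses a KT-style sequential estimator as an illustrative cost:

```python
import math

def adaptive_len(n0, n1):
    # KT-style adaptive code length (bits) for n0 zeros and n1 ones;
    # order-invariant, so zeros may be coded before ones.
    bits, a, b = 0.0, 0, 0
    for _ in range(n0):
        bits -= math.log2((a + 0.5) / (a + b + 1)); a += 1
    for _ in range(n1):
        bits -= math.log2((b + 0.5) / (a + b + 1)); b += 1
    return bits

def optimal_context_quantizer(counts, K):
    # counts: one (n0, n1) pair per raw context, each seen at least once.
    counts = sorted(counts, key=lambda c: c[0] / (c[0] + c[1]))
    N = len(counts)
    pre0 = [0] * (N + 1); pre1 = [0] * (N + 1)
    for i, (a, b) in enumerate(counts):
        pre0[i + 1] = pre0[i] + a; pre1[i + 1] = pre1[i] + b
    def seg(i, j):   # code length of merging sorted contexts i..j-1
        return adaptive_len(pre0[j] - pre0[i], pre1[j] - pre1[i])
    INF = float("inf")
    dp = [[INF] * (N + 1) for _ in range(K + 1)]
    dp[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(1, N + 1):
            dp[k][j] = min(dp[k - 1][i] + seg(i, j) for i in range(j))
    return dp[K][N]   # minimum total adaptive code length with K states
```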

15.
To improve image compression ratio and quality, this paper proposes a method for constructing an adaptive quantization table that combines the contrast-sensitivity characteristics of human vision with the spectral characteristics of the image in the transform domain. The table replaces the quantization table in JPEG, and compression experiments following the JPEG coding algorithm were run on three different color images and compared against standard JPEG compression. The results show that at the same compression ratio, the SSIM and PSNR of the three decompressed color images improve on average by 1.67% and 4.96% respectively after adaptive quantization, indicating that the proposed adaptive quantization based on human visual characteristics is a sound and practical quantization method.
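As an illustration of a vision-weighted quantization table (the paper's exact construction is not given in the abstract), the sketch below scales a base JPEG table by a Mannos-Sakrison-style contrast sensitivity function; all constants are assumptions:

```python
import numpy as np

def csf(f):
    # Mannos-Sakrison-style contrast sensitivity at radial frequency f
    # (cycles/degree); used here purely as an illustrative model.
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def adaptive_qtable(base, cycles_per_degree=32.0):
    # Coarser steps where the eye is less sensitive, finer where it is more.
    q = np.asarray(base, dtype=float)
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    f = np.sqrt(u ** 2 + v ** 2) * cycles_per_degree / 16.0
    s = csf(f)
    s = s / s.max()          # normalized sensitivity in (0, 1]
    s[0, 0] = 1.0            # keep the DC step at its base value
    table = np.round(q / np.maximum(s, 0.05))
    return np.clip(table, 1, 255).astype(int)
```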

16.
For the Joint Photographic Experts Group (JPEG) standard, this paper designs an image compression coding framework based on adaptive down-sampling and super-resolution reconstruction. At the encoder, several different down-sampling modes and quantization modes are defined for the original image to be coded, and a rate-distortion optimization algorithm selects the best down-sampling mode (DSM) and quantization mode (QM) from among them; the image is then down-sampled and JPEG-coded under the selected modes. At the decoder, a super-resolution reconstruction algorithm based on a convolutional neural network reconstructs the decoded down-sampled image. Moreover, the proposed framework remains effective and feasible when extended to the JPEG2000 compression standard. Simulation results show that, compared with mainstream coding standards and state-of-the-art methods, the proposed framework effectively improves the rate-distortion performance of coded images and yields better visual quality.
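A structure-only sketch of the encoder-side mode selection: every (down-sampling mode, quantization mode) pair is tried, the decoder side (JPEG decoding plus CNN super-resolution) is simulated, and the pair with the lowest Lagrangian cost is kept. All component functions are placeholders:

```python
# Exhaustive rate-distortion mode selection: J = D + lambda * R, where D is
# the distortion of the simulated decoder output and R the stream length.

def select_modes(image, ds_modes, q_modes, lam,
                 downsample, jpeg_encode, jpeg_decode, sr_reconstruct, mse):
    best = None
    for dsm in ds_modes:
        small = downsample(image, dsm)
        for qm in q_modes:
            stream = jpeg_encode(small, qm)
            recon = sr_reconstruct(jpeg_decode(stream), dsm)  # decoder side
            j = mse(image, recon) + lam * len(stream)         # R-D cost
            if best is None or j < best[0]:
                best = (j, dsm, qm, stream)
    return best[1], best[2], best[3]   # chosen DSM, QM, coded stream
```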

17.
Wireless sensor networks use image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. Under compression, discrete cosine transform (DCT)-based JPEG exhibits blocking artifacts at low bit-rates. DWT-based JPEG2000 and the SPIHT algorithm reduce this effect, but they possess high computational complexity. This paper proposes an efficient image coding algorithm based on the lapped biorthogonal transform (LBT) and the low-complexity zerotree codec (LZC) entropy coder to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT, and wavelet transforms. The simulation results reveal that the proposed algorithm reduces blocking artifacts and achieves high compression; its noise resilience is also analyzed.

18.
A Semi-Fragile Digital Watermarking Algorithm for JPEG2000 Image Authentication (Cited by 23; self 0, others 23)
张静  张春田 《电子学报》2004,32(1):157-160
This paper presents a semi-fragile digital watermarking algorithm suited to JPEG2000 image authentication. The algorithm is integrated with the JPEG2000 encoder and decoder: it generates and embeds the watermark based on parameters that remain invariant during JPEG2000 compression, and uses properties of the wavelet transform to localize tampered regions of the image. Experiments show that the resulting watermark not only has good visual transparency but also indicates tampered regions well.

19.
Fine Granularity Scalable Video Coding Based on H.26L (Cited by 5; self 0, others 0)
This paper proposes a fine granularity scalability (FGS) video coding scheme based on H.26L, called EFGS-H.26L. Building on the FGS of MPEG-4, the scheme constructs a new scalable structure (EFGS, enhanced fine granularity scalability) in which the base layer is coded with H.26L and the enhancement layer uses JPEG2000-style context-based bit-plane coding. Thanks to H.26L's excellent coding performance, base-layer coding efficiency improves considerably. To improve enhancement-layer efficiency, the residual image is first rearranged into subband order, so that the correlation among subband coefficients can be exploited to remove redundancy; since the EBCOT algorithm of the JPEG2000 standard has proven to be a highly efficient bit-plane coding method, the rearranged DCT coefficients are then coded with a JPEG2000-like context-based bit-plane coder. Experimental results show that at high bit-rates, the proposed fine granularity scalable coding scheme improves coding efficiency by about 3.0 dB over the FGS of MPEG-4.
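The subband rearrangement step mentioned above is, in generic form, a regrouping of like-frequency DCT coefficients across blocks; a minimal sketch, assuming the 4x4 transform size of H.26L:

```python
import numpy as np

# Gather coefficient (u, v) from every b-by-b block of the residual frame
# into one "subband" plane, so coefficients of the same frequency sit
# together, wavelet-style, for context-based bit-plane coding.

def dct_to_subbands(coeffs, b=4):
    h, w = coeffs.shape                       # h, w must be multiples of b
    blocks = coeffs.reshape(h // b, b, w // b, b)
    sub = blocks.transpose(1, 3, 0, 2)        # (u, v, block_row, block_col)
    return sub.swapaxes(1, 2).reshape(h, w)   # tile the b*b subband planes
```

After this regrouping, the (0, 0) plane (all block DC terms) occupies the top-left tile, mimicking the LL subband of a wavelet decomposition.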
