Similar Documents
20 similar documents found (search time: 167 ms)
1.
Video coding based on the three-dimensional (3-D) wavelet transform is studied. Because the conventional 3-D wavelet transform structure for video suffers from several shortcomings, an improved 3-D wavelet transform structure is proposed. For a given bit rate, the optimal bit-allocation strategy that minimizes the quantization error of the decoded images is studied, and the quantization step sizes of the improved 3-D wavelet transform structure under uniform quantization are derived. Experimental results show that the compression performance of the method is clearly superior to that of the conventional approach.
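The uniform quantization the abstract refers to can be sketched as follows; the step size `delta` and the sample values are illustrative, not values from the paper:

```python
def quantize(x, delta):
    """Uniform scalar quantizer: map x to the index of its bin of width delta."""
    return round(x / delta)

def dequantize(q, delta):
    """Reconstruct at the center of bin q."""
    return q * delta

# The reconstruction error of a uniform quantizer is bounded by delta / 2.
delta = 0.5
for x in [0.0, 0.26, -1.3, 7.49]:
    x_hat = dequantize(quantize(x, delta), delta)
    assert abs(x - x_hat) <= delta / 2
```

Bit allocation then amounts to choosing a `delta` per subband so that the total distortion is minimized subject to the bit budget.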

2.
马波  裘正定 《电子学报》2000,28(1):53-56
Using linear system theory, this paper analyzes the self-quantized subtree (SQS) fractal-wavelet image coding algorithm proposed by Davis, and finds that the attractor of the SQS transform coincides with the steady state of a dynamical system. The encoding process is therefore, in effect, an encoding of the parameters of a dynamical system. This analysis reveals how quantization errors in the scaling-function coefficients affect the decoded image, so the decoding error can be controlled more effectively; it also gives deeper insight into the substantial benefits, for both encoding and decoding, of the SQS scheme of storing the scaling-function coefficients directly.

3.
张江山  朱光喜 《电子学报》2003,31(2):232-234
This paper proposes a noise-model-based compensation method for quantized wavelet coefficients. In the wavelet domain, coefficients truncated to zero are compensated with a simple noise model; the decoder synthesizes random texture from the model parameters, mitigating the random-texture distortion caused by zero-truncation quantization and thereby improving the subjective visual quality of the decoded image. Experimental results show that encoding the model parameters requires only 0.013 bpp, and that at the same coding bit rate the subjective image quality is greatly improved compared with the default JPEG2000 coding method.

4.
Images are compressed block by block with the discrete cosine transform (DCT): a two-dimensional DCT is applied to each block, and the resulting quantized DCT coefficients are encoded and transmitted to form the compressed image format. At the receiver, the quantized DCT coefficients are decoded, a two-dimensional inverse DCT (IDCT) is applied to each 8×8 block, and the processed blocks are reassembled into a complete image. The experiments exploit the directional distribution of energy in the DCT domain. Computer simulations show that the choice of quantization values has a marked effect on the compressed image, so the quantization values should be chosen according to the required image quality.
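The block pipeline described above (2-D DCT per 8×8 block, uniform quantization, dequantization, 2-D IDCT) can be sketched as follows; the flat quantization step `q` is a placeholder for whatever quantization values an encoder would choose, not a full JPEG quantization table:

```python
import numpy as np

N = 8

def dct_matrix(n=N):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix()

def encode_block(block, q):
    """2-D DCT of one 8x8 block, then uniform quantization with step q."""
    return np.round((C @ block @ C.T) / q).astype(int)

def decode_block(qcoeffs, q):
    """Dequantize the coefficients and apply the 2-D inverse DCT."""
    return C.T @ (qcoeffs * q) @ C

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(N, N)).astype(float)
rec = decode_block(encode_block(block, q=2.0), q=2.0)
# Each coefficient error is at most q/2, so the spatial error stays small.
assert np.max(np.abs(block - rec)) <= 8.0
```

Larger `q` discards more coefficient precision, which is exactly the quality/compression trade-off the abstract observes.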

5.
An image compression framework based on adaptive downsampling and super-resolution reconstruction is designed for the Joint Photographic Experts Group (JPEG) standard. At the encoder, multiple downsampling modes and quantization modes are defined for the image to be encoded; a rate-distortion optimization algorithm selects the optimal downsampling mode (DSM) and quantization mode (QM), and the image is then downsampled and JPEG-encoded under the selected modes. At the decoder, a super-resolution algorithm based on a convolutional neural network reconstructs the decoded downsampled image. The framework also remains effective and feasible when extended to the JPEG2000 standard. Simulation results show that, compared with mainstream codecs and state-of-the-art coding methods, the proposed framework effectively improves the rate-distortion performance of the coded images and yields better visual quality.

6.
A wavelet-based multispectral image compression method
潘波  金心宇 《激光与红外》2005,35(6):447-450
This paper proposes a multispectral image compression method based on the Karhunen-Loève transform (KLT) and wavelet quantization coding. The method first applies a KLT step to remove inter-band spectral redundancy, then performs a wavelet transform on each transformed band and quantizes the wavelet subband images with uniform-threshold trellis-coded quantization; finally, arithmetic coding entropy-codes the quantization results. So that the encoder can obtain rate-distortion-optimal quantization thresholds for every subband of every spectral band, a bit-allocation algorithm based on the statistics of the subband images and the rate-distortion characteristics of the trellis-coded quantizer is proposed. Experiments show that the method compresses multispectral images efficiently and exhibits excellent compression performance.
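The spectral-decorrelation step (a KLT across bands) can be sketched as below; the band data are synthetic assumptions, and the trellis-coded quantization and arithmetic coding stages are omitted:

```python
import numpy as np

def klt(bands):
    """Karhunen-Loeve transform across spectral bands.

    bands has shape (num_bands, num_pixels). Returns the decorrelated
    components and the transform (eigenvectors of the band covariance).
    """
    centered = bands - bands.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / centered.shape[1]
    _, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    transform = vecs.T[::-1]          # strongest component first
    return transform @ centered, transform

rng = np.random.default_rng(1)
base = rng.standard_normal(10000)
# Three synthetic, strongly correlated "spectral bands".
bands = np.stack([base,
                  0.9 * base + 0.1 * rng.standard_normal(10000),
                  0.8 * base + 0.2 * rng.standard_normal(10000)])
components, _ = klt(bands)
cov_after = components @ components.T / components.shape[1]
off_diag = cov_after - np.diag(np.diag(cov_after))
# After the KLT the inter-band covariance is (numerically) zero.
assert np.max(np.abs(off_diag)) < 1e-8
```

After this step almost all of the energy sits in the first component, which is why the subsequent per-band wavelet coder compresses well.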

7.
For feedback-free distributed video coding systems, a robust reconstruction algorithm for Wyner-Ziv frames is proposed. To address the degradation in video quality caused by bit-plane decoding errors, different reconstruction methods are used for the DC and AC coefficients; in particular, for DC coefficient quantization values that fail to decode, correlation information from the original image at the encoder is used to adaptively weight the contributions of the side-information quantization value and the failed quantization value to the reconstruction. Experimental results show that, compared with minimum-mean-square-error reconstruction, the algorithm effectively increases the average PSNR (peak signal-to-noise ratio) of the decoded video, with a clear improvement in the subjective quality of the decoded images.
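A common baseline for Wyner-Ziv reconstruction, clamping the side information to the decoded quantization bin, illustrates the reconstruction step being improved here; this is a generic sketch, not the paper's adaptive weighting of side information and failed quantization values:

```python
def wz_reconstruct(side_info, q_index, delta):
    """Clamp-based Wyner-Ziv reconstruction: if the side information falls
    inside the decoded quantization bin, trust it; otherwise clamp it to
    the nearest bin boundary."""
    lo = q_index * delta            # lower edge of the decoded bin
    hi = lo + delta                 # upper edge of the decoded bin
    return min(max(side_info, lo), hi)

delta = 4.0
assert wz_reconstruct(9.0, 2, delta) == 9.0    # inside bin [8, 12): keep SI
assert wz_reconstruct(14.5, 2, delta) == 12.0  # above the bin: clamp to 12
assert wz_reconstruct(5.0, 2, delta) == 8.0    # below the bin: clamp to 8
```

The paper's contribution is what to do when `q_index` itself is unreliable, which this baseline cannot handle.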

8.
A multiple description image coding algorithm based on pyramid lattice vector quantization
Multiple description coding (MDC) is a recent approach to the packet-loss problem in image communication over error-prone channels: the image is decomposed into several descriptions that are independent yet partially correlated and transmitted over different channels, improving decoded image quality when data are lost. This paper proposes a multiple description pyramid lattice vector quantization coding algorithm for images (MDPLVQ), which exploits the independence between wavelet trees and quantizes the wavelet coefficients with different pyramid lattice vector quantization scaling factors. The algorithm is simple to design and its redundancy is easy to control. Experimental results demonstrate its effectiveness: its compression performance surpasses multiple description scalar quantization (MDSQ), multiple description pairwise correlating transforms (MDPCT), and multiple description zerotree coding (MDEZW).

9.
Fully exploiting the correlation between the luminance and chrominance components of color video, an inter-frame coding scheme for color video based on DT meshes is proposed. During motion estimation, only the luminance component Y is described by a DT (Delaunay triangulation) mesh; continuous motion estimation is performed on the mesh nodes of the luminance component, a six-parameter model gives the motion vectors of the pixels inside each triangle, and the motion vectors of the chrominance components are obtained through a similarity transform. The residual image is additionally given special processing and coding. Experimental results show that, at the same compression ratio, the decoded images of this scheme have better subjective and objective quality than those of H.263, a coding method suited to low-bit-rate transmission.

10.
Building on the integer lifting wavelet transform and the SPIHT coding algorithm, this paper proposes a new image coding algorithm based on quadtree partition quantization and a correlation model. The algorithm works in three steps: (1) apply an integer lifting wavelet transform to the original image to obtain multiple subbands at different resolutions and orientations; (2) apply DPCM coding to the lowest-frequency subband and quadtree partition quantization coding to the high-frequency subbands; (3) complete the final arithmetic coding with the correlation model. Comparative experiments show that the algorithm outperforms SPIHT and similar algorithms in encoding/decoding speed and reconstructed image quality.
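Step (1) relies on an integer-to-integer (lifting) wavelet. The abstract does not name the filter, so the simplest such transform, the Haar-based S-transform, can stand in to show the exact-invertibility property that makes integer lifting attractive:

```python
def s_transform(a, b):
    """Integer Haar (S-) transform of a sample pair: approximate average
    and exact difference, both integers."""
    d = a - b
    s = b + (d >> 1)         # s == floor((a + b) / 2) for integers
    return s, d

def inverse_s_transform(s, d):
    """Exact inverse: recovers (a, b) with no rounding loss."""
    b = s - (d >> 1)
    return b + d, b

# Perfect reconstruction holds for every integer pair.
for a in range(-4, 5):
    for b in range(-4, 5):
        assert inverse_s_transform(*s_transform(a, b)) == (a, b)
```

Lossless invertibility is what lets the coder quantize the transform output aggressively while keeping the transform itself error-free.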

11.
We propose two new image compression-decompression methods that reproduce images with better visual fidelity, fewer blocking artifacts, and better PSNR, particularly at low bit rates, than those processed by the JPEG Baseline method at the same bit rates. The additional computational cost is small, i.e., linearly proportional to the number of pixels in an input image. The first method, the "full mode" polyharmonic local cosine transform (PHLCT), modifies the encoder and decoder parts of the JPEG Baseline method. The goal of the full mode PHLCT is to reduce the code size in the encoding part and reduce the blocking artifacts in the decoder part. The second one, the "partial mode" PHLCT (or PPHLCT for short), modifies only the decoder part, and consequently accepts JPEG files yet decompresses them with higher quality and fewer blocking artifacts. The key idea behind these algorithms is a decomposition of each image block into a polyharmonic component and a residual. The polyharmonic component in this paper is an approximate solution to Poisson's equation with the Neumann boundary condition, which means that it is a smooth predictor of the original image block using only the image gradient information across the block boundary. Thus, the residual (obtained by removing the polyharmonic component from the original image block) has approximately zero gradient across the block boundary, which gives rise to fast-decaying DCT coefficients and, in turn, to more efficient compression-decompression algorithms for the same bit rates. We show that the polyharmonic component of each block can be estimated solely from the first column and row of the DCT coefficient matrix of that block and those of its adjacent blocks, and that it predicts the original image data better than several previously proposed AC prediction methods.
Our numerical experiments objectively and subjectively demonstrate the superiority of PHLCT over the JPEG Baseline method and the improvement of JPEG-compressed images when decompressed by PPHLCT.

12.
A transform domain distributed video coding (DVC) codec is proposed using turbo trellis coded modulation (TTCM). TTCM symbols are generated at the DVC decoder using the side information and the parity bits received from the DVC encoder. These generated symbols are used at the TTCM-based DVC decoder to decode the bit stream. Simulation results show that a significant rate-distortion performance gain can be achieved using the proposed codec compared to the best state-of-the-art transform domain DVC codecs discussed in the literature.

13.
In this paper, we propose a perceptual-based distributed video coding (DVC) technique. Unlike traditional video codecs, DVC applies the video prediction process at the decoder side using previously received frames. The predicted video frames (i.e., side information) contain prediction errors. The encoder then transmits error-correcting parity bits to the decoder to reconstruct the video frames from the side information. However, channel codes based on i.i.d. noise models are not always efficient in correcting video prediction errors. In addition, some of the prediction errors do not cause perceptible visual distortions; from a perceptual coding point of view, there is no need to correct such errors. This paper proposes a scheme in which the decoder performs perceptual quality analysis on the predicted side information and requests parity bits only to correct visually sensitive errors. More importantly, with the proposed technique, key frames can be encoded at higher rates while still maintaining consistent visual quality across the video sequence. As a result, even the objective PSNR measure of the decoded video sequence increases as well. Experimental results show that the proposed technique improves the R-D performance of a transform domain DVC codec both subjectively and objectively. Comparisons with a well-known DVC codec show that the proposed perceptual-based DVC coding scheme is very promising for the distributed video coding framework.

14.
Side information has a significant influence on the rate-distortion (RD) performance of distributed video coding (DVC). In the conventional motion compensated frame interpolation scheme, all blocks adopt the same side-information generation method regardless of the unequal motion intensity in different regions. In this paper, an improved method is proposed. The image blocks are classified into two modes, fast motion and slow motion, by simply computing the discrete cosine transform (DCT) coefficients at the encoder. At the decoder, direct interpolation or refined motion compensated interpolation is chosen accordingly to generate the side information. Experimental results show that the proposed method, without increasing the encoder complexity, can increase the average peak signal-to-noise ratio (PSNR) by 1 to 2 dB compared with the existing algorithm. Meanwhile, the proposed algorithm significantly improves the subjective quality of the side information.
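The encoder-side classification by DCT coefficients can be sketched with AC energy: by Parseval's theorem, the squared AC coefficients of an orthonormal DCT sum to the block energy minus the DC energy, so no explicit transform is needed for the sketch. The threshold is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def classify_block(block, threshold=100.0):
    """Classify a block as 'fast' or 'slow' motion by its AC energy.

    For an orthonormal DCT, Parseval's theorem gives
        sum of squared AC coefficients
            == sum(block**2) - sum(block)**2 / block.size,
    so the AC energy is computable without an explicit DCT.
    """
    b = np.asarray(block, dtype=float)
    ac_energy = np.sum(b * b) - np.sum(b) ** 2 / b.size
    return "fast" if ac_energy > threshold else "slow"

flat = np.full((8, 8), 128)                    # uniform block: zero AC energy
rng = np.random.default_rng(2)
textured = rng.integers(0, 256, size=(8, 8))   # high-activity block
assert classify_block(flat) == "slow"
assert classify_block(textured) == "fast"
```

In the paper the decision actually drives which interpolation path the decoder uses, so only the mode bit (or an agreed rule) needs to be shared.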

15.
Multiple side information streams for distributed video coding
An improved Wyner-Ziv decoder for distributed video coding (DVC) is proposed, which uses multiple side information streams obtained by using multiple reference frames. Simulation results show that the proposed algorithm can achieve a significant PSNR gain of up to 2.4 dB over the best available DVC codec at the same bit rate.

16.
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner–Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder has a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner–Ziv video codec is improved by using novel, advanced motion compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference, decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated in a transform domain turbo coding based Wyner–Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements up to 2 dB; moreover, it outperforms H.264/AVC Intra by up to 3 dB with a lower encoding complexity.

17.
Wireless sensor networks utilize image compression algorithms such as JPEG, JPEG2000, and SPIHT for image transmission with high coding efficiency. During compression, discrete cosine transform (DCT)-based JPEG exhibits blocking artifacts at low bit rates; discrete wavelet transform (DWT)-based JPEG2000 and the SPIHT algorithm reduce this effect but possess high computational complexity. This paper proposes an efficient lapped biorthogonal transform (LBT)-based low-complexity zerotree codec (LZC), an entropy coder for image coding, to achieve high compression. The LBT-LZC algorithm yields high compression and better visual quality with low computational complexity. The performance of the proposed method is compared with other popular coding schemes based on LBT, DCT and wavelet transforms. The simulation results reveal that the proposed algorithm reduces blocking artifacts and achieves high compression; its noise resilience is also analyzed.

18.
Exploiting characteristics of the human visual system, a blind video watermarking algorithm in the DCT domain is proposed for the AVS video coding standard. The algorithm first applies an enhanced Arnold transform to the watermark image to obtain a scrambled binary watermark; the watermark information is then adaptively embedded into the DCT coefficients of I-frames by replacing the coefficients at the embedding positions according to an embedding formula. Watermark extraction does not require the original video, so blind watermarking is achieved. Experimental results show that the algorithm has good robustness and imperceptibility under attacks such as Gaussian noise, frame cropping, and salt noise.

19.
A new combination of coding methods for a 64 kbit/s transmission system for typical videophone situations is investigated. The codec structure is based on a standard hybrid discrete cosine transform (DCT) codec with temporal prediction. The picture is divided blockwise into changed and unchanged areas. One motion vector with sub-pel accuracy is computed and transmitted for each block of the changed area. For the forward analysis, the prediction error is calculated over the whole picture. Only the blocks with the highest prediction errors are updated by a DCT with perception-adaptive quantization. The number of DCT update blocks depends on the bits remaining after the transmission of the overhead information. The codec is controlled by a forward analysis of the prediction error rather than by buffer control. The spatial resolution of the source signal is reduced in two steps to prevent a codec overload caused by too much activity between two frames.
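The per-block motion estimation the abstract describes (one vector per changed block, here with integer rather than sub-pel accuracy) can be sketched as a full SAD search over a small window; block size, search range, and frame data are illustrative assumptions:

```python
import numpy as np

def motion_vector(ref, cur, top, left, bsize=8, search=4):
    """Full-search block motion estimation: find the displacement (dy, dx)
    into the reference frame that minimizes the sum of absolute differences
    (SAD) for one block of the current frame."""
    block = cur[top:top + bsize, left:left + bsize]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.sum(np.abs(block - ref[y:y + bsize, x:x + bsize]))
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(32, 32)).astype(float)
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))   # content shifted by (2, -1)
# The content moved by (2, -1), so the best match lies at (-2, 1) in ref.
assert motion_vector(ref, cur, top=8, left=8) == (-2, 1)
```

In the codec described above, the residual SAD from this search doubles as the forward-analysis measure that decides which blocks receive a DCT update.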

20.
Yamatani and Saito recently published an interesting method for predicting the discrete cosine transform (DCT) coefficients of an image block, which uses partial derivatives of the image at the block boundary points. It estimates the partial derivatives the same way for all four side boundary points. In this correspondence, we improve their estimation method for the left and top side boundary points by observing that the decoder can use the 1-D DCT of the rightmost column of pixels of the block to the left, and of the bottom row of pixels of the block above, instead of using just the DC of those two blocks. This led us to revise their prediction equations. Experimental results show that the cumulative reduction in the size of the first five AC coefficients obtained using their equations is 15.1%, while that obtained using our equations is 24.6%.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号