Similar Documents
20 similar documents found (search time: 185 ms)
1.
Tan Li, Sun Jifeng. Acta Electronica Sinica, 2015, 43(5): 1007
A compression algorithm for high-throughput DNA sequencing data is proposed. The algorithm first applies a codebook index transform model that re-expresses conventional codebook index values as quaternary numerals built from the four standard base characters, together with a concise coding scheme for delimiting substituted and non-substituted strings. It then uses the information entropy of each block to decide whether to apply the Burrows-Wheeler transform (BWT), and finally performs a move-to-front transform followed by Huffman entropy coding. Experiments on a variety of sequencing data sets show that, in most cases, CITD achieves better compression performance than the dedicated high-throughput DNA compression methods it is compared against.
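The BWT and move-to-front stages the abstract mentions can be sketched as follows; this is a minimal illustration of those two generic transforms, not the paper's CITD implementation (the codebook index transform and Huffman stages are omitted):

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations ('$' as sentinel)."""
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def move_to_front(s, alphabet):
    """Move-to-front coding: recently seen symbols get small indices,
    so BWT output (long runs of repeated symbols) becomes low-entropy."""
    table = list(alphabet)
    out = []
    for ch in s:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))
    return out
```

After BWT, repeated characters cluster together, so move-to-front produces many small indices that a final entropy coder compresses well.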

2.
Dynamic point clouds can effectively describe natural scenes and 3D objects and provide an immersive visual experience, but their data volume is huge and effective compression is required. A perceptual coding method for dynamic point clouds using a saliency-guided just noticeable distortion (SJND) model is proposed. To remove perceptual redundancy in texture maps, an SJND model in the discrete cosine transform (DCT) domain is constructed and applied to suppress DCT coefficients during texture-map coding. Since geometric distortion in salient regions is more noticeable at the same distortion level, a projected saliency map is used to partition the geometry map into layers; finally, adaptive quantization-parameter selection and coding are performed for the coding tree units of each layer. Compared with the V-PCC reference method, the proposed method improves the coding efficiency of dynamic point clouds while preserving their visual quality.
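The DCT-coefficient suppression step can be illustrated with a minimal sketch; the threshold matrix here is a stand-in, whereas the paper derives it from the SJND model:

```python
import numpy as np

def jnd_suppress(dct_block, jnd_thresholds):
    """Zero DCT coefficients whose magnitude falls below the
    per-coefficient JND threshold, i.e. remove distortions the
    perceptual model predicts a viewer cannot notice."""
    out = dct_block.copy()
    out[np.abs(out) < jnd_thresholds] = 0.0
    return out
```

Coefficients below their threshold carry perceptually invisible detail, so zeroing them saves bits without visible quality loss.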

3.
Because the WBCT compression algorithm filters images of different smoothness indiscriminately, its reconstruction quality for relatively smooth images falls below that of ordinary wavelet coders. To address this, this paper defines an image smoothness measure, classifies images accordingly, and applies directional filtering of varying strength using multi-directional, multi-scale critical sampling. Exploiting the properties of the transformed wavelet coefficients, the SPIHT algorithm is used for embedded coding, thereby improving the WBCT compression algorithm...

4.
Point cloud coding is one of the key technologies supporting the wide application of point clouds, and is a current hot topic in research and standardization. This article reviews the evolution of coding techniques for point cloud geometry and attribute information, and compares the coding efficiency of several typical methods on dense and sparse point clouds. Future research on point cloud coding will focus on inter-frame prediction to remove correlation between frames of dynamic point clouds, as well as on end-to-end and task-driven point cloud coding.

5.
Wu Jiaji, Wu Chengke, Wu Zhensen. Acta Electronica Sinica, 2006, 34(10): 1828-1832
Region-of-interest (ROI) coding is an important technique introduced in JPEG2000, yet the JPEG2000 algorithm cannot simultaneously support arbitrarily shaped ROIs and arbitrary scaling factors. This paper proposes a 3D volumetric image compression algorithm based on arbitrarily shaped ROIs and 3D lifting-wavelet zero-block coding. The new algorithm supports lossy-to-lossless coding both inside and outside the ROI. A simple method for generating lossless masks for arbitrarily shaped ROIs is presented. Considering the characteristics of 3D subbands, a modified 3D SPECK zero-block algorithm is used to code the transformed coefficients. Other algorithms supporting arbitrarily shaped ROI coding are also evaluated; experiments show that the proposed algorithm achieves better coding performance.

6.
Based on an analysis of image wavelet-transform coefficients, and targeting SPECK's difficulty in coding significant coefficients in higher-frequency subbands, a staircase-quantization-optimized SPECK algorithm, QSPECK, is proposed to improve SPECK's coding efficiency. The higher-frequency subbands of the wavelet coefficient matrix are first staircase-quantized, the quantized matrix is then block-coded, and successive-approximation quantization finally produces the embedded bitstream. Experiments show that the algorithm improves coding efficiency and raises the PSNR.

7.
To address the inefficiency of JPEG2000 bit-plane coding implementations more effectively, a fast word-level sequential embedded block coding method is proposed. The new method executes coding-pass prediction and context formation for samples in a parallel pipeline, completing the multi-pass coding of all bit planes of a code-block's coefficients in a single scan. A new high-speed architecture, word-level sequential and multi-word parallel, is proposed; it forms the contexts of the four coefficients of one stripe column in a single clock cycle, so context formation for an N×N code block takes only about N²/4 clock cycles. Theoretical analysis and experimental results show that the new architecture offers faster data processing and a better speed-to-cost ratio than the latest designs of its kind.

8.
A new image compression algorithm based on significance-map coding of wavelet coefficients is proposed. According to the characteristics of the quantized wavelet coefficients, the algorithm sorts them by expected magnitude and then discards the large number of zero-valued wavelet coefficients at the tail of the sequence, obtaining a subset of wavelet coefficients that approximates the original image well with few coefficients and avoiding the large bit overhead of the zerotree structure in zerotree coding. Experiments show that, compared with the MPEG-4 still-image compression algorithm, the reconstructed image's peak signal-to-noise ratio (PSNR) is considerably higher at the same bit rate.
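The core idea, keeping only the largest-magnitude coefficients and dropping the zero-valued tail, can be sketched as follows (an illustrative top-k selection, not the paper's expected-sorting procedure):

```python
import numpy as np

def keep_largest(coeffs, k):
    """Keep the k largest-magnitude coefficients, zero the rest.
    Ties at the threshold may keep slightly more than k entries."""
    flat = coeffs.ravel()
    if k < flat.size:
        thresh = np.sort(np.abs(flat))[-k]
        flat = np.where(np.abs(flat) >= thresh, flat, 0.0)
    return flat.reshape(coeffs.shape)
```

Because wavelet energy concentrates in few coefficients, the retained subset reconstructs the image well while the zeroed tail costs almost no bits.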

9.
The similarities and differences between predictive coding and DCT transform coding are analyzed in depth, and the strengths and weaknesses of the two are compared from several angles, including coding gain, blocking-effect coefficient, and the energy relationships before and after prediction and transformation.

10.
To address the large data volume and the limited computing power and memory of the satellite-side encoder in spaceborne dual-view image compression, a distributed lossless compression method for dual-view images based on recursive prediction is proposed. The two views are coded differently: view 1, as the key view, is compressed independently in JPEG2000 lossless mode; view 2 is first partitioned into blocks, one of which serves as the key block and is likewise compressed independently in JPEG2000 lossless mode, while the remaining blocks are coded distributively. The encoder applies a 2D integer wavelet transform to remove spatial redundancy within each view; the decoder exploits inter-view correlation with a recursive prediction structure, using registration and multiple linear regression to generate side information for the remaining blocks of view 2 to assist their decoding. Experimental results show that, compared with JPEG2000 lossless coding that ignores inter-view redundancy, the average coding bit rate is reduced by roughly 0.296-0.6 bpp and the time complexity drops to 5.97%-14.3% of it; compared with a distributed coding scheme using initial side information, the time complexity is the same but the average bit rate is about 0.45-0.51 bpp lower; and compared with a published image algorithm that does exploit inter-image redundancy, the proposed method loses 0.2 bpp in bit rate but reduces coding complexity to 4.3% of it. The proposed method is therefore advantageous and meets spaceborne image compression requirements.

11.
For feedback-free distributed video coding systems, a robust reconstruction algorithm for Wyner-Ziv frames is proposed. To counter the video-quality degradation caused by bit-plane decoding errors, different reconstruction methods are used for DC and AC coefficients; in particular, for DC quantized values whose decoding fails, correlation information from the original image at the encoder is used to adaptively weight the contributions of the side-information quantized value and the failed quantized value to the reconstruction. Experimental results show that, compared with the minimum-mean-square-error reconstruction algorithm, the algorithm effectively raises the average PSNR (peak signal-to-noise ratio) of decoded video and visibly improves its subjective quality.
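The baseline reconstruction rule that such schemes refine, clamping the side-information value to the decoded quantization interval, is simple to state; the paper's contribution is the adaptive weighting used when decoding of a DC bin fails:

```python
def reconstruct(y, bin_low, bin_high):
    """Classic Wyner-Ziv reconstruction: if the side information y
    falls inside the decoded quantization bin, trust it; otherwise
    clamp y to the nearest bin boundary."""
    return min(max(y, bin_low), bin_high)
```

This keeps the reconstruction consistent with the decoded bin while retaining the side information's finer detail whenever the two agree.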

12.
Modified JPEG Huffman coding   (cited by: 3; self-citations: 0; citations by others: 3)
It is a well-observed characteristic that when a DCT block is traversed in zigzag order, the AC coefficients generally decrease in size and the run-lengths of zero coefficients increase in number. This article presents a minor modification to the Huffman coding of the JPEG baseline compression algorithm to exploit this redundancy. For this purpose, DCT blocks are divided into bands so that each band can be coded using a separate code table. Three implementations are presented, all of which move the end-of-block marker up into the middle of the DCT block and use it to indicate the band boundaries. Experimental results compare the reduction in code size obtained by our methods with the JPEG sequential-mode Huffman coding and arithmetic coding methods. The average code reduction relative to the total image code size of one of our methods is 4%. Our methods can also be used for progressive image transmission; hence, experimental results are also given to compare them with two-, three-, and four-band implementations of the JPEG spectral selection method.
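The band-division idea can be sketched as follows; the band boundaries here are illustrative, not the ones used in the article:

```python
def split_bands(zigzag_coeffs, boundaries=(6, 15, 28)):
    """Split a zigzag-ordered DCT block into frequency bands.
    Each band would then be Huffman-coded with its own code table,
    matching that band's coefficient statistics."""
    bands, start = [], 0
    for b in list(boundaries) + [len(zigzag_coeffs)]:
        bands.append(zigzag_coeffs[start:b])
        start = b
    return bands
```

Because low-frequency bands contain mostly large coefficients and high-frequency bands mostly zero runs, per-band tables match the local statistics better than one global table.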

13.
This paper presents a novel blind watermarking algorithm in the DCT domain that uses the correlation between two DCT coefficients of adjacent blocks at the same position. One DCT coefficient of each block is modified to bring its difference from the adjacent block's coefficient into a specified range. The value used to modify the coefficient is obtained by taking the difference between the DC coefficient and the median of a few low-frequency AC coefficients, normalized by the DC coefficient. The proposed watermarking algorithm is tested under different attacks. It shows very good robustness under JPEG image compression compared to existing methods, and the watermark is extracted with good quality after other common image-processing operations such as cropping, rotation, brightening, sharpening, and contrast enhancement.
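A minimal sketch of embedding and blind extraction via an inter-block coefficient difference follows; the fixed margin and modification rule here are simplified stand-ins for the paper's DC-normalized adjustment value:

```python
def embed_bit(c_cur, c_adj, bit, margin=10.0):
    """Force the difference (c_cur - c_adj) to be at least `margin`
    for bit 1, or at most -`margin` for bit 0."""
    diff = c_cur - c_adj
    if bit == 1 and diff < margin:
        return c_adj + margin
    if bit == 0 and diff > -margin:
        return c_adj - margin
    return c_cur

def extract_bit(c_cur, c_adj):
    """Blind extraction: only the sign of the difference is needed,
    so no original image is required at the detector."""
    return 1 if c_cur - c_adj >= 0 else 0
```

The margin trades robustness (larger survives heavier compression) against visible distortion.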

14.
In the traditional approach of block transform image coding, a large number of bits are allocated to the DC coefficients. A technique called DC coefficient restoration (DCCR) has been proposed to further improve the compression ability of block transform image coding by not transmitting the DC coefficients but estimating them from the transmitted AC coefficients. Images thus generated, however, have inherent errors that degrade visual quality. In this paper, a global-estimation DCCR scheme is proposed that eliminates these inherent errors. The scheme estimates all the DC coefficients of the blocks simultaneously by minimising the sum of the energies of all the edge-difference vectors of the image. The performance of the global-estimation DCCR is evaluated using a mathematical model and experiments. Fast algorithms are also developed for efficient implementation of the proposed scheme.

15.
To further reduce the output bit rate of High Efficiency Video Coding (HEVC), an improved algorithm for context-adaptive binary arithmetic coding (CABAC) is proposed. Exploiting the fact that large transform units (TUs) and transform-skip coefficient blocks contain many large-valued coefficients, the algorithm first, within 32×32 transform units, adaptively determines the initial Golomb-Rice parameter of the next 4×4 coefficient group (CG) from the distribution of coefficient values in already-coded CGs. Second, in transform-skip coefficient blocks, it sets the initial Golomb-Rice parameter to 1 and then, exploiting the correlation between neighboring coefficients, adaptively determines the coding parameter of the next coefficient from the absolute values of already-coded coefficients. Experimental results show that, compared with the HEVC reference algorithm HM16.0, the proposed algorithm achieves a bit-rate reduction of 0.09%-2.75%, effective in over 90% of cases on average, with no loss in peak signal-to-noise ratio (PSNR) and an average encoding-time increase of only 0.08%. Compared with a representative published method, the proposed algorithm saves 0.49% bit rate and improves PSNR by 0.01 dB on average.
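For reference, a plain Golomb-Rice code of a non-negative value with parameter k, the quantity the proposed algorithm adapts per coefficient group, can be written as:

```python
def golomb_rice_encode(value, k):
    """Golomb-Rice code: unary-coded quotient followed by a k-bit
    binary remainder. Small k suits small values; large k suits
    large values, which is why adapting k per group saves bits."""
    q = value >> k
    r = value & ((1 << k) - 1)
    code = "1" * q + "0"            # unary quotient, '0'-terminated
    if k > 0:
        code += format(r, "b").zfill(k)
    return code
```

For example, with k too small, a large coefficient produces a long unary prefix; adapting k to the local coefficient magnitudes keeps codewords short.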

16.
A curvature-based principal component analysis (PCA) algorithm is proposed. The algorithm first computes the curvature at each point, including the principal, Gaussian, and mean curvatures, and maps the point cloud into curvature space; principal components are then extracted in curvature space, and the point cloud data are compressed. Experimental results show that the algorithm achieves good compression performance. Introducing PCA into 3D point cloud compression remedies shortcomings of the traditional PCA approach.
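The principal-component extraction step can be sketched with a generic SVD-based PCA projection (not the paper's curvature-space specifics; `pca_project` is an illustrative name):

```python
import numpy as np

def pca_project(points, n_components):
    """Project points onto their top principal directions; keeping
    fewer components than dimensions compresses the data."""
    mean = points.mean(axis=0)
    centered = points - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]          # top principal directions
    return centered @ basis.T, basis, mean
```

Points reconstruct as `proj @ basis + mean`; reconstruction is exact when the data truly lie in an `n_components`-dimensional subspace.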

17.
In image-based rendering with adjustable illumination, the data set contains a large number of pre-captured images under different sampling lighting directions. Instead of individually compressing each pre-captured image, we propose a two-level compression method. First, we use a few spherical harmonic (SH) coefficients to represent the plenoptic property of each pixel. The classical discrete-summation method for extracting SH coefficients requires that the sampling lighting directions be uniformly distributed over the whole spherical surface; it cannot handle irregularly distributed sampling lighting directions. A constrained least-squares algorithm is proposed to handle this case. Afterwards, embedded zerotree wavelet coding is used to remove the spatial redundancy in the SH coefficients. Simulation results show our approach is much superior to the JPEG, JPEG2000, MPEG-2, and 4D wavelet compression methods. A way to allow users to interactively control the lighting condition of a scene is also discussed.
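The least-squares fit of SH coefficients under irregular lighting directions can be illustrated with a first-order (4-term) fit; `fit_sh_l1` is a hypothetical name, and the plain `lstsq` here stands in for the paper's constrained formulation:

```python
import numpy as np

def fit_sh_l1(dirs, intensities):
    """Fit first-order SH (constant + 3 linear terms) to one pixel's
    intensities sampled under lighting directions `dirs` (unit
    vectors, rows). Least squares needs no uniform sampling, so
    irregularly distributed directions are handled naturally."""
    A = np.hstack([np.ones((len(dirs), 1)), dirs])   # basis: 1, x, y, z
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs
```

Storing 4 coefficients per pixel instead of one intensity per sampled direction is the first compression level; the wavelet stage then removes spatial redundancy across pixels.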

18.
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, comprising an integer wavelet transform and a Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule strategy is applied to the high-frequency and low-frequency sub-bands: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is suitable for space lossless compression applications.

19.
Data interleaving schemes have proven to be an important mechanism for reducing the impact of correlated network errors on image/video transmission. Current interleaving schemes fall into two main categories: (a) schemes that interleave pixel intensity values and (b) schemes that interleave JPEG/MPEG transform blocks. Schemes in the first category suffer a lower compression ratio, since highly correlated information in the spatial domain is de-correlated prior to compression. Schemes in the second category interleave DCT-transformed blocks; in this case, in the absence of ARQ, a lost packet may cost an entire block, yielding poor image quality and making error concealment difficult. Interleaving transform coefficients is tricky, and error concealment in the presence of lost coefficients is challenging. In this paper, we develop three interleaving schemes, namely Triangular, Quadrant, and Coefficient, that interleave frequency-domain transform coefficients. The transform coefficients within each block are divided into small groups, and these groups are interleaved with groups from other blocks in the image; hence they are referred to as inter-block interleaving schemes. The proposed schemes differ in group size. In the Triangular scheme, the AC coefficients in each block are divided into two triangles, and interleaving is performed among triangles from different blocks. In the Quadrant scheme, the coefficients in each block are divided into four quadrants, and the quadrants are interleaved. In the Coefficient scheme, each coefficient in a block forms a group and is interleaved with the coefficients of other blocks. The compression ratio of the proposed interleaving schemes is impressive, ranging from 90 to 98% of the JPEG standard compression, while providing much higher robustness in the presence of correlated losses.
We also propose two new variable end-of-block (VEOB) techniques, one based on the number of AC coefficients per block (VAC-EOB) and the other based on the number of bits per block (VB-EOB). Our proposed interleaving techniques combined with VEOB schemes yield significantly better compression ratios than the JPEG (2-11%) and MPEG-2 (3-6.7%) standards, while at the same time improving the resilience of the coded data in the presence of transmission errors.
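A round-robin assignment in the spirit of the Coefficient scheme can be sketched as follows (simplified; the actual group sizes and packetization details differ per scheme):

```python
def coefficient_interleave(blocks, n_packets):
    """Scatter same-position coefficients from different blocks
    across packets, so that losing one packet costs each block
    only a few coefficients instead of a whole block."""
    packets = [[] for _ in range(n_packets)]
    for b, block in enumerate(blocks):
        for i, coeff in enumerate(block):
            packets[(b + i) % n_packets].append((b, i, coeff))
    return packets
```

After a correlated burst loss, every block still has most of its coefficients intact, which makes error concealment far easier than recovering an entire missing block.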

20.
To exploit the strong correlation between virtually rendered viewpoints and improve the compression efficiency of light field images, a light field image compression algorithm based on viewpoint correlation is proposed. Built on the HEVC screen content coding extension platform, the algorithm uses a linear weighting algorithm and a hybrid intra-block-copy prediction algorithm to improve the prediction accuracy of coding blocks, and uses rate-distortion optimization to adaptively select the optimal coding block size and prediction mode. Results show that, compared with the HEVC standard, the proposed algorithm achieves an average BD-PSNR coding gain of 2.55 dB while obtaining good virtual-view rendering quality. By fully exploiting the strong correlation between virtually rendered viewpoints, the algorithm improves light field image coding efficiency.
