20 similar documents found (search time: 140 ms)
1.
To address the impractically long encoding time of fractal image compression, a new feature, the sub-block diagonal sum, is defined to characterize image blocks. The algorithm sorts the codebook by this feature; for each range block to be encoded, it locates in the sorted codebook the domain block whose sub-block diagonal sum is closest to that of the range block, and searches for the best-matching block only within that domain block's neighborhood. Experimental results show that the algorithm markedly increases encoding speed while preserving decoded image quality.
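The feature-ordered neighborhood search described above can be sketched as follows. The abstract does not give the exact definition of the sub-block diagonal sum, so `diagonal_sum` below (sum of the block's two diagonals) is an illustrative assumption, as are all function names:

```python
import bisect
import numpy as np

def diagonal_sum(block):
    """Feature: sum of the two diagonals of a square block (assumed definition)."""
    return float(np.trace(block) + np.trace(np.fliplr(block)))

def build_sorted_codebook(domain_blocks):
    """Sort domain-block indices by their feature value (done once)."""
    feats = [(diagonal_sum(d), i) for i, d in enumerate(domain_blocks)]
    feats.sort()
    return feats  # list of (feature, index), ascending by feature

def best_match(range_block, domain_blocks, codebook, radius=2):
    """Search only a neighborhood (in feature order) of the closest-feature block."""
    f = diagonal_sum(range_block)
    pos = bisect.bisect_left(codebook, (f, -1))
    lo, hi = max(0, pos - radius), min(len(codebook), pos + radius + 1)
    best_i, best_err = -1, float("inf")
    for _, i in codebook[lo:hi]:
        err = float(np.sum((domain_blocks[i] - range_block) ** 2))
        if err < best_err:
            best_i, best_err = i, err
    return best_i, best_err
```

Sorting the codebook costs O(N log N) once; each range block then needs only a binary search plus a small neighborhood scan instead of a full O(N) pass over the codebook.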
2.
To speed up fractal image compression, and to avoid the long encoding time caused by the baseline algorithm's exhaustive matching of each range block against every transformed domain block, a new fractal coding algorithm based on diamond search is proposed. Diamond search is a fast motion-estimation technique whose core step is to find the best match for the current block among candidate blocks. By applying the large and small diamond search patterns to the matching step, experiments show the algorithm is effective at increasing encoding speed and reducing encoding complexity.
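Diamond search itself (independent of the fractal-coding application) follows a simple two-template loop: iterate the large diamond pattern until its best point is the center, then refine once with the small pattern. A generic sketch, with SAD as the assumed matching cost:

```python
import numpy as np

# large diamond (9 points) and small diamond (5 points) search patterns
LDSP = [(0,0),(0,2),(0,-2),(2,0),(-2,0),(1,1),(1,-1),(-1,1),(-1,-1)]
SDSP = [(0,0),(0,1),(0,-1),(1,0),(-1,0)]

def sad(ref, cur, y, x, bs):
    """Sum of absolute differences; inf for out-of-frame candidates."""
    if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
        return float("inf")
    return float(np.abs(ref[y:y+bs, x:x+bs] - cur).sum())

def diamond_search(ref, cur, y0, x0, bs=8, max_iter=32):
    """Classic diamond search: iterate the large pattern until the best
    point is the center, then refine once with the small pattern."""
    cy, cx = y0, x0
    for _ in range(max_iter):
        costs = [(sad(ref, cur, cy+dy, cx+dx, bs), cy+dy, cx+dx) for dy, dx in LDSP]
        best = min(costs)
        if (best[1], best[2]) == (cy, cx):
            break
        cy, cx = best[1], best[2]
    costs = [(sad(ref, cur, cy+dy, cx+dx, bs), cy+dy, cx+dx) for dy, dx in SDSP]
    _, cy, cx = min(costs)
    return cy - y0, cx - x0  # motion vector
```

The large pattern moves up to two pixels per iteration; the final small-pattern pass refines the result to one-pixel accuracy.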
3.
Current fractal image search coding ignores the contrast-factor constraint, and for high-activity images its compression ratio is low, which degrades both decoded image quality and computational efficiency. To address this, a fractal compression algorithm is constructed that couples a restricted D-block (domain-block) search range with a pole-interpolation reconstruction technique. A contrast-factor constraint is introduced, an encoding scheme that restricts the D-block search range is designed, and a pole-interpolation reconstruction technique based on the Peano scan is proposed. Performance tests show that, compared with current fractal compression algorithms, the proposed algorithm compresses better: average PSNR improves by 1 dB to 3 dB, encoding speed is about 11.6 times higher, meeting real-time requirements, and the gains grow as the compression ratio increases.
4.
Fractal image compression exploits an image's inherent self-similarity, removing data redundancy with contractive affine transforms to achieve high compression ratios. Its fatal drawback, however, is high computational complexity and long running time, which makes it impractical given today's enormous volumes of image data. To address the long encoding time of baseline fractal coding, a sub-block mean-point feature fractal coding algorithm is proposed: it converts the full search of baseline fractal coding into a local search, limiting the search range and reducing the number of domain blocks examined, and thereby speeds up encoding at a slight cost in objective quality. Compared in simulation with the five-point-sum feature, 1-norm feature, Euclidean-ratio feature, and double-cross algorithms, the proposed algorithm achieves better objective quality (Peak Signal-to-Noise Ratio, PSNR) while being only slightly slower.
5.
Lou Li. Microelectronics & Computer, 2006, 23(5): 66-68
This paper proposes an improved fractal coding algorithm based on neighborhood search matching. At comparable signal-to-noise ratios it achieves a higher compression ratio and faster encoding than Jacquin's fractal block coding. Experimental results show a compression ratio of up to 15.28 and a peak signal-to-noise ratio of up to 30.35 dB; on a PC-586, encoding takes 196 s and decoding 36 s.
6.
A fast fractal image coding algorithm based on moment of inertia
To address the long encoding time of baseline fractal coding, a fast algorithm is proposed based on a newly defined moment-of-inertia feature of image blocks. The algorithm orders the codebook by moment of inertia and, for each range block to be matched, finds the code block whose moment of inertia is closest, searching for the range block's best match only within that code block's neighborhood. Experimental results show that, compared with baseline fractal coding, encoding time is greatly reduced with little loss of image quality, and the algorithm outperforms the morphological-feature algorithm.
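The abstract does not give the paper's formula, but one plausible reading of an image block's moment of inertia, treating gray levels as point masses about the intensity centroid, is:

```python
import numpy as np

def moment_of_inertia(block):
    """Moment of inertia of an image block, treating gray levels as point
    masses at pixel coordinates (an assumed definition; the paper's exact
    formula is not given in the abstract)."""
    b = np.asarray(block, dtype=float)
    h, w = b.shape
    total = b.sum()
    if total == 0:
        return 0.0
    ys, xs = np.mgrid[0:h, 0:w]
    cy = (ys * b).sum() / total          # intensity centroid (row)
    cx = (xs * b).sum() / total          # intensity centroid (column)
    return float((((ys - cy) ** 2 + (xs - cx) ** 2) * b).sum())
```

A feature defined this way is invariant under the eight rotations and reflections applied to domain blocks in fractal coding, which is what makes it usable as a single ordering key for the codebook.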
7.
At present, the main drawback of fractal image coding is still its long encoding time, so accelerating fractal encoding has become a research focus. This paper derives an inequality from the general fractal coding formula; using it, large numbers of domain blocks that cannot possibly match a given range block are excluded in advance, reducing the range-domain matching computations and thus the encoding time. Experimental results show that, with the decoded image quality essentially unchanged, the method needs far less encoding time than Fisher's scheme.
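The abstract does not reproduce the paper's inequality. As an illustration of the same pruning idea, here is a simple, provably valid lower bound for plain SSD matching, ||R - D||^2 >= N * (mean(R) - mean(D))^2 (a consequence of the Cauchy-Schwarz inequality), which lets unpromising domain blocks be rejected from their means alone:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equal-sized blocks."""
    return float(((a - b) ** 2).sum())

def search_with_pruning(range_block, domain_blocks):
    """Best SSD match, skipping domains whose lower bound already exceeds
    the best error found so far.  Since the bound never overestimates the
    true error, pruning is exact: the returned match equals full search."""
    n = range_block.size
    mu_r = float(range_block.mean())
    best_i, best_err = -1, float("inf")
    pruned = 0
    for i, d in enumerate(domain_blocks):
        lower = n * (mu_r - float(d.mean())) ** 2
        if lower >= best_err:
            pruned += 1           # cheap rejection: no full SSD computed
            continue
        err = ssd(range_block, d)
        if err < best_err:
            best_i, best_err = i, err
    return best_i, best_err, pruned
```

Block means can be precomputed once for the whole codebook, so each rejection costs one multiplication instead of a full block comparison.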
9.
Combining fractal coding with the knight's tour, an image compression-encryption algorithm is proposed. The path generated by a knight's tour serves as the key; the path is represented as a matrix, each element of which corresponds to one range block of the fractal code. Fractal encoding then proceeds along the knight's-tour path, starting from a chosen element and advancing with a chosen stride; quadtree splitting can be applied to preserve image quality. Decoding is the inverse process. MATLAB simulations measure the scrambling degree and key sensitivity and show that, at acceptable decoded image quality, the compression ratio is better than JPEG's.
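Generating the key path is the self-contained part of this scheme. A sketch using Warnsdorff's rule (the abstract does not say how the tour is produced, so this heuristic, the 8x8 board size, and the block-scrambling helper are all illustrative assumptions; Warnsdorff's rule can occasionally fail, in which case one would retry from another start square):

```python
def knights_tour(n=8, start=(0, 0)):
    """Open knight's tour via Warnsdorff's rule: always move to the
    reachable unvisited square with the fewest onward moves."""
    moves = [(1,2),(2,1),(2,-1),(1,-2),(-1,-2),(-2,-1),(-2,1),(-1,2)]
    board = [[-1] * n for _ in range(n)]
    y, x = start
    board[y][x] = 0
    path = [(y, x)]

    def options(y, x):
        return [(y + dy, x + dx) for dy, dx in moves
                if 0 <= y + dy < n and 0 <= x + dx < n and board[y + dy][x + dx] == -1]

    for step in range(1, n * n):
        cand = options(y, x)
        if not cand:
            return None  # heuristic dead end; caller should retry elsewhere
        y, x = min(cand, key=lambda p: len(options(*p)))
        board[y][x] = step
        path.append((y, x))
    return path

def scramble(blocks, path, n=8):
    """Use the tour as a permutation key over an n*n grid of image blocks."""
    order = [y * n + x for y, x in path]
    return [blocks[i] for i in order]
```

The path doubles as the decryption key: inverting the permutation restores the original block order before fractal decoding.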
10.
An improved fractal image compression algorithm
To shorten fractal encoding time, an improved fractal compression algorithm is proposed based on a statistical analysis of the image's domain and range blocks. An adaptive search method for the best-matching domain block narrows the search range, and the range-block mean replaces the gray-level offset to cut computation. Experiments show that the improved algorithm greatly shortens compression encoding time while maintaining high decoded image quality.
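The "range-block mean instead of gray-level offset" device can be sketched as follows: with the least-squares contrast s computed on mean-removed blocks, the stored parameters become (s, mean(R)) rather than (s, o), since o = mean(R) - s * mean(D). This is an illustrative reconstruction, as the abstract gives no formulas:

```python
import numpy as np

def encode_params(range_block, domain_block):
    """Least-squares contrast s on mean-removed blocks; the range mean r_bar
    is stored in place of an explicit offset o = r_bar - s * d_bar."""
    r = np.asarray(range_block, dtype=float)
    d = np.asarray(domain_block, dtype=float)
    rc, dc = r - r.mean(), d - d.mean()
    denom = float((dc * dc).sum())
    s = float((rc * dc).sum()) / denom if denom else 0.0
    return s, float(r.mean())

def decode_block(domain_block, s, r_bar):
    """Reconstruct: scale the mean-removed domain and add the stored range mean."""
    d = np.asarray(domain_block, dtype=float)
    return s * (d - d.mean()) + r_bar
```

Besides saving one multiply-add per parameter pair at encode time, storing the range mean lets the decoder seed its first iteration with block means, which speeds convergence.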
11.
Motion estimation is the most important and most time-consuming part of video coding, accounting for 60% to 80% of total encoding time, so efficient fast motion-estimation algorithms are an important research topic in video compression. Based on the H.264 standard and using x264 as the test encoder, this work analyzes x264's four motion-estimation algorithms and improves the unsymmetrical-cross multi-hexagon-grid search (UMHexagonS) by adding an asymmetric small-diamond search, reducing the number of search points, and optimizing parts of the algorithm, yielding the unsymmetrical-cross multi-level octagon-grid search (x264_ME_UMO) algorithm. Tests on a range of video sequences show that, while essentially preserving the original coding performance and image quality, the optimized algorithm raises encoding speed by about 17% on average and better meets practical needs.
12.
Soo-Chang Pei, Chien-Cheng Tseng, Ching-Yung Lin. IEEE Transactions on Image Processing, 1996, 5(3): 411-415
Iterated function systems (IFSs) have received great attention in encoding and decoding fractal images. Barnsley (1988) has shown that IFSs for image compression can achieve a very high compression ratio for a single image. However, the major drawback of such a technique is the large computation load required to both encode and decode a fractal image. We provide a novel algorithm to decode IFS codes. The main features of this algorithm are that it is very suitable for parallel implementation and has no transient behavior. Also, from the decoding process of this method we can understand the encoding procedure explicitly. One example is illustrated to demonstrate the quality of its performance.
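The decoder proposed in this entry avoids iteration; for contrast, the standard iterative decoding of fractal codes that it targets looks like this (a minimal sketch; the code tuple format `(ry, rx, dy, dx, s, o)` and the 2x averaging step are assumptions, not the paper's formulation):

```python
import numpy as np

def decode_fractal(codes, n, block, iters=12):
    """Iterative fractal decoding: start from any image and repeatedly apply
    the stored contractive maps; contractivity drives convergence to the
    attractor regardless of the initial image."""
    img = np.zeros((n, n))
    for _ in range(iters):
        out = np.empty_like(img)
        for (ry, rx, dy, dx, s, o) in codes:
            d = img[dy:dy + 2 * block, dx:dx + 2 * block]
            # shrink the domain 2x by averaging 2x2 neighborhoods
            d = d.reshape(block, 2, block, 2).mean(axis=(1, 3))
            out[ry:ry + block, rx:rx + block] = s * d + o
        img = out
    return img
```

Each outer iteration touches every range block once, and successive iterations depend on each other, which is exactly the transient behavior the paper's parallel decoder eliminates.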
13.
An image coding algorithm, Progressive Resolution Coding (PROGRES), for a high-speed resolution scalable decoding is proposed. The algorithm is designed based on a prediction of the decaying dynamic ranges of wavelet subbands. Most interestingly, because of the syntactic relationship between two coders, the proposed method costs an amount of bits very similar to that used by uncoded (i.e., not entropy coded) SPIHT. The algorithm bypasses bit-plane coding and complicated list processing of SPIHT in order to obtain a considerable speed improvement, giving up quality scalability, but without compromising coding efficiency. Since each tree of coefficients is separately coded, where the root of the tree corresponds to the coefficient in LL subband, the algorithm is easily extensible to random access decoding. The algorithm is designed and implemented for both 2-D and 3-D wavelet subbands. Experiments show that the decoding speeds of the proposed coding model are four times and nine times faster than uncoded 2-D-SPIHT and 3-D-SPIHT, respectively, with almost the same decoded quality. The higher decoding speed gain in a larger image source validates the suitability of the proposed method to very large scale image encoding and decoding. In the Appendix, we explain the syntactic relationship of the proposed PROGRES method to uncoded SPIHT, and demonstrate that, in the lossless case, the bits sent to the codestream for each algorithm are identical, except that they are sent in different order.
15.
In clustering analysis based on swarm-intelligence optimization, most encoding methods use only a single form, which can limit the range of the search space and cause the algorithm to fall into local optima. To solve this problem, a hybrid-encoding image clustering algorithm (HEICA) is proposed. First, a hybrid encoding model for image clustering is established, which expands the scope of the search space. The model is combined with two optimization algorithms, an improved rain forest algorithm (IRFA) and quantum particle swarm optimization (QPSO), to improve global search capability. Simulation experiments on four datasets, compared against four other clustering algorithms, show that the proposed algorithm has strong global search capability, high stability, and good clustering quality.
Byung Cheol Song, Jong Beom Ra. IEEE Transactions on Image Processing, 2002, 11(1): 10-15
Vector quantization for image compression requires expensive encoding time to find the closest codeword to the input vector. This paper presents a fast algorithm to speed up the closest codeword search process in vector quantization encoding. By using an appropriate topological structure of the codebook, we first derive a condition to eliminate unnecessary matching operations from the search procedure. Then, based on this elimination condition, a fast search algorithm is suggested. Simulation results show that with little preprocessing and memory cost, the proposed search algorithm significantly reduces the encoding complexity while maintaining the same encoding quality as that of the full search algorithm. It is also found that the proposed algorithm outperforms the existing search algorithms.
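The paper's elimination condition is not given in this abstract. A classic instance of the same pattern sorts the codebook by codeword norm and uses the triangle-inequality bound ||x - c|| >= | ||x|| - ||c|| | to skip candidates; the sketch below (all names illustrative) scans outward from the codeword whose norm is nearest to the input's:

```python
import bisect
import numpy as np

def build_codebook_index(codebook):
    """Precompute codeword norms and sort the codebook by them (done once)."""
    norms = np.linalg.norm(codebook, axis=1)
    order = np.argsort(norms)
    return codebook[order], norms[order], order

def closest_codeword(x, sorted_cb, sorted_norms, order):
    """Nearest codeword with norm-based elimination: codewords whose norm
    differs from ||x|| by more than the current best distance are skipped."""
    nx = float(np.linalg.norm(x))
    start = bisect.bisect_left(sorted_norms.tolist(), nx)
    best_i, best_d2 = -1, float("inf")
    lo, hi = start - 1, start
    while lo >= 0 or hi < len(sorted_cb):
        for idx in (lo, hi):
            if 0 <= idx < len(sorted_cb):
                gap = (nx - float(sorted_norms[idx])) ** 2
                if gap < best_d2:          # bound cannot rule this one out
                    d2 = float(((x - sorted_cb[idx]) ** 2).sum())
                    if d2 < best_d2:
                        best_i, best_d2 = int(order[idx]), d2
        # norm gaps grow monotonically outward, so once both frontiers are
        # eliminated, every remaining codeword is eliminated too
        lo_gap = (nx - sorted_norms[lo]) ** 2 if lo >= 0 else float("inf")
        hi_gap = (nx - sorted_norms[hi]) ** 2 if hi < len(sorted_cb) else float("inf")
        if min(lo_gap, hi_gap) >= best_d2:
            break
        lo, hi = lo - 1, hi + 1
    return best_i, best_d2
```

Because the bound never overestimates the true distance, the result is identical to a full search; only the number of distance computations shrinks.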