Similar Documents
 (20 results)
1.
In this paper, we present a fast codebook re-quantization algorithm (FCRA) that uses the codewords of the codebook being re-quantized as the training vectors for generating the re-quantized codebook. Our method differs from the existing approach, which uses the original training set to generate the re-quantized codebook. Because the number of codewords in the codebook being re-quantized is usually much smaller than the number of original training vectors, our method reduces the computing time dramatically. It first classifies the codewords of the re-quantized codebook into static and active groups, and uses the codeword displacements between successive partitions to reject impossible candidates in the partition process of codebook re-quantization. By embedding a fast search algorithm designed for vector quantization encoding (MFAUPI) in the partition step of FCRA, the computational complexity of codebook re-quantization is reduced further: with MFAUPI, the computing time of FCRA drops by a factor of 1.55–3.78. Compared with the existing approach OARC (optimization algorithm for re-quantization codebook), our method reduces the codebook re-quantization time by a factor of about 8005 on a training set of six real images, and this factor grows as the re-quantized codebook size and/or the training set size increases. Our algorithm generates the same re-quantized codebook as that produced by OARC.
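The core idea reads easily as code. The sketch below is a minimal illustration, not the authors' full FCRA (which adds the static/active grouping and displacement-based rejection): it re-quantizes a codebook by running an ordinary LBG/k-means partition over the codewords of the larger codebook instead of over the original training set. The function names and the random initialization are assumptions.

```python
import numpy as np

def requantize_codebook(big_codebook, small_size, iters=20, seed=0):
    """Generate a smaller codebook by clustering the codewords of a larger one.

    big_codebook : (K, d) array of codewords to be re-quantized.
    small_size   : number of codewords in the re-quantized codebook.
    """
    rng = np.random.default_rng(seed)
    # Use the K codewords themselves as the training set (K << original training-set size).
    train = big_codebook
    # Initialize the small codebook with randomly chosen codewords.
    small = train[rng.choice(len(train), small_size, replace=False)].copy()
    for _ in range(iters):
        # Partition step: assign every "training vector" (codeword) to its nearest centroid.
        d2 = ((train[:, None, :] - small[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Update step: each centroid becomes the mean of the codewords mapped to it.
        for j in range(small_size):
            members = train[assign == j]
            if len(members):
                small[j] = members.mean(axis=0)
    return small, assign
```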

2.
In this paper, we present a fast codebook generation algorithm called CGAUCD (Codebook Generation Algorithm Using Codeword Displacement), which exploits the codeword displacement between successive partition processes. By embedding a fast search algorithm named MFAUPI (Modified Fast Algorithm Using Projection and Inequality) for VQ encoding in the partition step of CGAUCD, the codebook generation time is reduced further: with MFAUPI, the computing time of CGAUCD drops by a factor of 4.7–7.6. Compared to the Generalized Lloyd Algorithm (GLA), our method reduces the codebook generation time by a factor of 35.9–121.2, and compared to the best codebook generation algorithm known to us, it reduces the corresponding computing time by a further 26.0–32.8%. Our algorithm generates the same codebook as that produced by the GLA, and its advantage becomes more pronounced as larger codebooks are generated.
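As a rough illustration of how codeword displacement can prune the partition step, the sketch below applies a simple triangle-inequality rejection rule: if a training vector's stored best and second-best distances, adjusted by how far the codewords moved since the previous iteration, cannot change its assignment, the full nearest-codeword search is skipped. This is a simplified rule in the spirit of the abstract, not the exact CGAUCD/MFAUPI tests (MFAUPI's projection inequalities are not reproduced); all names are hypothetical.

```python
import numpy as np

def partition_with_displacement(train, codebook, prev_codebook,
                                prev_assign, prev_best, prev_second):
    """One partition step that skips vectors whose assignment provably cannot change.

    Distances are Euclidean (not squared) so the triangle inequality applies.
    prev_best / prev_second hold each vector's best and second-best distances
    from the previous iteration.
    """
    disp = np.linalg.norm(codebook - prev_codebook, axis=1)   # codeword displacements
    max_disp = disp.max()
    assign = prev_assign.copy()
    best = np.empty(len(train))
    second = np.empty(len(train))
    for i, x in enumerate(train):
        j = prev_assign[i]
        # Upper bound on the new distance to the currently assigned codeword versus a
        # lower bound on the new distance to every other codeword; no overlap => skip.
        if prev_best[i] + disp[j] <= prev_second[i] - max_disp:
            best[i] = np.linalg.norm(x - codebook[j])          # refresh the exact distance
            second[i] = prev_second[i] - max_disp              # keep a valid (looser) bound
            continue
        d = np.linalg.norm(codebook - x, axis=1)               # full search otherwise
        order = np.argsort(d)
        assign[i], best[i], second[i] = order[0], d[order[0]], d[order[1]]
    return assign, best, second
```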

3.
Vector Quantization Based on the Self-Organizing Feature Map Neural Network (cited 7 times)
In recent years, many researchers have successfully applied Kohonen's self-organizing feature map (SOFM) neural network to vector quantization (VQ) for image compression coding. Compared with the traditional LBG algorithm, SOFM-based algorithms have two main drawbacks: heavy computation and relatively poor codebook quality. To improve codebook quality, the weight-adjustment rule of the basic SOFM algorithm is modified; to reduce the computational load, a fast search algorithm is adopted when determining the winning neuron. The improved algorithm is then applied to VQ codebook design, and the resulting codebook is used for image coding.

4.
Vector quantization is widely used in nearest neighbor search because it approximates the Euclidean distance between two vectors with table look-ups that can be precomputed. The additive quantization (AQ) algorithm showed that low approximation error can be achieved by representing each input vector as a sum of dependent codewords, each drawn from its own codebook. However, AQ relies on a computationally expensive beam search to encode each vector, which is prohibitive for efficient approximate nearest neighbor search. In this paper, we propose a fast AQ algorithm that significantly accelerates the encoding phase. We formulate the beam search as an optimization over codebook selection orders. Following the optimal order, we learn the codebooks with a hierarchical construction in which the search width can be kept very small: at each step, codewords are first exchanged into the proper codebooks according to their indexing frequency, and the codebooks are then updated successively to adapt to the quantization residual of the previous level. In the coding phase, vectors are compressed with the learned codebooks via the best order, so the search range is considerably reduced. The proposed method achieves almost the same performance as AQ while accelerating the vector encoding phase by dozens of times. Experiments on two benchmark datasets verify this conclusion.
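For orientation, the sketch below shows the greedy, order-dependent encoding that such methods accelerate: one codeword is chosen per codebook, in a fixed order, against the running residual, with the search width collapsed to 1. It is a simplified stand-in, not the paper's algorithm (which learns the order and exchanges codewords between codebooks); names are hypothetical.

```python
import numpy as np

def greedy_additive_encode(x, codebooks):
    """Encode x as a sum of one codeword per codebook, chosen greedily in a fixed order.

    codebooks : list of (K, d) arrays, already arranged in the order they are searched.
    Returns the per-codebook indices and the final quantization residual.
    """
    residual = x.copy()
    codes = []
    for C in codebooks:
        d2 = ((C - residual) ** 2).sum(axis=1)   # distance of each codeword to the residual
        k = int(d2.argmin())
        codes.append(k)
        residual = residual - C[k]               # the next codebook quantizes what is left
    return codes, residual
```

In this degenerate width-1 form the scheme coincides with residual vector quantization; the abstract's point is that a learned codebook order lets a very small search width approach full AQ quality.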

5.
In this paper, we develop a method to lower the computational complexity of the pairwise nearest neighbor (PNN) algorithm. Our approach determines the set of candidate clusters that must be updated after each cluster merge; when the update is required for some of these clusters, their k-nearest neighbors are found. The number of distance calculations for our method is O(N^2), where N is the number of data points. To further reduce the computational complexity of the proposed algorithm, several available fast search approaches are used. Compared to existing approaches, our algorithm reduces the computing time and the number of distance calculations significantly: compared to FPNN, it reduces the computing time by a factor of about 26.8 on a data set from a real image, and compared with PMLFPNN, by a factor of about 3.8 on the same data set.
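For context, the quantity PNN-type methods evaluate at each step is the standard merge cost between two clusters, shown below together with the brute-force nearest-cluster search whose repeated recomputation the candidate-cluster bookkeeping is meant to avoid. This is a generic sketch, not the paper's update rule; names are hypothetical.

```python
import numpy as np

def pnn_merge_cost(centroid_a, n_a, centroid_b, n_b):
    """Increase in total squared-error distortion if clusters a and b are merged."""
    diff = centroid_a - centroid_b
    return (n_a * n_b) / (n_a + n_b) * float(diff @ diff)

def nearest_neighbor_cluster(idx, centroids, sizes):
    """Brute-force nearest cluster of cluster `idx` under the PNN merge cost."""
    best_j, best_cost = -1, np.inf
    for j in range(len(centroids)):
        if j == idx:
            continue
        c = pnn_merge_cost(centroids[idx], sizes[idx], centroids[j], sizes[j])
        if c < best_cost:
            best_j, best_cost = j, c
    return best_j, best_cost
```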

6.
This paper discusses algorithms that apply neural-network techniques to vector quantization in speech coding. Neural-network vector quantization can reduce the codebook dimension and speed up the codebook search, thereby improving the effectiveness of vector quantization. Applying this optimized vector quantization to speech coding lowers the computational complexity and improves coding quality.

7.
Codebook generation is one of the key steps in vector-quantization-based volume-rendering compression, and the initial codebook has a large influence on the generation algorithm. Existing initialization methods require many iterations over the massive original data, with the data repeatedly transferred between disk, main memory, and the GPU, so they are inefficient. To address initial-codebook extraction, this paper proposes an initial-codebook generation algorithm based on a data-stream clustering strategy: the massive 3D volume is treated as a data stream and processed in chunks, a local codebook is built for each chunk, and all local codebooks are then clustered to form the final initial codebook. This greatly reduces the number of data reads and transfers while making full use of the GPU's parallel computing power. Simulation results show that the proposed method improves both efficiency and quality.
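A minimal sketch of the chunk-and-merge strategy described above is given below; the GPU parallelism and the volume-rendering specifics are omitted, and the function names and the `lbg` training routine are assumptions.

```python
import numpy as np

def streaming_initial_codebook(chunks, local_size, final_size, lbg):
    """Build an initial codebook from a huge volume without repeatedly rescanning it.

    chunks     : iterable yielding (n_i, d) blocks of training vectors (one pass over the data).
    local_size : codebook size trained per chunk.
    final_size : size of the initial codebook returned.
    lbg        : any codebook-training routine, e.g. lbg(vectors, size) -> (size, d) codebook.
    """
    local_codewords = []
    for block in chunks:                       # each block is processed once, then discarded
        local_codewords.append(lbg(block, local_size))
    pooled = np.vstack(local_codewords)        # all local codebooks together
    return lbg(pooled, final_size)             # cluster the local codewords into the initial codebook
```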

8.
The index map obtained after vector quantization of an image has strong statistical correlation, so the indices of neighboring blocks are equal, or differ by a small offset, with high probability. Sorting the codebook by an appropriate criterion can effectively strengthen this correlation between indices. This paper proposes a new codebook ordering method based on squared Euclidean distance. Compared with traditional orderings by mean, variance, or energy, distance ordering greatly increases the correlation of the index map and concentrates the index offsets toward small values. Applying the distance-sorted codebook to the AICS (adaptive index coding scheme) algorithm achieves better compression performance.

9.
Improved Fractal Vector Quantization Coding (cited once)
To improve the performance of fractal vector quantization coding of images, a new still-image fractal vector quantization coding method is proposed. On the basis of adaptive quadtree segmentation of the image, a non-planar approximation of image blocks is introduced, based on a projection criterion onto three orthogonal basis components. The method first builds a coarse image by DPCM-coding the projection parameters, and then uses it to construct the codebook for coding the residual image. Because fractal coding and vector quantization are combined, decoding only requires codebook look-ups and a contrast transform. Coding and decoding experiments show that the method needs no externally trained codebook and no iteration at the decoder, and that it clearly improves both the compression ratio and the reconstructed image quality (PSNR) compared with other coders of the same type.

10.
In this paper, a new multi-stage vector quantization with an energy-clustered training set is proposed for color image coding. An orthogonal-polynomials-based transformation is applied to the input image, and energy-clustered transformed training vectors of reduced dimension are obtained. The stage-by-stage codebooks for vector quantization are constructed from these transformed training vectors so as to reduce computational complexity. The method also generates a single codebook for all three color components, exploiting the correlation within each color plane and the interactions among the planes captured by the proposed transformation. As a result, the color image encoding time is only slightly higher than the corresponding gray-scale coding time, in contrast to existing color image coding techniques, whose encoding time is roughly three times that of gray-scale coding. Experimental results reveal that only 35% and 10% of the transform coefficients are sufficient for smaller and larger blocks, respectively, to reconstruct images with good quality. The proposed multi-stage vector quantization technique is faster than existing techniques and yields a better trade-off between image quality and block size for encoding.
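The stage-by-stage construction can be sketched as a generic multi-stage (residual) VQ training loop, shown below. The paper's orthogonal-polynomial transformation and energy clustering are not reproduced; the names and the `lbg` routine are assumptions.

```python
import numpy as np

def train_multistage_codebooks(train, stage_sizes, lbg):
    """Train one codebook per stage, each on the residuals left by the previous stage.

    train       : (N, d) training vectors (e.g. transformed image blocks).
    stage_sizes : list of codebook sizes, one per stage.
    lbg         : codebook-training routine, lbg(vectors, size) -> (size, d) codebook.
    """
    residual = train.copy()
    codebooks = []
    for size in stage_sizes:
        C = lbg(residual, size)
        codebooks.append(C)
        # Quantize with this stage's codebook and pass the residual on to the next stage.
        d2 = ((residual[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        residual = residual - C[d2.argmin(axis=1)]
    return codebooks
```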

11.
In this paper, we present a fast global k-means clustering algorithm, referred to as MFGKM, that makes use of cluster membership and the geometrical information of each data point. The algorithm uses a set of inequalities developed in this paper to determine a starting point for the jth cluster center of global k-means clustering. Adopting multiple cluster center selection (MCS) for MFGKM, we also develop another clustering algorithm called MFGKM+MCS. MCS determines more than one starting point for each cluster split, whereas the available fast and modified global k-means clustering algorithms select only one. Our proposed MFGKM obtains the least distortion, while MFGKM+MCS may give the least computing time. Compared to the modified global k-means clustering algorithm, MFGKM reduces the computing time and the number of distance calculations by factors of 3.78–5.55 and 21.13–31.41, respectively, with an average distortion reduction of 5,487 on the Statlog data set. Compared to the fast global k-means clustering algorithm, MFGKM+MCS reduces the computing time by a factor of 5.78–8.70 with an average distortion reduction of 30,564 on the same data set. The advantages of our methods are more remarkable when a higher-dimensional data set is divided into more clusters.

12.
A New Vector Quantization Coding Algorithm (cited once)
Vector quantization is a very effective method for low-bit-rate image compression, but a key problem of the basic approach is its long encoding time, especially for high-dimensional vectors or large codebooks. This paper proposes a fast encoding algorithm based on a 1/2 L2-norm pyramid data structure. It markedly speeds up the encoding process and reduces the memory actually required, with the gains being more pronounced for high-dimensional vectors and large codebooks, while delivering the same coding quality as full search.
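The flavor of such pyramid-based rejection can be sketched with a plain sum pyramid and the Cauchy-Schwarz lower bound it yields; the paper's specific 1/2 L2-norm pyramid is not reproduced, the vector dimension is assumed divisible by the number of blocks, and all names are hypothetical. Because rejected codewords are provably not the nearest, the result matches full search.

```python
import numpy as np

def pyramid_sums(v, levels):
    """Partial sums of v over progressively finer blocks (a simple sum pyramid).

    Level l splits v into 2**l blocks; len(v) must be divisible by 2**(levels-1).
    """
    return [v.reshape(2 ** level, -1).sum(axis=1) for level in range(levels)]

def pyramid_lower_bound(px, py, dim):
    """Lower bound on the squared Euclidean distance from one pyramid level.

    By Cauchy-Schwarz, sum_g (S_g(x) - S_g(y))**2 / |g| <= ||x - y||**2 for any partition g.
    """
    blocks = len(px)
    return float(((px - py) ** 2).sum()) * blocks / dim

def fast_search(x, codebook, pyramids, levels=3):
    """Full-search-equivalent nearest codeword with coarse-to-fine pyramid rejection.

    pyramids[j] should be the precomputed pyramid_sums(codebook[j], levels).
    """
    dim = len(x)
    xp = pyramid_sums(x, levels)
    best_j, best_d2 = -1, np.inf
    for j, c in enumerate(codebook):
        rejected = False
        for level in range(levels):                       # bounds get tighter level by level
            if pyramid_lower_bound(xp[level], pyramids[j][level], dim) >= best_d2:
                rejected = True
                break
        if rejected:
            continue
        d2 = float(((x - c) ** 2).sum())                  # full distance only if not rejected
        if d2 < best_d2:
            best_j, best_d2 = j, d2
    return best_j, best_d2
```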

13.
Pairwise nearest neighbor (PNN) is an effective data clustering method that consistently produces good clustering results, but at high computational cost. The fast exact PNN (FPNN) algorithm proposed by Fränti et al. speeds up PNN while generating exactly the same clustering results. In this paper, we present a novel method to improve the FPNN algorithm. Our algorithm uses the property that the cluster distance increases as the cluster merge process proceeds, and adopts a fast search algorithm to reject impossible candidate clusters. Experimental results show that the proposed method effectively reduces the number of distance calculations and the computation time of FPNN: compared with FPNN, it reduces the computation time and the number of distance calculations by factors of 24.8 and 146.4, respectively, on a data set drawn from three real images. Our method generates the same clustering results as those produced by PNN and FPNN.

14.
A Directionality-Based Classified VQ Coding Algorithm (cited 2 times)
This paper proposes a method for generating classified codebooks based on the directionality of image blocks, together with a matching classified coding algorithm. Experimental results show that coding images with the classified codebooks and the classified coding algorithm greatly increases coding speed: with only a 1.8% drop in PSNR, the coding speed improves by 38.4% on average and by up to 45.8%.

15.
An Image Compression Method Based on the Wavelet Transform (cited 8 times)
A wavelet-based compression coding method for grayscale image data is proposed. The basic idea is to use the wavelet transform for a multiresolution decomposition of the image and then encode the decomposed image with vector quantization (VQ). For the selection of the initial codebook in the LBG algorithm, an improved random selection method based on the characteristics of the vector components is proposed, which avoids possible unevenly populated cells, improves codebook quality, and also improves the quality of the reconstructed image.

16.
Analysis of the Effect of Codebook Ordering on the Performance of Fast Codeword Search Algorithms (cited once)
In fast codeword search algorithms for vector quantization, the original codebook must be reordered according to some criterion in order to effectively narrow the search range. This paper summarizes the two existing classes of fast codeword search algorithms: those that sort the codebook by a one-dimensional ordering relation, and those that sort it by a two-dimensional adjacency relation. Experiments comparing the search range and coding time of the two classes are reported and analyzed, and guidelines are given for how to better use the two ordering relations in practical coding.

17.
Before coding, the projections of all codewords onto the principal axis are computed and the codewords are sorted by these projection values in ascending order. During coding, the strong correlation between neighboring image blocks and the projection of the current input vector onto the principal axis are used jointly to determine the codeword search range. Experimental results show that, compared with conventional exhaustive-search vector quantization, the coding quality drops only slightly while the coding speed and compression efficiency improve markedly.
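A minimal sketch of this kind of projection-sorted, window-limited search is given below. The window size and the exact way the previous block's index and the projection are combined are assumptions, not the paper's rule, and unlike full search the result is only approximately nearest.

```python
import numpy as np

def sort_codebook_by_projection(codebook):
    """Sort codewords by their projection onto the principal axis of the codebook."""
    mean = codebook.mean(axis=0)
    # Principal axis = dominant right singular vector of the centered codewords.
    _, _, vt = np.linalg.svd(codebook - mean, full_matrices=False)
    axis = vt[0]
    proj = (codebook - mean) @ axis
    order = np.argsort(proj)
    return codebook[order], np.sort(proj), axis, mean

def encode_block(x, sorted_codebook, sorted_proj, axis, mean, prev_index=None, window=16):
    """Search only a window of codewords around the input's projection, plus a window
    around the previous block's codeword (neighboring blocks are highly correlated)."""
    p = (x - mean) @ axis
    center = int(np.searchsorted(sorted_proj, p))
    lo, hi = max(0, center - window), min(len(sorted_codebook), center + window)
    candidates = set(range(lo, hi))
    if prev_index is not None:
        candidates.update(range(max(0, prev_index - window),
                                min(len(sorted_codebook), prev_index + window)))
    cand = np.fromiter(candidates, dtype=int)
    d2 = ((sorted_codebook[cand] - x) ** 2).sum(axis=1)
    return int(cand[int(d2.argmin())])
```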

18.
We describe an implementation of a vector quantization codebook design algorithm based on the frequency-sensitive competitive learning artificial neural network. The implementation, designed for use on high-performance computers, employs both multitasking and vectorization techniques. A C version of the algorithm tested on a CRAY Y-MP8/864 is discussed. We show how the implementation can be used to perform vector quantization and demonstrate its use in compressing digital video image data. Two images, with codebooks of various sizes, are used to test the performance of the implementation. The results show that the supercomputer techniques employed significantly decrease the total execution time without affecting vector quantization performance. This work was supported by a Cray University Research Award and by NASA Lewis research grant number NAG3-1164.
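For reference, a serial sketch of frequency-sensitive competitive learning is shown below, using the common form in which the winner minimizes the distortion scaled by its win count. The CRAY multitasking and vectorization that are the subject of the paper are not reproduced; the learning-rate schedule and all names are assumptions.

```python
import numpy as np

def fscl_codebook(train, codebook_size, epochs=5, lr=0.05, seed=0):
    """Frequency-sensitive competitive learning for VQ codebook design (serial sketch)."""
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), codebook_size, replace=False)].copy()
    wins = np.ones(codebook_size)                      # win counts ("frequency sensitivity")
    for _ in range(epochs):
        for x in train[rng.permutation(len(train))]:
            d2 = ((codebook - x) ** 2).sum(axis=1)
            j = int((wins * d2).argmin())              # distortion scaled by win count
            wins[j] += 1
            codebook[j] += lr * (x - codebook[j])      # move the winner toward the input
    return codebook
```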

19.
To address the distortion that arises when vector quantization is used for speaker recognition, and drawing on the articulatory characteristics of Mandarin speech, this paper proposes combining vector quantization with clustering of the speech features: the feature vectors are clustered and screened before the VQ codebook is trained. Experimental results show that, for 4 s test utterances and a recognition rate of about 95%, the conventional VQ method needs a codebook of 64 codewords, whereas the proposed method needs only 8, a reduction by a factor of 8. The method thus alleviates, to some extent, the distortion caused by insufficient training samples, and achieves good recognition results with far fewer codewords, improving recognition efficiency.

20.
A Fast Vector Coding Algorithm Based on a DCT-Subspace Distortion Measure (cited once)
周汀, 章倩苓. 《计算机学报》, 1997, 20(5): 421-426
In this paper, we present a fast vector coding algorithm based on a distortion measure defined in a discrete cosine transform (DCT) subspace. Using a DCT subspace mapping, the dimensionality of the distortion measure is reduced from 16 to 4, which cuts the computational complexity of encoding to one quarter; combining this with the partial-distortion algorithm further reduces the encoding complexity.
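A rough sketch of the idea: measure distortion in a low-dimensional DCT subspace of each 4x4 block (4 coefficients instead of 16) and add a partial-distortion early exit. Which four coefficients are kept, and all names, are assumptions; this is not the authors' exact algorithm.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)
    return m

D4 = dct_matrix(4)

def subspace_features(block4x4):
    """Map a 4x4 block (16-D) to 4 low-frequency DCT coefficients (choice of the
    four kept coefficients is an assumption)."""
    coeffs = D4 @ block4x4 @ D4.T
    return np.array([coeffs[0, 0], coeffs[0, 1], coeffs[1, 0], coeffs[1, 1]])

def encode_block(block4x4, codebook_features):
    """Nearest codeword under the 4-D subspace distortion, with partial-distortion early exit.

    codebook_features : (K, 4) array of subspace_features() precomputed for each codeword.
    """
    f = subspace_features(block4x4)
    best_j, best_d = -1, np.inf
    for j, cf in enumerate(codebook_features):
        d = 0.0
        for fi, ci in zip(f, cf):                   # partial distortion: stop once best is exceeded
            d += (fi - ci) ** 2
            if d >= best_d:
                break
        else:
            best_j, best_d = j, d
    return best_j, best_d
```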
