Similar Documents
1.
An Initial Codebook Algorithm for Vector Quantization
Initial codebook design for vector quantization is important: it influences or even determines the number of iterations of the subsequent codebook-generation algorithm and the quality of the final codebook. To address the strong randomness and poor source matching of existing initial-codebook algorithms, this paper proposes an initial codebook construction algorithm that sorts the training vectors by the sum of their components and then performs split averaging. The algorithm uses feature quantities of the vectors themselves and removes the dependence on image-structure factors, producing initial codebooks with good robustness. Experiments demonstrate the method's effectiveness; combined with the LBG algorithm, it can further improve codebook quality.
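A minimal sketch of the sum-sort/split-average idea described above, assuming the training vectors are sorted by the sum of their components and then partitioned into equal groups whose means become the initial codewords; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def initial_codebook(training_vectors: np.ndarray, codebook_size: int) -> np.ndarray:
    """Initial codebook by component-sum sorting and split averaging (sketch).

    Assumes there are at least `codebook_size` training vectors.
    """
    # Sort training vectors by the sum of their components.
    order = np.argsort(training_vectors.sum(axis=1))
    sorted_vecs = training_vectors[order]
    # Split the sorted sequence into `codebook_size` groups and average each group.
    groups = np.array_split(sorted_vecs, codebook_size)
    return np.stack([g.mean(axis=0) for g in groups])
```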

2.

Vector quantization (VQ) is a very effective way to save bandwidth and storage in speech and image coding. Traditional vector quantization methods can be divided into seven main types according to their codebook generation procedures: tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ. Over the past decade, quantization-based approximate nearest neighbor (ANN) search has developed very rapidly, and many methods have emerged for searching images with binary codes in memory on large-scale datasets. Their most distinctive characteristic is the use of multiple codebooks, which has led to two new kinds of codebook: the linear combination codebook and the joint codebook. This may be a trend for the future. However, these methods merely seek a balance among speed, accuracy, and memory consumption for ANN search, and sometimes one of the three suffers. Finding a vector quantization method that strikes a balance between speed and accuracy while consuming a moderate amount of memory therefore remains an open problem.
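As background for the taxonomy above, a minimal sketch of single-codebook VQ encoding and decoding, showing the table-lookup character of reconstruction that all of the variants surveyed share; all names are illustrative:

```python
import numpy as np

def vq_encode(vectors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each vector (N, d) to the index of its nearest codeword (K, d)."""
    # Pairwise squared Euclidean distances via broadcasting.
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruction is a pure table lookup into the codebook."""
    return codebook[indices]
```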


3.
A secure data hiding scheme for vector-quantization-compressed images is proposed. To reduce the distortion introduced by data embedding, a genetic algorithm is used to optimally partition the codebook, taking the mean squared error between codewords as the optimization criterion, and a data embedding algorithm based on this codebook partition is proposed. A data mapping method based on adaptive arithmetic entropy decoding preserves the statistical characteristics of the stream before and after embedding. Experimental results show that the proposed algorithm achieves good overall performance in capacity, distortion, and security.

4.
Vector quantization has been widely employed in nearest neighbor search because it can approximate the Euclidean distance between two vectors with precomputed table lookups. The additive quantization (AQ) algorithm showed that low approximation error can be achieved by representing each input vector as a sum of dependent codewords, each drawn from its own codebook. However, AQ relies on a computationally expensive beam search to encode each vector, which is prohibitive for efficient approximate nearest neighbor search. In this paper, we propose a fast AQ algorithm that significantly accelerates the encoding phase. We formulate beam search as an optimization of the codebook selection order. Following the optimal order, we learn the codebooks hierarchically, which allows the search width to be set very small. Specifically, codewords are first exchanged into appropriate codebooks according to how frequently they are indexed at each step; the codebooks are then updated successively to fit the quantization residual of the previous level. In the coding phase, vectors are compressed with the learned codebooks in the best order, which considerably reduces the search range. The proposed method achieves almost the same performance as AQ, while the vector encoding phase is accelerated by dozens of times. Experiments on two benchmark datasets verify this conclusion.

5.
This article develops an evolutionary fuzzy particle swarm optimization (FPSO) learning algorithm that automatically extracts a near-optimal vector quantization (VQ) codebook for image compression. The fuzzy particle swarm optimization vector quantization (FPSOVQ) learning scheme combines the advantages of the adaptive fuzzy inference method (FIM), the simple VQ concept, and efficient particle swarm optimization (PSO) to automatically create a near-optimal codebook for image compression. The FIM is a soft-decision method for measuring the relational grade of a given sequence; in our research it is applied to determine the similarity grade between the codebook and the original image patterns. Instead of the widely used Linde–Buzo–Gray (LBG) algorithm, the evolutionary PSO learning algorithm is used to optimize the fuzzy inference system, which extracts appropriate codebooks for compressing several grey-level test images. Comparisons with LBG-based VQ learning on several real image compression examples demonstrate the strong performance of the proposed FPSOVQ scheme.

6.
Vector quantization (VQ) is a powerful technique in digital image compression. Traditional, widely used methods such as the Linde–Buzo–Gray (LBG) algorithm tend to produce only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain near-globally optimal VQ codebooks, and an alternative method, quantum particle swarm optimization (QPSO), was developed to improve on the original PSO algorithm. In this paper, we apply another swarm algorithm, honey bee mating optimization (HBMO), to construct the VQ codebook. The results are compared with three other methods: the LBG, PSO–LBG, and QPSO–LBG algorithms. Experimental results show that the proposed HBMO–LBG algorithm is more reliable and reconstructs images of higher quality than the other three methods.
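All of the swarm-based designs above refine, and are benchmarked against, the LBG baseline. A minimal sketch of the LBG (generalized Lloyd) iteration under Euclidean distortion, with an externally supplied initial codebook; names are illustrative:

```python
import numpy as np

def lbg(training: np.ndarray, codebook: np.ndarray, iters: int = 20) -> np.ndarray:
    """LBG / generalized Lloyd iterations (sketch): assign, then re-center."""
    codebook = codebook.copy()
    for _ in range(iters):
        # Nearest-codeword assignment for every training vector.
        d2 = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Centroid update; keep the old codeword if a cell is empty.
        for k in range(len(codebook)):
            members = training[assign == k]
            if len(members) > 0:
                codebook[k] = members.mean(axis=0)
    return codebook
```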

7.
In this paper an adaptive hierarchical vector quantization algorithm for image coding is proposed. First, the basic codebook is generated adaptively; then the codes are re-coded into higher-level codes by building an index codebook that exploits the redundancy present in the codes. This hierarchical scheme lowers the bit rate significantly while adding little computation and no additional distortion compared with the single-layer adaptive VQ algorithm used to create the basic codebook.

8.
Multistage vector quantization (MSVQ) and its variants have recently been proposed. Before an MSVQ is designed, the user must manually determine the number of codewords in each VQ stage. However, users usually have no idea how many codewords each stage should have, and thus cannot be sure the resulting MSVQ is optimal. This paper proposes a genetic design (GD) algorithm for MSVQ. The GD algorithm automatically finds the number of codewords that optimizes each VQ stage according to rate–distortion performance; the resulting design is denoted MSVQ(GD). Furthermore, a sharing codebook (SC) can further reduce the storage size of MSVQ: merging similar codewords across the VQ stages produces the codewords of the sharing codebook. This paper proposes a genetic merge (GM) algorithm to design the SC of MSVQ. The resulting constrained-storage MSVQ with a sharing codebook, namely CSMSVQ, outperforms other MSVQs in the experiments presented here.
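A minimal sketch of the stage-by-stage encoding that any MSVQ performs, assuming each stage quantizes the residual left by the previous stage; the GD/GM design of stage sizes and the sharing codebook are not reproduced here, and all names are illustrative:

```python
import numpy as np

def msvq_encode(x: np.ndarray, stage_codebooks: list) -> list:
    """Encode one vector as a sequence of per-stage codeword indices."""
    residual = x.copy()
    indices = []
    for codebook in stage_codebooks:   # one np.ndarray of shape (K_s, d) per stage
        # Quantize the current residual with this stage's codebook.
        d2 = ((codebook - residual) ** 2).sum(axis=1)
        k = int(d2.argmin())
        indices.append(k)
        # The next stage sees what this stage failed to capture.
        residual = residual - codebook[k]
    return indices
```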

9.
An Image Compression Method Based on the Wavelet Transform
A grey-level image compression and coding method based on the wavelet transform is proposed. The basic idea is to use the wavelet transform to obtain a multiresolution decomposition of the image and then encode the decomposed subimages with vector quantization (VQ). For the selection of the initial codebook in the LBG algorithm, an improved random selection method based on the characteristics of the vector components is proposed; it avoids possible uneven cell populations, improves codebook quality, and also improves the quality of the reconstructed image.

10.
Adaptive Compensated Vector Quantization
A new image vector quantization algorithm based on LBG codebook design is proposed. Exploiting the energy compaction of image signals in an orthogonal vector space, the algorithm effectively reduces the codebook search range and speeds up vector quantization. At the same time, adaptive compensation using the difference between the original and reconstructed images effectively overcomes the fatal drawback of vector quantization, namely severe blocking artifacts in the reconstructed image, while maintaining a high compression ratio.

11.
In this paper, we present an approach that efficiently hides sensitive data in vector quantization (VQ) indices and reversibly extracts it from the encrypted code stream. The approach uses two patterns to compress VQ indices: when an index equals its upper neighbor's or left neighbor's index, it is encoded by the corresponding equivalence code; otherwise, it is encoded by a modified VQ codebook mapping called hierarchical state codebook mapping (HSCM). In the proposed scheme, HSCM is the main coding pattern and is generated according to the side-match distortion (SMD) method. Together, these two patterns reduce the size of the original code stream, so more space can be used to embed sensitive data. Experimental results indicate that the proposed scheme achieves a higher embedding capacity than previous state-of-the-art VQ-index-based data hiding methods.

12.
《Parallel Computing》2002,28(7-8):1079-1093
Vector quantization (VQ) is a widely used algorithm in speech and image data compression. One problem with the VQ methodology is that it requires a large computation time, especially for large codebook sizes. This paper addresses two issues. The first is the parallel construction of the VQ codebook, which can drastically reduce training time: a master/worker parallel implementation of a VQ algorithm is proposed and executed on the DM-MIMD Alex AVX-2 machine using a pipeline architecture. The second is the ability to accurately predict machine performance: using communication and computation models, expected and measured performance are compared. Results show that the two models accurately predict the machine's performance for image data compression. An analysis of the metrics normally used in parallel realizations is also conducted.

13.
Recently, vector quantization (VQ) has received considerable attention and has become an effective tool for image compression, providing a high compression ratio and a simple decoding process. However, studies of practical VQ implementations have revealed some major difficulties, such as preserving edge integrity and designing codebooks efficiently. After reviewing the state of the art in vector quantization, we focus on iterative and non-iterative codebook generation algorithms.

14.
Vector quantization (VQ) for image compression spends expensive time finding the closest codevector during encoding. In this paper, a fast search algorithm is proposed for projection pyramid vector quantization, using a lighter modified distortion based on the Hadamard transform of the vector. The algorithm uses projection pyramids of the vectors and codevectors after the Hadamard transform, together with an elimination criterion based on deviation characteristic values in the Hadamard transform domain, to reject unlikely codevectors. Experimental results on image block data confirm the effectiveness of the proposed algorithm, which delivers the same image quality as the full search algorithm.
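The paper's specific Hadamard-domain deviation criterion is not reproduced here; as a generic illustration of how elimination-based fast search works, the following sketch uses partial distance elimination, a standard technique that abandons a codevector as soon as its accumulated distance exceeds the best full distance found so far:

```python
import numpy as np

def fast_nearest(x: np.ndarray, codebook: np.ndarray) -> int:
    """Nearest codeword with partial distance elimination (generic sketch)."""
    best_k, best_d2 = 0, float("inf")
    for k, c in enumerate(codebook):
        d2 = 0.0
        for xi, ci in zip(x, c):
            d2 += (xi - ci) ** 2
            if d2 >= best_d2:   # already worse than the best: eliminate early
                break
        else:                    # loop completed, so d2 < best_d2 throughout
            best_k, best_d2 = k, d2
    return best_k
```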

15.
To enhance a traditional vector quantisation (VQ) system with watermarking ability, a digital image watermarking scheme that modifies VQ indices to carry watermark bits is presented. The scheme partitions the main codebook into two sub-codebooks according to a user key. Then, for each input vector of the cover image, a sub-codebook is selected according to the watermark bit to be embedded, and the traditional VQ coding procedure is carried out within that sub-codebook. Furthermore, to improve performance, a genetic codebook partition (GCP) procedure, which employs a genetic algorithm (GA) to find a better codebook split, is proposed. The proposed methods are shown to provide faster encoding, better imperceptibility, stronger robustness against common attacks, and easier implementation than related VQ-based watermarking schemes in the literature.
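A minimal sketch of the key-driven partition-and-embed step described above, assuming a pseudorandom split of the codebook into two halves (the paper's GCP procedure would replace this random split with a GA-optimized one); all names are illustrative:

```python
import numpy as np

def split_codebook(codebook: np.ndarray, user_key: int):
    """Partition the main codebook into two sub-codebooks using the user key."""
    rng = np.random.default_rng(user_key)
    perm = rng.permutation(len(codebook))
    half = len(codebook) // 2
    return perm[:half], perm[half:]        # index sets of the two sub-codebooks

def embed_bit(x: np.ndarray, codebook: np.ndarray, subs, bit: int) -> int:
    """Quantize x inside the sub-codebook selected by the watermark bit."""
    candidates = subs[bit]
    d2 = ((codebook[candidates] - x) ** 2).sum(axis=1)
    return int(candidates[d2.argmin()])    # a main-codebook index carrying `bit`
```

The decoder, holding the same key, recovers each watermark bit simply by checking which sub-codebook the received index belongs to.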

16.

In this paper, we propose a reversible data hiding scheme that exploits the centroid formula. Specifically, we use it to define a centroid boundary vector and a centroid state codebook (CSCB). Initially, our centroid boundary vectors and CSCBs are identical to the boundary vectors and state codebooks (SCBs) of the side match vector quantization (SMVQ) algorithm. For each VQ index, the proposed scheme uses the centroid formula to update its centroid boundary vector and the corresponding CSCB. The update is coupled with a heuristic that selects the best state codebook (SCB or CSCB) for each VQ index, which generates a highly compressible distribution of index values. Our experimental results show that the proposed scheme can embed n = 1, 2, 3, and 4 bits per index (bpi) at bit rates of 0.332, 0.394, 0.457, and 0.519 bits per pixel (bpp), respectively, for a main codebook of size N = 256. These results confirm that the proposed scheme improves on recent VQ- and SMVQ-based reversible data hiding schemes.


17.
A Tabu-Search-Based Fuzzy Learning Vector Quantization Algorithm for Image Coding
Although the fuzzy learning vector quantization algorithm (FLVQ) removes the dependence of hard competitive learning on the initial codebook, it converges slowly and can still become trapped in local minima. Based on an analysis of the principles of FLVQ image coding, this paper explores several ways of optimizing the FLVQ algorithm and then proposes a new Tabu-search-based fuzzy learning vector quantization algorithm (TS-FLVQ), together with its concrete implementation steps. The algorithm first uses Tabu search (TS) to generate a candidate list oriented toward global search, and then performs fuzzy learning to obtain the optimal solution. Experimental results show that TS-FLVQ converges considerably faster and codes more effectively than FLVQ.

18.
This paper proposes two co-adaptation schemes for self-organizing maps that incorporate Kohonen learning into GA evolution in an attempt to find an optimal vector quantization codebook for images. The Kohonen learning rule used for vector quantization of images is sensitive to the choice of its initial parameters, and the resulting codebook does not guarantee minimum distortion. To tackle these problems, we co-adapt the codebooks by evolution and learning: evolution performs the global search and makes inter-codebook adjustments by altering codebook structures, while learning performs the local search and makes intra-codebook adjustments that reduce each codebook's distortion. Two kinds of co-adaptation schemes, Lamarckian and Baldwinian, are considered in our work. Simulation results show that evolution guided by local learning converges quickly, that the co-adapted codebook produces better reconstructed image quality than its non-learned equivalent, and that Lamarckian co-adaptation is more appropriate for the VQ problem.

19.
An unsupervised competitive neural network for efficient clustering of Gaussian probability density function (GPDF) data in continuous-density hidden Markov models (CDHMMs) is proposed in this paper. The proposed network, called the divergence-based centroid neural network (DCNN), employs a divergence measure as its distance measure and exploits the statistical characteristics of the observation densities in HMMs for speech recognition problems. While conventional clustering algorithms used for vector quantization (VQ) codebook design use only the means of the observation densities in the HMM, the DCNN uses both the means and the covariances. Compared with other conventional unsupervised neural networks, the DCNN successfully allocates more code vectors to regions where GPDF data are densely distributed and fewer to regions where they are sparse. Applied to Korean monophone recognition problems as a tool for reducing codebook size, the DCNN reduced the number of GPDFs used for code vectors by 65.3% while preserving recognition accuracy. Experimental results with a divergence-based k-means algorithm and a divergence-based self-organizing map are also presented for performance comparison.
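The abstract does not spell out its divergence measure; for Gaussian densities, a standard choice that uses both means and covariances, and which a divergence-based distance plausibly instantiates here, is the symmetric Kullback–Leibler divergence between two d-dimensional Gaussians \(p=\mathcal{N}(\mu_p,\Sigma_p)\) and \(q=\mathcal{N}(\mu_q,\Sigma_q)\):

\[
D(p,q) \;=\; \tfrac{1}{2}\operatorname{tr}\!\left(\Sigma_q^{-1}\Sigma_p + \Sigma_p^{-1}\Sigma_q - 2I\right)
\;+\; \tfrac{1}{2}\,(\mu_p-\mu_q)^{\top}\!\left(\Sigma_p^{-1}+\Sigma_q^{-1}\right)(\mu_p-\mu_q).
\]

With diagonal covariances, as is typical for CDHMM observation densities, both terms reduce to sums over the d components, so the measure is as cheap to evaluate as a weighted Euclidean distance.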
