Similar Documents
20 similar documents found (search time: 31 ms)
1.
This article develops an evolutionary fuzzy particle swarm optimization (FPSO) learning algorithm that automatically extracts a near-optimal vector quantization (VQ) codebook for image compression. The fuzzy particle swarm optimization vector quantization (FPSOVQ) learning scheme combines the advantages of the adaptive fuzzy inference method (FIM), the simple VQ concept, and the efficient particle swarm optimization (PSO) to automatically create a near-optimal codebook. The FIM is a soft-decision method for measuring the relational grade of a given sequence; in our research it is applied to determine the degree of similarity between the codebook and the original image patterns. Instead of the popular Linde–Buzo–Gray (LBG) algorithm, the evolutionary PSO learning algorithm is used to optimize the fuzzy inference system, which extracts appropriate codebooks for compressing several grey-level test images. The proposed FPSOVQ learning scheme is compared with LBG-based VQ on several real image compression examples to demonstrate its effectiveness.

2.
An image compression method based on the wavelet transform   Cited by: 8 (self-citations: 0, by others: 8)
This paper proposes a grey-level image compression coding method based on the wavelet transform. The basic idea is to use the wavelet transform for a multiresolution decomposition of the image and then to encode the decomposed image with vector quantization (VQ). For the selection of the initial codebook in the LBG algorithm, an improved random selection method based on the characteristics of the vector components is proposed, which avoids possible uneven cell populations, improves the quality of the codebook, and also improves the quality of the reconstructed image.
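Several of the abstracts in this list use the LBG (generalized Lloyd) algorithm as their baseline codebook-design method. As a point of reference, here is a minimal NumPy sketch of LBG with binary splitting; function names, the toy data, and parameter values are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def lloyd(data, codebook, iters=20):
    """Lloyd refinement: nearest-codeword assignment, then centroid update."""
    for _ in range(iters):
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(len(codebook)):
            members = data[labels == j]
            if len(members):               # keep old codeword for empty cells
                codebook[j] = members.mean(0)
    return codebook

def lbg(data, k, eps=1e-3):
    """LBG with binary splitting (k a power of two): start from the global
    centroid and repeatedly double the codebook by small perturbations."""
    codebook = data.mean(0, keepdims=True).astype(float)
    while len(codebook) < k:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        codebook = lloyd(data, codebook)
    return codebook

# toy "training vectors": two tight clusters around (0,0) and (5,5)
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
cb = lbg(data, 2)
```

The improved initial-codebook selection discussed in the abstract above replaces the simple splitting step here, which is exactly where plain LBG is most sensitive.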

3.
《Pattern recognition letters》2001,22(3-4):373-379
Vector quantization (VQ) is a well-known data compression technique. In the codebook design phase as well as the encoding phase, given a block represented as a vector, searching for the closest codeword in the codebook is a time-consuming task. Based on the mean pyramid structure and the range search approach, an improved search algorithm for VQ is presented in this paper. Conceptually, the proposed algorithm has a bandpass-filter effect: at each step, using the derived formula, the search range becomes narrower because some portion of the previous range is eliminated. This reduces the search time and improves on the previous result of Lee and Chen (A fast search algorithm for vector quantization using mean pyramids of codewords. IEEE Trans. Commun. 43(2/3/4) (1995) 1697–1702). Experimental results demonstrate the computational advantage of the proposed algorithm.
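The elimination behind mean-based fast search rests on the Cauchy–Schwarz bound ||x − c||² ≥ k·(mean(x) − mean(c))² for k-dimensional vectors: a codeword whose mean is far enough from the input's mean can be rejected without computing the full distance. A small NumPy sketch of this single-level idea (the paper extends it to a full mean pyramid with a narrowing range; all names here are illustrative):

```python
import numpy as np

def mean_pruned_search(x, codebook):
    """Full-search VQ with mean-based pruning:
    ||x - c||^2 >= k * (mean_x - mean_c)^2 by Cauchy-Schwarz."""
    k = x.size
    means = codebook.mean(1)
    mx = x.mean()
    best, best_d = -1, np.inf
    # visit codewords in order of increasing mean distance
    for j in np.argsort(np.abs(means - mx)):
        if k * (means[j] - mx) ** 2 >= best_d:
            break  # every remaining codeword is at least this far away
        d = ((x - codebook[j]) ** 2).sum()
        if d < best_d:
            best, best_d = j, d
    return best, best_d

rng = np.random.default_rng(0)
cb = rng.normal(size=(256, 16))   # 256 codewords of dimension 16
x = rng.normal(size=16)
j, d = mean_pruned_search(x, cb)
```

Because the codewords are visited in order of increasing mean distance, the first time the bound exceeds the current best distance, the whole remaining range can be discarded at once.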

4.

Vector quantization (VQ) is a very effective way to save bandwidth and storage in speech and image coding. Traditional vector quantization methods can be divided into seven main types according to their codebook generation procedures: tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ. Over the past decade, quantization-based approximate nearest neighbor (ANN) search has developed very quickly, and many methods have emerged for searching images with binary codes in memory on large-scale datasets. Their most striking characteristic is the use of multiple codebooks, which has led to two new kinds of codebook, the linear-combination codebook and the joint codebook, and may be a trend for the future. However, these methods merely seek a balance among speed, accuracy, and memory consumption for ANN search, and sometimes one of the three suffers. Finding a vector quantization method that balances speed and accuracy while consuming a moderate amount of memory is therefore still an open problem.


5.
Vector quantization (VQ) is a powerful technique in digital image compression. Traditional, widely used methods such as the Linde–Buzo–Gray (LBG) algorithm tend to generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain near-globally optimal VQ codebooks, and an alternative method, quantum particle swarm optimization (QPSO), has been developed to improve on the original PSO algorithm. In this paper, we apply a newer swarm algorithm, honey bee mating optimization (HBMO), to construct the VQ codebook. The results are compared with three other methods: the LBG, PSO–LBG, and QPSO–LBG algorithms. Experimental results show that the proposed HBMO–LBG algorithm is more reliable and that its reconstructed images are of higher quality than those generated by the other three methods.
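To make the swarm-based formulation concrete: each particle is a candidate codebook and the fitness is the mean quantization distortion over the training set. The following is a generic PSO-over-codebooks sketch, not the HBMO or QPSO variants of the paper; all names, parameter values, and the toy data are assumptions:

```python
import numpy as np

def distortion(codebook, data):
    """Mean squared distance from each training vector to its nearest codeword."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(1).mean()

def pso_codebook(data, k, n_particles=10, iters=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    # each particle is a whole codebook, seeded from random training vectors
    pos = np.stack([data[rng.choice(len(data), k)] for _ in range(n_particles)]).astype(float)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([distortion(p, data) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    init_f = gbest_f                       # best fitness before any update
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([distortion(p, data) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = pbest_f.argmin()
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f, init_f

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 4))
best_cb, best_f, first_f = pso_codebook(data, k=8)
```

The global best is monotone by construction, so the swarm can only improve on its initial random codebooks; the HBMO and QPSO variants change how candidate codebooks are generated, not this overall structure.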

6.
Adaptive compensated vector quantization   Cited by: 2 (self-citations: 0, by others: 2)
A new image vector quantization algorithm based on LBG codebook design is proposed. The algorithm exploits the energy concentration of the image signal in an orthogonal vector space to effectively narrow the codebook search range and speed up vector quantization. At the same time, adaptive compensation using the difference between the original and reconstructed images effectively overcomes the fatal drawback of vector quantization, namely severe blocking artifacts in the reconstructed image, while maintaining a high compression ratio.

7.
We address the problem of speech compression at very low rates, with the short-term spectrum compressed to less than 20 bits per frame. Current techniques apply structured vector quantization (VQ) to the short-term synthesis filter coefficients to achieve rates of the order of 24 to 26 bits per frame. In this paper we show that temporal correlations in the VQ index stream can be introduced by dynamic codebook ordering, and that these correlations can be exploited by lossless coding approaches to reduce the number of bits per frame of the VQ scheme. The use of lossless coding ensures that no additional distortion is introduced, unlike other interframe techniques. We then detail two constructive algorithms which are able to exploit this redundancy. The first method is a delayed-decision approach, which dynamically adapts the VQ codebook to allow for efficient entropy coding of the index stream. The second is based on a vector subcodebook approach and does not incur any additional delay. Experimental results are presented for both methods to validate the approach.
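The paper's two algorithms (delayed-decision codebook adaptation and vector subcodebooks) are more elaborate, but the underlying idea, reordering the codebook dynamically so that temporally correlated indices become small, entropy-codable symbols with no added distortion, can be illustrated with a plain move-to-front transform. This is purely illustrative, not the authors' method:

```python
def mtf_encode(indices, alphabet_size):
    """Move-to-front over the codebook index stream: repeated or recently
    used indices map to small ranks, which an entropy coder can exploit."""
    table = list(range(alphabet_size))
    ranks = []
    for s in indices:
        r = table.index(s)
        ranks.append(r)
        table.pop(r)
        table.insert(0, s)
    return ranks

def mtf_decode(ranks, alphabet_size):
    """Exact inverse of mtf_encode: the transform is lossless."""
    table = list(range(alphabet_size))
    out = []
    for r in ranks:
        s = table.pop(r)
        out.append(s)
        table.insert(0, s)
    return out

stream = [3, 3, 3, 7, 7, 3, 3, 0]   # a temporally correlated index stream
ranks = mtf_encode(stream, 8)       # -> mostly small symbols
```

Like the paper's schemes, the reordering itself adds no distortion; all the rate saving comes from the entropy coder that follows.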

8.
An initial codebook algorithm for vector quantization   Cited by: 2 (self-citations: 0, by others: 2)
The design of the initial codebook for vector quantization is important, since it affects or even determines the number of iterations of the subsequent codebook-formation algorithm and the quality of the final codebook. To address the strong randomness and poor source matching of existing initial-codebook algorithms, this paper proposes an initial-codebook formation algorithm that sorts the training vectors by the sum of their components and then applies split averaging. The algorithm uses feature quantities of the vectors and does not depend on image structure factors, so it produces a fairly robust initial codebook. Experiments demonstrate the effectiveness of the method; combined with the LBG algorithm, it can further improve codebook quality.

9.
In this paper, fuzzy possibilistic c-means (FPCM) approaches based on penalized and compensated constraints are proposed for vector quantization (VQ) in the discrete cosine transform (DCT) domain for image compression. The approaches are named penalized fuzzy possibilistic c-means (PFPCM) and compensated fuzzy possibilistic c-means (CFPCM). The main idea is to modify the FPCM strategy with penalized or compensated constraints so that the cluster centroids are iteratively updated with penalized or compensated terms, in order to find a near-global solution of the optimization problem. The DCT-transformed information is separated into DC and AC coefficients, and the AC coefficients are trained with the proposed methods to generate a better VQ codebook. The compression performance of the proposed approaches is compared with FPCM and the conventional VQ method; the experimental results show that the proposed approaches achieve promising performance.
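For reference, the unpenalized fuzzy c-means iteration that the PFPCM/CFPCM variants build on alternates a weighted-centroid update with a membership update. This is the standard textbook form; the paper's penalty and compensation terms are omitted, and all names and the toy data are illustrative:

```python
import numpy as np

def fcm(data, k, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), k))
    u /= u.sum(1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ data) / um.sum(0)[:, None]
        d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        d = np.maximum(d, 1e-12)          # guard against zero distance
        inv = d ** (-1.0 / (m - 1.0))     # u_ij proportional to d_ij^(-1/(m-1))
        u = inv / inv.sum(1, keepdims=True)
    return centers, u

# toy data: two tight clusters around (0,0) and (4,4)
pts = np.vstack([np.zeros((30, 2)), np.full((30, 2), 4.0)]) \
      + np.random.default_rng(1).normal(0, 0.05, (60, 2))
centers, u = fcm(pts, 2)
```

The penalized and compensated variants modify the centroid update with extra terms so the iteration is pushed away from poor local solutions; the alternating structure stays the same.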

10.
《Parallel Computing》2002,28(7-8):1079-1093
Vector quantization (VQ) is a widely used algorithm in speech and image data compression. One problem of the VQ methodology is that it requires a large computation time, especially for large codebook sizes. This paper addresses two issues. The first is the parallel construction of the VQ codebook, which can drastically reduce the training time: a master/worker parallel implementation of a VQ algorithm is proposed and executed on the DM-MIMD Alex AVX-2 machine using a pipeline architecture. The second is the ability to accurately predict the machine's performance: using communication and computation models, expected and measured performance are compared. Results show that the two models can accurately predict the performance of the machine for image data compression. An analysis of the metrics normally used in parallel realizations is also conducted.

11.
To address the distortion that arises when vector quantization is used for speaker recognition, and drawing on the articulatory characteristics of Mandarin speech, this paper proposes combining vector quantization with clustering of speech features: before the VQ codebook is trained, the feature vectors are clustered and filtered. Experimental results show that for 4-second test utterances, an ordinary VQ method needs a codebook of 64 codewords to maintain a recognition rate of about 95%, whereas the proposed method needs only 8 codewords, an 8-fold reduction. The method thus not only alleviates, to some extent, the distortion caused by insufficient training samples, but also achieves good recognition results with a much smaller codebook, improving recognition efficiency.

12.
Medical image compression has recently become essential for effectively handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and the commonly used VQ model is Linde–Buzo–Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is treated as an optimization problem, and a bio-inspired algorithm is employed to solve it. This article proposes a VQ codebook construction approach called the L2-LBG method, which utilizes the Lion optimization algorithm (LOA) and the Lempel–Ziv–Markov chain algorithm (LZMA). Once LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. A set of experiments was carried out on benchmark medical images, with a comparative analysis against Cuckoo Search-based LBG (CS-LBG), Firefly-based LBG (FF-LBG), and JPEG2000. Compression efficiency was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that L2-LBG yields effective compression performance with a better-quality reconstructed image.
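The LZMA second stage is straightforward to reproduce: once VQ has produced an index table, its remaining spatial redundancy can be squeezed losslessly. A minimal sketch with Python's standard `lzma` module (the toy index table below is an assumption, standing in for the medical-image indices of the paper):

```python
import lzma
import numpy as np

# a toy VQ index table with long runs, as smooth image regions tend to produce
rng = np.random.default_rng(0)
indices = np.repeat(rng.integers(0, 256, 64), 64).astype(np.uint8)
raw = indices.tobytes()          # 4096 bytes of index data
packed = lzma.compress(raw)      # lossless second-stage compression
recovered = lzma.decompress(packed)
```

Because this stage is lossless, it improves the compression ratio without touching the PSNR of the reconstructed image.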

13.
The traditional LBG algorithm is a purely iterative optimization procedure for obtaining the vector quantization (VQ) codebook, in which an initial codebook is continually refined at every iteration to reduce the distortion between the code-vectors and a given training data set. However, such iterative learning algorithms easily converge to a local optimum when a high-quality initial codebook is not available. In this article, an efficient heuristic learning method, called novel particle swarm optimization (NPSO), is proposed to design a proper VQ codebook for an image compression system. To improve on basic PSO, a centroid-updating mechanism applies a one-step gradient-descent learning step within the heuristic learning procedure, allowing NPSO to quickly reach a near-optimal reconstructed image. To demonstrate the proposed scheme, an image with several horizontal grey bars is first used to show the efficiency of the NPSO learning mechanism; the LBG and NPSO methods are then applied to test reconstruction performance on several standard images: “Lena,” “Airplane,” “Cameraman,” and “Peppers.” In our experiments, the NPSO learning algorithm outperforms conventional LBG methods in building an image compression system.

14.
A secure data-hiding scheme for vector-quantization-compressed images is proposed. To reduce the distortion introduced by data embedding, a genetic algorithm is used to optimally partition the codebook, with the mean squared error between codewords as the optimization criterion, and a data-embedding algorithm based on this codebook partition is proposed. A data-mapping method based on adaptive arithmetic entropy decoding preserves the statistical properties of the cover before and after embedding. Experimental results show that the proposed algorithm achieves a good overall trade-off among capacity, distortion, and security.

15.
In this paper an adaptive hierarchical vector quantization algorithm for image coding is proposed. First the basic codebook is generated adaptively; then the codes are coded into higher-level codes by creating an index codebook that exploits the redundancy present in the codes. This hierarchical scheme lowers the bit rate significantly, while adding little computation and no additional distortion compared with the single-layer adaptive VQ algorithm used to create the basic codebook.

16.
This paper presents a scheme, and its Field Programmable Gate Array (FPGA) implementation, for a system that combines the two-dimensional discrete wavelet transform (2D-DWT) and vector quantization (VQ) for image compression. The 2D-DWT works in a non-separable fashion, using a parallel filter structure with distributed control to compute two resolution levels. The wavelet coefficients of the higher-frequency sub-bands are vector quantized using a multi-resolution codebook, while those of the lower-frequency sub-band at level two are scalar quantized and entropy encoded. VQ is carried out by self-organizing feature map (SOFM) neural nets operating in the recall phase; codebooks are quickly generated off-line using the same nets in the training phase. The complete system, including the 2D-DWT, the multi-resolution-codebook VQ, and the statistical encoder, was implemented on a Xilinx Virtex 4 FPGA and is capable of real-time compression of digital video for grayscale 512 × 512-pixel images. It offers high compression quality (PSNR values around 35 dB) and acceptable compression rates (0.62 bpp).
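The DWT front end can be illustrated with one level of the separable Haar transform. Note the FPGA design above uses a non-separable parallel filter structure over two levels; this textbook sketch only shows how an image splits into the LL/LH/HL/HH sub-bands, with the high-frequency bands feeding the VQ stage and the LL band feeding the next level:

```python
import numpy as np

def haar2d(img):
    """One level of the separable 2-D Haar DWT (averaging normalization)."""
    # horizontal averages and differences over column pairs
    a = (img[:, 0::2] + img[:, 1::2]) / 2
    d = (img[:, 0::2] - img[:, 1::2]) / 2
    # vertical averages and differences over row pairs -> four sub-bands
    LL = (a[0::2] + a[1::2]) / 2
    LH = (a[0::2] - a[1::2]) / 2
    HL = (d[0::2] + d[1::2]) / 2
    HH = (d[0::2] - d[1::2]) / 2
    return LL, LH, HL, HH

img = np.arange(16, dtype=float).reshape(4, 4)   # a smooth toy "image"
LL, LH, HL, HH = haar2d(img)
```

On a smooth gradient like this toy input, the energy concentrates in LL while HH vanishes, which is why the high-frequency sub-bands quantize so cheaply.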

17.
Based on a review and analysis of the literature on multistage vector quantization and simulated annealing, this paper describes the conditions under which an optimal VQ codebook is formed and, building on those two techniques, proposes a multistage vector quantization coding scheme based on simulated annealing. The scheme compensates for the respective shortcomings of multistage vector quantization and simulated annealing in image coding while exploiting their respective strengths. Theory and experiments both show that the algorithm not only reduces codebook storage but also reconstructs images well.

18.
陈妍, 李凤霞. 《计算机工程与应用》 2006, 42(22): 177–178, 213
Vector quantization, with its high compression ratio and simple, efficient decoding, is the main method used for large-scale texture compression in real-time rendering systems. The VQ-based method introduced in reference [1] has been very successful for texture-pyramid compression. Building on it, this paper compresses each level of the texture pyramid using a feature-value codebook, a residual-vector codebook, and an index table, improving the compression ratio while preserving real-time performance. An improved SOFM codebook-generation algorithm is used; experiments show that its codebook quality is better than that of traditional codebook-generation algorithms.

19.
This paper proposes two co-adaptation schemes for self-organizing maps that incorporate Kohonen learning into GA evolution in an attempt to find an optimal vector quantization codebook for images. The Kohonen learning rule used for vector quantization of images is sensitive to the choice of its initial parameters, and the resulting codebook does not guarantee minimum distortion. To tackle these problems, we co-adapt the codebooks by evolution and learning, such that evolution performs the global search and makes inter-codebook adjustments by altering codebook structures, while learning performs the local search and makes intra-codebook adjustments by keeping each codebook's distortion small. Two kinds of co-adaptation scheme, Lamarckian and Baldwinian, are considered in our work. Simulation results show that evolution guided by local learning converges quickly, that the co-adapted codebook produces better reconstructed image quality than its non-learned equivalent, and that Lamarckian co-adaptation turns out to be more appropriate for the VQ problem.

20.
Vector quantization is an important method for image compression. This paper proposes an image vector quantization method based on the Hopfield neural network: first a clustering table is constructed; the table is then run in the serial mode of a discrete Hopfield network; finally, the image is vector quantized with the resulting codeword set. Simulations and comparisons show that the method is effective and that the generated codebook is of better quality than that of the traditional LBG algorithm.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号