Similar Documents
20 similar documents retrieved.
1.
This article develops an evolutionary fuzzy particle swarm optimization (FPSO) learning algorithm that extracts a near-optimum vector quantization (VQ) codebook for image compression. The fuzzy particle swarm optimization vector quantization (FPSOVQ) learning scheme combines the advantages of the adaptive fuzzy inference method (FIM), the simple VQ concept, and the efficient particle swarm optimization (PSO) to automatically create a near-optimum codebook. The FIM is a soft-decision method for measuring the relational grade of a given sequence; in this work it determines the similarity grade between the codebook and the original image patterns. In place of the widely used Linde–Buzo–Gray (LBG) algorithm, the evolutionary PSO learning algorithm optimizes the fuzzy inference system, which extracts appropriate codebooks for compressing several grey-level test images. The proposed FPSOVQ learning scheme is compared with the LBG-based VQ learning method on several real image compression examples to demonstrate its effectiveness.

2.
Recently, medical image compression has become essential for effectively handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and the commonly used VQ model is Linde–Buzo–Gray (LBG), which constructs a locally optimal codebook to compress images. Codebook construction is treated as an optimization problem and solved with a bio-inspired algorithm. This article proposes a VQ codebook construction approach called the L2-LBG method, utilizing the Lion optimization algorithm (LOA) and the Lempel–Ziv–Markov chain algorithm (LZMA). Once LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. Experiments were carried out on benchmark medical images, with a comparative analysis against Cuckoo Search-based LBG (CS-LBG), Firefly-based LBG (FF-LBG) and JPEG2000. Compression efficiency was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that L2-LBG yields effective compression with a better-quality reconstructed image.

3.
Vector quantization (VQ) is a powerful technique in digital image compression. Traditional widely used methods such as the Linde–Buzo–Gray (LBG) algorithm tend to generate only locally optimal codebooks. Recently, particle swarm optimization (PSO) has been adapted to obtain near-globally optimal VQ codebooks, and an alternative method, quantum particle swarm optimization (QPSO), was developed to improve on the original PSO algorithm. In this paper, we apply a new swarm algorithm, honey bee mating optimization (HBMO), to construct the VQ codebook. The results are compared with three other methods: the LBG, PSO–LBG and QPSO–LBG algorithms. Experimental results show that the proposed HBMO–LBG algorithm is more reliable and that its reconstructed images have higher quality than those of the other three methods.
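Most of the entries in this listing take LBG codebook training as their baseline. As a point of reference, a minimal sketch of LBG (splitting initialization followed by k-means-style refinement) on toy 2-D data might look like the following; the data, function names, and parameter values are illustrative only, and `size` is assumed to be a power of two:

```python
def nearest(v, codebook):
    """Index of the codeword closest to v under squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))

def lbg_codebook(vectors, size, iters=20, eps=0.01):
    """Train a VQ codebook by LBG splitting: start from the global centroid,
    split every codeword by +/-eps, then refine each level with k-means-style
    assignment/update passes."""
    dim = len(vectors[0])
    codebook = [tuple(sum(v[d] for v in vectors) / len(vectors) for d in range(dim))]
    while len(codebook) < size:
        # split every codeword into a perturbed pair
        codebook = [tuple(x + s * eps for x in c) for c in codebook for s in (1, -1)]
        for _ in range(iters):
            # assignment step: each training vector joins its nearest codeword's cell
            cells = [[] for _ in codebook]
            for v in vectors:
                cells[nearest(v, codebook)].append(v)
            # update step: each codeword moves to the centroid of its cell
            codebook = [tuple(sum(xs) / len(cell) for xs in zip(*cell)) if cell else c
                        for c, cell in zip(codebook, cells)]
    return codebook

# two well-separated clusters of toy 2-D "image blocks"
data = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (9.0, 9.0), (9.1, 9.0), (9.0, 9.1)]
cb = lbg_codebook(data, size=2)
print(sorted(round(c[0], 2) for c in cb))  # → [0.03, 9.03], one codeword per cluster
```

The locally optimal behaviour criticized throughout these abstracts comes from the update step: each codeword can only drift toward the centroid of its current cell, so a poor initial split is never escaped, which is exactly what the PSO, QPSO, and HBMO variants aim to fix.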

4.
An image compression method based on the wavelet transform
A grey-level image compression coding method based on the wavelet transform is proposed. The basic idea is to use the wavelet transform to produce a multiresolution decomposition of the image and then encode the decomposed image with vector quantization (VQ). For the initial codebook selection of the LBG algorithm, an improved random selection method based on the characteristics of the vector components is proposed, which avoids possible uneven cell distribution, improves codebook quality, and also improves the quality of the reconstructed image.

5.
《Parallel Computing》2002,28(7-8):1079-1093
Vector quantization (VQ) is a widely used algorithm in speech and image data compression. One of the problems of the VQ methodology is that it requires a large computation time, especially for large codebook sizes. This paper addresses two issues. The first deals with the parallel construction of the VQ codebook, which can drastically reduce the training time; a master/worker parallel implementation of a VQ algorithm is proposed and executed on the DM-MIMD Alex AVX-2 machine using a pipeline architecture. The second deals with the ability to accurately predict the machine's performance. Using communication and computation models, a comparison between expected and real performance is carried out. Results show that the two models can accurately predict the performance of the machine for image data compression. An analysis of metrics normally used in parallel realization is also conducted.

6.
Text-independent speaker recognition using a genetic algorithm
In vector quantization (VQ) based speaker recognition, codebook design with the K-means method easily falls into a local optimum, and the choice of the initial codebook strongly affects the final codebook. To solve this problem, the genetic algorithm (GA) is combined with non-parametric-model-based VQ, yielding a GA-K algorithm for VQ codebook design. The algorithm uses the global optimization ability of the GA to obtain an optimal VQ codebook, avoiding the LBG algorithm's tendency to converge to a local optimum; through the GA's own parameters, combined with the fast convergence of the K-means method, it searches the training vector space for the globally optimal codebook. Experimental results show that the GA-K algorithm outperforms the LBG algorithm and balances convergence and recognition rate well.

7.

In this paper, we propose a reversible data hiding scheme that exploits the centroid formula. Specifically, we use it to define a centroid boundary vector and a centroid state codebook CSCB. Initially, our centroid boundary vectors and CSCBs are the same as the side match vector quantization (SMVQ) algorithm’s boundary vectors and state codebooks SCBs. For each VQ index, the proposed scheme exploits the centroid formula to update its centroid boundary vector and the corresponding CSCB. The updating is coupled with a heuristic to select the best state codebook (i.e., either SCB or CSCB) for each VQ index, which generates a highly compressible distribution of index values. Our experimental results show that the proposed scheme can embed n = 1, 2, 3, and 4 bit per index (bpi) at bit rates of 0.332, 0.394, 0.457, and 0.519 bit per pixel (bpp), respectively, for the main codebook size N = 256. These results confirm that the proposed scheme improves recent VQ and SMVQ based reversible data hiding schemes.


8.
To enhance the traditional vector quantisation (VQ) system with a watermarking ability, a digital image watermarking scheme is presented that modifies the VQ indices to carry watermark bits. The scheme partitions the main codebook into two sub-codebooks by referring to the user key. Then, for each input vector of the cover image, a sub-codebook is selected according to the watermark bit to be embedded, and the traditional VQ coding procedure is performed on the vector using that sub-codebook. Furthermore, to improve the performance of the scheme, a genetic codebook partition (GCP) procedure is proposed, which employs the genetic algorithm (GA) to find a better way to split the codebook. It is demonstrated that the proposed methods provide faster encoding, better imperceptibility, stronger robustness under common attacks, and easier implementation than related VQ-based watermarking schemes in the literature.
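The index-domain embedding described above can be made concrete with a small sketch. Note this is not the paper's GCP procedure: a key-seeded random shuffle stands in for the genetic partition, and all names and values below are illustrative:

```python
import random

def partition_codebook(codebook, user_key):
    """Split the main codebook's indices into two sub-codebooks using a
    key-seeded shuffle (a stand-in for the genetic codebook partition)."""
    rng = random.Random(user_key)
    idx = list(range(len(codebook)))
    rng.shuffle(idx)
    half = len(idx) // 2
    return set(idx[:half]), set(idx[half:])

def embed_bit(block, codebook, subs, bit):
    """VQ-encode one cover-image block using only the sub-codebook
    selected by the watermark bit; the chosen index carries the bit."""
    return min(subs[bit],
               key=lambda i: sum((a - b) ** 2 for a, b in zip(block, codebook[i])))

def extract_bit(index, subs):
    """Recover the watermark bit as the sub-codebook the index falls in."""
    return 0 if index in subs[0] else 1

codebook = [(0, 0), (2, 2), (5, 5), (9, 9)]
subs = partition_codebook(codebook, user_key=42)
indices = [embed_bit((4.8, 5.1), codebook, subs, b) for b in (0, 1)]
print([extract_bit(i, subs) for i in indices])  # → [0, 1]
```

Because the two sub-codebooks are disjoint, extraction is exact; the cost of the scheme is the extra distortion incurred whenever the globally nearest codeword lies in the other sub-codebook, which is what a better (e.g. GA-driven) partition tries to minimize.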

9.
Efficient vector quantization using genetic algorithm
This paper proposes a new codebook generation algorithm for image data compression using a combined scheme of principal component analysis (PCA) and genetic algorithm (GA). The combined scheme makes full use of the near-global optimal searching ability of the GA and the computational complexity reduction of PCA to compute the codebook. Experimental results show that the algorithm outperforms the popular LBG algorithm in both computational efficiency and image compression performance.

10.
To address the drop in codebook quality that occurs in the PCA-and-genetic-algorithm codebook design algorithm when the codebook size exceeds 64, an improved codebook design algorithm is proposed. First, principal component analysis reduces the dimensionality of the training vectors to lower the computational complexity; then the global optimization ability of the genetic algorithm is used to compute a near-globally optimal codebook. Experimental results show that, compared with the original algorithm and the classic LBG algorithm, the codebook generated by the proposed algorithm performs significantly better, and its computation time is also less than that of the LBG algorithm.

11.
An important task of speaker verification is to generate speaker-specific models and match an input speaker's utterance against them. This paper compares the performance of a text-dependent speaker verification system using Mel-frequency cepstral coefficient (MFCC) features with different vector quantization (VQ) based speaker modelling techniques for generating the speaker-specific models. Speaker-specific information is mainly represented by spectral features, from which we build the model that determines the claimed identity of the speaker. In the modelling part, we used Linde–Buzo–Gray (LBG) VQ, a proposed adaptive LBG VQ, and fuzzy c-means (FCM) VQ to generate the speaker-specific models. Experiments on a microphone database show that accuracy depends significantly on the codebook size in all VQ techniques, and that FCM VQ accuracy additionally depends on the value of the learning parameter of the objective function.
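The matching step shared by all these VQ modelling variants can be sketched as an average-distortion score: the claimed identity is accepted when the utterance's feature vectors are, on average, close to that speaker's codebook. The 2-D "features" below are illustrative toys, not MFCCs from the paper:

```python
def avg_distortion(features, codebook):
    """Average squared distance from each feature vector to its nearest
    codeword: the VQ score used to test a claimed speaker identity
    (lower means a better match)."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(f, c)) for c in codebook)
               for f in features) / len(features)

claimed_cb = [(1.0, 1.0), (3.0, 3.0)]    # codebook trained on the claimed speaker
impostor_cb = [(8.0, 8.0), (9.0, 9.0)]   # codebook of some other speaker
utterance = [(1.1, 0.9), (2.9, 3.2), (1.0, 1.2)]
print(avg_distortion(utterance, claimed_cb) < avg_distortion(utterance, impostor_cb))  # → True
```

In a real system the score would be compared against a tuned decision threshold; the paper's point is that this score, and hence verification accuracy, is sensitive to how the codebook was trained and how large it is.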

12.
A GPU implementation for LBG and SOM training
Vector quantization (VQ) is an effective technique applicable in a wide range of areas, such as image compression and pattern recognition. The most time-consuming procedure of VQ is codebook training, and two of the frequently used training algorithms are LBG and self-organizing map (SOM). Nowadays, desktop computers are usually equipped with programmable graphics processing units (GPUs), whose parallel data-processing ability is ideal for codebook training acceleration. Although there are some GPU algorithms for LBG training, their implementations suffer from a large amount of data transfer between CPU and GPU and a large number of rendering passes within a training iteration. This paper presents a novel GPU-based training implementation for LBG and SOM training. More specifically, we utilize the random write ability of the vertex shader to reduce the overheads mentioned above. Our experimental results show that our approach can run four times faster than the previous approach.

13.
许允喜, 俞一彪 《计算机应用》2008, 28(2): 339-341
Vector quantization (VQ) is one of the most widely used modelling methods in text-independent speaker recognition, and its main problem is codebook design. Speech feature parameters are high-dimensional with complex sample distributions, so codebook design is difficult, and the traditional LBG algorithm obtains only a locally optimal codebook. A new VQ codebook design method is proposed that integrates niching techniques and the K-means algorithm into the training process of an immune algorithm, forming a hybrid immune algorithm. An improved mutation operator tailored to high-dimensional data clustering reduces the blindness of random mutation and strengthens the population's global and local search ability, while vaccination improves convergence speed. Speaker recognition experiments show that, compared with traditional LBG and hybrid-genetic-algorithm-based VQ codebook design, the method obtains better model parameters and further improves the system's recognition rate.

14.
This paper presents a novel data hiding scheme for VQ-compressed images. The scheme first uses SMVQ prediction to classify encoding blocks into different types, then uses different codebooks and encoding strategies to perform encoding and data hiding simultaneously. With SMVQ prediction, no extra data is required to identify the combination of encoding strategy and codebook, which helps improve compression performance. Furthermore, the proposed scheme adaptively combines VQ and SMVQ encoding characteristics to provide higher image quality of the stego-images while the size of the hidden payload remains the same. Experimental results show that the proposed scheme outperforms previously proposed schemes in both stego-image quality and compression performance.

15.
Typical competitive learning algorithms are studied and analysed, and a probability-sensitive competitive learning (PSCL) algorithm based on neuron winning probability is proposed. Unlike traditional competitive learning, in which only a single winning neuron learns, the PSCL algorithm lets every neuron learn to a different degree according to its winning probability, by adjusting the distortion distance, which effectively overcomes the neuron under-utilization problem.

16.
Vector quantization is an effective data compression technique; because of its simple algorithm and high compression ratio, it is widely used in data compression coding. Based on a study of the grey-level characteristics of image blocks, a composite mean/vector-quantization coding algorithm is proposed that classifies blocks by smoothness: smooth blocks are coded by their mean, and non-smooth blocks by vector quantization. This saves the storage for smooth codewords, improves codebook storage efficiency, and greatly increases coding speed. In addition, a codeword rotation and inversion (2R) compression algorithm reduces codebook storage to 1/8, and the search is optimized with an extended nearest-neighbour block search (EBNNS) algorithm. While preserving image quality, the overall image coding speed is about 7.7 times that of full-search ordinary vector quantization.
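The smooth/non-smooth classification described above can be sketched as follows; the variance threshold, the 4-pixel "blocks", and the tiny codebook are illustrative choices, not values from the paper:

```python
def encode_block(block, codebook, var_threshold=0.5):
    """Composite coding: a smooth block (low grey-level variance) is sent as
    its mean; a non-smooth block is sent as the index of its nearest codeword."""
    mean = sum(block) / len(block)
    var = sum((x - mean) ** 2 for x in block) / len(block)
    if var < var_threshold:
        return ('mean', round(mean, 3))          # cheap scalar code for flat areas
    i = min(range(len(codebook)),                # full VQ search for textured areas
            key=lambda k: sum((a - b) ** 2 for a, b in zip(block, codebook[k])))
    return ('vq', i)

codebook = [(0, 0, 0, 0), (10, 10, 0, 0), (0, 0, 10, 10)]
print(encode_block((5.0, 5.1, 4.9, 5.0), codebook))   # → ('mean', 5.0)
print(encode_block((9.0, 10.0, 0.5, 0.0), codebook))  # → ('vq', 1)
```

The speedup reported in the abstract comes from exactly this split: flat blocks skip the codebook search entirely, and the 2R and EBNNS refinements then shrink and accelerate the remaining VQ searches.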

17.
An efficient LBG algorithm based on simulated annealing
To address the traditional LBG codebook design algorithm's sensitivity to the initial codebook and its tendency to fall into local minima during iteration, an improved LBG algorithm based on simulated annealing is proposed, detailing the characterization of the perturbation factor, the choice of perturbation strategy, the stability criterion, and the temperature-decrease schedule during annealing. Simulation results show that the improved algorithm effectively avoids sensitivity to the initial codebook and improves both search performance and the quality of the reconstructed compressed images.

18.
An initial codebook algorithm for vector quantization
Initial codebook design for vector quantization is important: it affects or determines the number of iterations of the subsequent codebook-formation algorithm and the quality of the final codebook. To address the strong randomness and poor source matching of existing initial codebook algorithms, an initial codebook formation algorithm is proposed that sorts the training vectors by their component sums and then performs split averaging. The algorithm uses vector feature quantities and does not depend on image structure factors, producing a robust initial codebook. Experiments demonstrate the method's effectiveness; combined with the LBG algorithm, it can further improve codebook quality.

19.
A fast vector quantization coding method based on the wavelet tree structure
A vector quantization image coding method is proposed that exploits human visual properties and the wavelet tree structure for fast image coding, called tree-structured fast vector quantization coding. After analysing the vector quantization characteristics of this method, a statistical method for codebook generation is designed, and a fast algorithm for vector quantization coding is presented.

20.
An image compression technique is proposed that aims at both robustness to the transmission bit errors common in wireless image communication and sufficient visual quality of the reconstructed images. Error robustness is achieved by biorthogonal wavelet subband image coding with multistage gain-shape vector quantization (MS-GS VQ), which uses three stages of signal decomposition to reduce the effect of transmission bit errors by distributing image information among many blocks. Good visual quality of the reconstructed images is obtained by applying genetic algorithms (GAs) to codebook generation, producing reconstruction capabilities superior to conventional techniques. The proposed decomposition scheme also supports the use of GAs because decomposition reduces the problem size. Simulations are performed to evaluate the proposed coding scheme with respect to both transmission bit errors and distortion of the reconstructed images. The results show that the proposed MS-GS VQ with good codebooks designed by GAs provides not only better robustness to transmission bit errors but also a higher peak signal-to-noise ratio, even under high bit error rate conditions.
