Similar Literature
20 similar documents found (search time: 31 ms)
1.
An Image Compression Method Based on the Wavelet Transform   Cited by 8 (self-citations: 0, others: 8)
A grey-level image compression coding method based on the wavelet transform is proposed. The basic idea is to use the wavelet transform to obtain a multiresolution decomposition of the image and then encode the decomposed subimages with vector quantization (VQ). For the selection of the initial codebook in the LBG algorithm, an improved random selection method based on the characteristics of the individual vector components is proposed, which avoids possible uneven population of the cells, improves codebook quality, and also improves the quality of the reconstructed image.
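Since several entries below build on this wavelet-plus-VQ pipeline, a minimal sketch of the decomposition step may help. The abstract does not name the wavelet; the one-level Haar analysis and every identifier below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar wavelet decomposition (illustrative).

    Splits a grey-level image with even dimensions into the low-pass
    approximation LL plus detail subbands LH, HL, HH; recursing on LL
    gives the multiresolution pyramid whose subbands VQ then encodes.
    """
    img = img.astype(np.float64)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

# two-level multiresolution decomposition of a toy 256x256 image
img = np.random.randint(0, 256, (256, 256))
ll1, lh1, hl1, hh1 = haar2d_level(img)
ll2, lh2, hl2, hh2 = haar2d_level(ll1)  # recurse on the approximation
```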

2.
This article develops an evolutional fuzzy particle swarm optimization (FPSO) learning algorithm that self-extracts a near-optimum vector quantization (VQ) codebook for image compression. The fuzzy particle swarm optimization vector quantization (FPSOVQ) learning scheme combines the advantages of the adaptive fuzzy inference method (FIM), the simple VQ concept, and efficient particle swarm optimization (PSO) to automatically create a near-optimum codebook for image compression. The FIM is a soft-decision method for measuring the relational grade of a given sequence; in our research, it is applied to determine the grade of similarity between the codebook and the original image patterns. Instead of the popular Linde–Buzo–Gray (LBG) algorithm, the evolutional PSO learning algorithm is used to optimize the fuzzy inference system, which extracts appropriate codebooks for compressing several grey-level test images. The proposed FPSOVQ learning scheme is compared with the LBG-based VQ learning method on several real image compression examples to demonstrate its effectiveness.

3.
The traditional LBG algorithm is a purely iterative optimization procedure for obtaining a vector quantization (VQ) codebook: an initial codebook is continually refined at every iteration to reduce the distortion between the code-vectors and a given training data set. However, such iterative learning algorithms easily converge to a local optimum when a high-quality initial codebook is not available. In this article, an efficient heuristic learning method, called novel particle swarm optimization (NPSO), is proposed to design a proper VQ codebook for an image compression system. To improve the performance of basic PSO, a centroid updating machine applies a one-step-size gradient descent learning step within the heuristic learning procedure, allowing the presented NPSO to quickly reach a near-optimal reconstructed image. To demonstrate the proposed NPSO learning scheme, an image consisting of several horizontal grey bars is first used to show the efficiency of the NPSO learning mechanism. The LBG and NPSO learning methods are then applied to test reconstruction performance on the images "Lena," "Airplane," "Cameraman," and "Peppers." In our experiments, the NPSO learning algorithm provides higher performance than the conventional LBG method when building an image compression system.
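Most entries on this page refine or replace the LBG baseline described above, so a minimal sketch of its refinement loop is given here; the initialization, block size, and all names are assumptions for illustration, not the article's implementation.

```python
import numpy as np

def lbg(train, k, iters=20, seed=0):
    """Plain LBG / generalized Lloyd codebook training (sketch).

    train : (n, d) array of training vectors, e.g. flattened 4x4 blocks.
    k     : codebook size.
    Each iteration assigns every vector to its nearest codeword, then
    moves each codeword to the centroid of its cell; this is the step
    that can get trapped in a local optimum.
    """
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), k, replace=False)].astype(float)
    for _ in range(iters):
        # nearest-codeword assignment under squared Euclidean distortion
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(1)
        for j in range(k):
            cell = train[nearest == j]
            if len(cell):              # leave empty cells unchanged
                codebook[j] = cell.mean(0)
    return codebook

# toy usage: train a 64-word codebook on random 4x4 blocks
blocks = np.random.rand(1000, 16)
cb = lbg(blocks, k=64)
```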

4.
Vector quantization (VQ) is a powerful technique in digital image compression. Traditional widely used methods such as the Linde–Buzo–Gray (LBG) algorithm tend to generate a locally optimal codebook. Recently, particle swarm optimization (PSO) has been adapted to obtain a near-globally optimal VQ codebook, and an alternative method, quantum particle swarm optimization (QPSO), has been developed to improve on the original PSO algorithm. In this paper, we apply another swarm algorithm, honey bee mating optimization (HBMO), to construct the VQ codebook. The results are compared with three other methods: the LBG, PSO–LBG, and QPSO–LBG algorithms. Experimental results show that the proposed HBMO–LBG algorithm is more reliable and yields reconstructed images of higher quality than the other three methods.
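A sketch of the PSO-over-codebooks pattern that the PSO–LBG and QPSO–LBG variants mentioned above share: each particle encodes one flattened codebook, and mean squared distortion serves as fitness. The inertia and acceleration constants and all identifiers are assumptions for illustration.

```python
import numpy as np

def distortion(codebook, train):
    """Mean squared distortion of the training set under a codebook."""
    d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.min(1).mean()

def pso_codebook(train, k, particles=10, iters=50,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Each particle is a full (k, d) codebook flattened to a vector."""
    rng = np.random.default_rng(seed)
    n, d = train.shape
    X = train[rng.choice(n, (particles, k))].reshape(particles, k * d)
    V = np.zeros_like(X)
    pbest = X.copy()
    pbest_f = np.array([distortion(x.reshape(k, d), train) for x in X])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        f = np.array([distortion(x.reshape(k, d), train) for x in X])
        better = f < pbest_f                    # update personal bests
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()  # and the global best
    return gbest.reshape(k, d)
```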

5.
Typical competitive learning algorithms are studied and analyzed, and a probability-sensitive competitive learning (PSCL) algorithm based on neuron winning probabilities is proposed. Unlike traditional competitive learning algorithms, in which only a single winning neuron gets to learn, the PSCL algorithm lets every neuron learn to a different degree according to its winning probability by adjusting the distortion distance, which effectively overcomes the neuron under-utilization problem.
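The abstract does not give the exact PSCL update rule, so the sketch below only illustrates the general idea it describes: every neuron receives some update, weighted by a winning probability derived from a win-frequency-adjusted distortion. The probability formula and all names are assumptions.

```python
import numpy as np

def pscl_like(train, k, epochs=5, lr=0.05, seed=0):
    """Soft competitive learning sketch: all neurons learn a little,
    so under-utilized ("dead") neurons are pulled into use."""
    rng = np.random.default_rng(seed)
    w = train[rng.choice(len(train), k, replace=False)].astype(float)
    wins = np.ones(k)
    for _ in range(epochs):
        for x in train[rng.permutation(len(train))]:
            d2 = ((w - x) ** 2).sum(1) * wins      # distortion scaled by win count
            p = np.exp(-d2 / (d2.mean() + 1e-12))  # illustrative winning probability
            p /= p.sum()
            w += lr * p[:, None] * (x - w)         # every neuron learns differently
            wins[p.argmax()] += 1
    return w
```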

6.
A GPU implementation for LBG and SOM training   Cited by 1 (self-citations: 1, others: 0)
Vector quantization (VQ) is an effective technique applicable in a wide range of areas, such as image compression and pattern recognition. The most time-consuming procedure of VQ is codebook training, and two of the most frequently used training algorithms are LBG and the self-organizing map (SOM). Nowadays, desktop computers are usually equipped with programmable graphics processing units (GPUs), whose parallel data-processing ability is ideal for accelerating codebook training. Although there are some GPU algorithms for LBG training, their implementations suffer from a large amount of data transfer between CPU and GPU and a large number of rendering passes within a training iteration. This paper presents a novel GPU-based implementation for LBG and SOM training. More specifically, we utilize the random-write ability of the vertex shader to reduce the overheads mentioned above. Our experimental results show that our approach runs four times faster than the previous approach.

7.
A Fast Vector Quantization Coding Method Using the Wavelet Tree Structure   Cited by 3 (self-citations: 0, others: 3)
A vector quantization image coding method for fast image coding, based on properties of human vision and the wavelet tree structure, is proposed; it is referred to as tree-structured fast vector quantization coding. After analyzing the vector quantization characteristics of this method, a statistical method for generating the codebook is designed, and a fast algorithm for vector quantization encoding is presented.

8.
Vector quantization is an effective data compression technique; because its algorithm is simple and achieves a high compression ratio, it is widely used in data compression coding. Based on a study of the grey-level characteristics of image blocks, a composite mean/vector-quantization coding algorithm that distinguishes smooth from non-smooth image blocks is proposed: smooth blocks are coded by their mean, while non-smooth blocks are coded by vector quantization. This saves the storage space of the smooth codewords, improves codebook storage efficiency, and greatly increases coding speed. A codeword rotation and inversion (2R) compression algorithm further reduces the codebook storage to 1/8, and the search is optimized with the extended-block nearest-neighbour search (EBNNS) algorithm. While preserving image quality, the overall image coding speed of the system is on average about 7.7 times that of full-search plain vector quantization.
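A sketch of the smooth/non-smooth split at the heart of this composite coder; the variance threshold and all names are illustrative assumptions (the 2R codebook folding and EBNNS search are not reproduced here).

```python
import numpy as np

def encode_blocks(blocks, codebook, var_thresh=25.0):
    """Composite coding: smooth blocks send only their mean,
    non-smooth blocks send a VQ codeword index.

    blocks   : (n, d) flattened image blocks.
    codebook : (k, d) VQ codebook for the non-smooth blocks.
    """
    tokens = []
    for b in blocks:
        if b.var() < var_thresh:          # smooth: one mean value suffices
            tokens.append(('M', b.mean()))
        else:                             # textured: nearest-codeword search
            idx = ((codebook - b) ** 2).sum(1).argmin()
            tokens.append(('V', idx))
    return tokens

def decode_blocks(tokens, codebook, d):
    """Rebuild the block stream from the mixed token list."""
    rows = [np.full(d, v) if t == 'M' else codebook[v] for t, v in tokens]
    return np.array(rows)
```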

9.
A High-Speed Computer Screen Information Recording System Based on FPGA+ARM   Cited by 1 (self-citations: 0, others: 1)
A self-developed high-speed computer screen information recording system is introduced. The system supports VGA/DVI input and continuous compression and storage of computer screen images at SVGA, XGA, SXGA, UXGA, and other resolutions. Experiments show that the single-frame image compression performance of the system approaches the JPEG2000 standard, with PSNR values better than the JPEG standard.

10.
A Vector Quantizer Design Algorithm Based on Robust Statistics   Cited by 1 (self-citations: 0, others: 1)
As the basic algorithm of vector quantization, the LBG algorithm is of classic significance. However, a small number of outlier vectors always exist in the training images, which distort the distribution of the codewords during codebook training and thereby degrade compression performance, so the advantages of vector quantization cannot be fully realized. Designing the vector quantizer with a method based on robust statistics reduces the outlier vectors in the codebook while strengthening the weight of the central vectors, which not only minimizes codebook redundancy but also substantially improves compression performance. Experimental results show that codebooks designed with the robust-statistics-based method achieve considerably better compression performance than the traditional LBG algorithm, and the subjective and objective quality of the reconstructed images is satisfactory.
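The abstract does not state which robust estimator is used, so the sketch below shows one common robust-statistics tactic consistent with its description: trim the vectors farthest from each cell's mean before recomputing the centroid, which suppresses outliers and pulls the codeword toward the central vectors. It can replace the centroid step of the LBG loop sketched under entry 3.

```python
import numpy as np

def robust_centroid(cell, trim=0.1):
    """Centroid after discarding the `trim` fraction of vectors lying
    farthest from the current cell mean (outlier suppression sketch)."""
    m = cell.mean(0)
    d2 = ((cell - m) ** 2).sum(1)
    keep = d2.argsort()[: max(1, int(len(cell) * (1 - trim)))]
    return cell[keep].mean(0)
```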

11.
许允喜, 俞一彪. 《计算机应用》2008, 28(2): 339-341
Vector quantization (VQ) is one of the most widely used modelling methods in text-independent speaker recognition; its main problem is codebook design. Speech feature parameters are high-dimensional data with a complex sample distribution, so codebook design is very difficult, and the traditional LBG algorithm can only obtain a locally optimal codebook. A new VQ codebook design method is proposed in which niching techniques and the K-means algorithm are incorporated into the training process of an immune algorithm to form a hybrid immune algorithm. An improved mutation operator designed for clustering high-dimensional data reduces the blindness of random mutation and strengthens the global and local search abilities of the population, while vaccination speeds up the convergence of the algorithm. Speaker recognition experiments show that, compared with the traditional LBG-based and hybrid-genetic-algorithm-based VQ codebook design methods, this method obtains better model parameters and further improves the recognition rate of the system.

12.
Text-Independent Speaker Recognition Using a Genetic Algorithm   Cited by 1 (self-citations: 0, others: 1)
In vector quantization (VQ) based speaker recognition systems, codebook design with the K-means method easily falls into local optima, and the choice of the initial codebook strongly affects the final codebook. To solve this problem, the genetic algorithm (GA) is combined with VQ based on a non-parametric model, yielding a GA-K algorithm for VQ codebook design. The algorithm uses the global optimization ability of the GA to obtain an optimal VQ codebook, avoiding the LBG algorithm's tendency to converge to local optima; through the GA's own parameters, combined with the fast convergence of the K-means method, it searches out the globally optimal codebook in the training vector space. Experimental results show that the GA-K algorithm outperforms the LBG algorithm and balances convergence and recognition rate well.
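A compressed illustration of the GA-K idea: a small population of codebooks evolved by crossover and mutation, with each child refined by a few K-means steps. Population size, rates, and every identifier are assumptions, not the paper's settings.

```python
import numpy as np

def kmeans_step(cb, train, steps=2):
    """A few K-means refinements of a codebook (the 'K' in GA-K)."""
    for _ in range(steps):
        nearest = ((train[:, None] - cb[None]) ** 2).sum(-1).argmin(1)
        for j in range(len(cb)):
            cell = train[nearest == j]
            if len(cell):
                cb[j] = cell.mean(0)
    return cb

def fitness(cb, train):
    """Negative mean distortion: higher is better."""
    return -((train[:, None] - cb[None]) ** 2).sum(-1).min(1).mean()

def ga_k(train, k, pop=8, gens=30, mut=0.02, seed=0):
    rng = np.random.default_rng(seed)
    P = [kmeans_step(train[rng.choice(len(train), k)].astype(float), train)
         for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda cb: -fitness(cb, train))     # best first
        children = []
        for _ in range(pop // 2):
            a = P[rng.integers(pop // 2)]              # parents drawn from
            b = P[rng.integers(pop // 2)]              # the fitter half
            mask = rng.random((k, 1)) < 0.5
            child = np.where(mask, a, b)               # codeword-level crossover
            child += mut * rng.standard_normal(child.shape) * train.std()
            children.append(kmeans_step(child, train)) # K-means refinement
        P = P[:pop - len(children)] + children         # elitist replacement
    return max(P, key=lambda cb: fitness(cb, train))
```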

13.
An important task of speaker verification is to generate speaker-specific models and match an input speaker's utterance against these models. This paper compares the performance of a text-dependent speaker verification system using Mel Frequency Cepstral Coefficient (MFCC) features and different vector quantization (VQ) based speaker modelling techniques to generate the speaker-specific models. Speaker-specific information is mainly represented by spectral features, and from these features we develop the model that serves as the key entity for determining the claimed identity of the speaker. In the modelling part, we used Linde–Buzo–Gray (LBG) VQ, a proposed adaptive LBG VQ, and Fuzzy C-Means (FCM) VQ to generate the speaker-specific models. Experimental results on a microphone-recorded database show that accuracy depends significantly on the codebook size for all VQ techniques and, for FCM VQ, also on the value of the learning parameter of the objective function. The experiments thus show how the accuracy of a speaker verification system depends on the representation of the codebook, the codebook size in the VQ modelling techniques, and the learning parameter in FCM VQ.
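A minimal fuzzy C-means update, the core of the FCM VQ modelling compared above; the fuzzifier m below is assumed to be the "learning parameter of the objective function" the abstract refers to, and all other names are illustrative.

```python
import numpy as np

def fcm_codebook(train, k, m=2.0, iters=30, seed=0, eps=1e-9):
    """Fuzzy C-means: every training vector belongs to every codeword
    with a membership degree; m > 1 controls the fuzziness."""
    rng = np.random.default_rng(seed)
    C = train[rng.choice(len(train), k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((train[:, None] - C[None]) ** 2).sum(-1) + eps  # (n, k) distances
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(1, keepdims=True)                   # membership matrix
        Um = U ** m
        C = (Um.T @ train) / Um.sum(0)[:, None]               # weighted centroids
    return C
```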

14.
To address the degradation of codebook performance when the codebook size exceeds 64 in the codebook design algorithm based on principal component analysis and genetic algorithms, an improved codebook design algorithm is proposed. Principal component analysis is first applied to reduce the dimensionality of the training vectors and hence the computational complexity; the global optimization ability of the genetic algorithm is then used to compute a near-globally-optimal codebook. Experimental results show that, compared with the original algorithm and the classic LBG algorithm, the codebooks generated by the proposed algorithm perform noticeably better, and the computation time is also less than that of the LBG algorithm.

15.
Devices using a single sensor to capture colour images are cheaper because of the high cost of Charge-Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) sensors. Single-sensor devices use a Colour Filter Array (CFA) to sample one colour band at every pixel location, and a demosaicking process interpolates the two missing colours from the surrounding pixels. Typically, compression is applied to the demosaicked images, which may be inefficient because the different colour planes are compressed individually. This work investigates compressing the raw data before demosaicking and then demosaicking to reconstruct the R, G, and B bands. A novel vector quantization (VQ) technique for encoding the wavelet-decomposed image using a modified Artificial Bee Colony (ABC) optimization algorithm is proposed. The proposed technique is compared with genetic-algorithm-based VQ, ABC-based quantization, and the standard LBG and Lloyd algorithms. Results show a higher Peak Signal-to-Noise Ratio (PSNR), indicating better reconstruction.

16.
The basic goal of medical image compression is to reduce the bit rate and enhance compression efficiency for the transmission and storage of medical imagery while maintaining acceptable diagnostic image quality. Because of storage, transmission bandwidth, and picture archiving and communication constraints, and the limitations of conventional compression methods, medical imagery needs to be compressed selectively to reduce transmission time and storage cost while preserving high diagnostic quality. Another important reason for context-based medical image compression is the requirement of high spatial resolution and contrast sensitivity. In medical images, the contextual region is the area that contains the most useful and important information and must be coded carefully without appreciable distortion. A novel scheme for context-based coding is proposed here that yields significantly better compression rates than the general JPEG and JPEG2K methods. In the proposed method, the contextual part of the image is encoded selectively, with high priority, at a very low compression rate (high bpp), while the background of the image is encoded separately, with low priority, at a high compression rate (low bpp); the two are then recombined to reconstruct the image. As a result, high overall compression rates, better diagnostic image quality, and improved performance parameters (CR, MSE, PSNR, and CoC) are obtained. The experimental results have been compared with the scaling, Maxshift, implicit, and EBCOT methods on ultrasound medical images, and the proposed algorithm is found to give better results.

17.
刘伟  杨圣 《测控技术》2006,25(5):30-32
A region-of-interest compression method for medical images based on combining JPEG and JPEG2000 is proposed. The method applies lossless JPEG2000 compression to a manually selected region of interest of a medical image and high-ratio JPEG compression to the remaining regions, which largely resolves the conflict between high compression ratio and high quality in medical images. In a compression experiment on a human-brain MRI image, a compression ratio of 12:1 was obtained while the image information of the lesion region remained intact.
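Neither codec is reimplemented here; the sketch below only shows the split-encode-recombine control flow, using Pillow's JPEG encoder at two quality settings as a stand-in for the lossless-JPEG2000/lossy-JPEG pair (an assumption; all identifiers are illustrative).

```python
import io
from PIL import Image

def roi_compress(img, box, q_roi=95, q_bg=20):
    """Encode the region of interest and the background at different
    qualities, then recombine; the paper uses lossless JPEG2000 for
    the ROI rather than high-quality JPEG as done here."""
    def jpeg_roundtrip(im, q):
        buf = io.BytesIO()
        im.save(buf, format='JPEG', quality=q)
        buf.seek(0)
        return Image.open(buf).convert(img.mode)
    bg = jpeg_roundtrip(img, q_bg)               # whole frame, high compression
    roi = jpeg_roundtrip(img.crop(box), q_roi)   # ROI, near-pristine
    bg.paste(roi, box[:2])                       # recombine for reconstruction
    return bg

# hypothetical usage on a grey-level MRI slice:
# img = Image.open('mri_slice.png').convert('L')
# out = roi_compress(img, box=(64, 64, 192, 192))
```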

18.
To address the defects of the LBG algorithm when the initial codebook is selected at random (empty cells, easy entrapment in local minima, and large iteration counts), a cascaded fuzzy-clustering/LBG algorithm for vector quantization codebook training is introduced on the basis of fuzzy clustering theory: a codebook is first trained with a fuzzy clustering algorithm, the resulting codebook is taken as the initial codebook of the traditional LBG algorithm, and training then continues with the traditional LBG algorithm. The principle and method of the combined fuzzy-clustering/LBG algorithm are discussed; the algorithm was used to separately train speech linear...

19.
An Image Compression Algorithm with Adjustable Distortion Based on a Spatial-Domain Transform   Cited by 4 (self-citations: 1, others: 4)
颜彬  陈传波 《计算机应用》2002,22(11):14-17
An image compression algorithm with adjustable distortion based on a spatial-domain transform is proposed. During pixel merging, a maximum mean-square-deviation threshold and a granularity difference value control the generation of pixel nodes and structure nodes, producing the quadtree structure and the pixel data table. By adjusting the specific values of the threshold and the difference (0-40), different compression ratios are achieved, varying continuously from lossless compression to high-ratio lossy compression. At the same PSNR, the compressed image occupies less space than JPEG compression, and at the same compression ratio, the image quality is higher than with JPEG compression. The algorithm also has good time complexity O(N/3) and space complexity O(N), and can be used for the compression and transmission of images of arbitrary size.
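A sketch of the quadtree construction this algorithm is built on: a block becomes a leaf when its deviation falls at or below the threshold, so threshold 0 reproduces lossless behaviour and larger values trade fidelity for compression, mirroring the 0-40 range above. Variance is used as the deviation measure here, an assumption, and all names are illustrative.

```python
import numpy as np

def quadtree(img, x, y, size, var_thresh, leaves):
    """Recursively split a square block until it is uniform enough;
    each leaf is stored as (x, y, size, mean)."""
    block = img[y:y + size, x:x + size]
    if size == 1 or block.var() <= var_thresh:
        leaves.append((x, y, size, block.mean()))    # pixel/structure node
        return
    h = size // 2
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):  # four quadrants
        quadtree(img, x + dx, y + dy, h, var_thresh, leaves)

def reconstruct(leaves, shape):
    out = np.empty(shape)
    for x, y, size, mean in leaves:
        out[y:y + size, x:x + size] = mean
    return out

# raising var_thresh from 0 upward moves continuously from lossless
# toward high-ratio lossy compression on a power-of-two image
img = np.random.rand(256, 256)
leaves = []
quadtree(img, 0, 0, 256, var_thresh=0.02, leaves=leaves)
rec = reconstruct(leaves, img.shape)
```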

20.
Recently, vector quantization (VQ) has received considerable attention and has become an effective tool for image compression. It provides a high compression ratio and a simple decoding process. However, studies on the practical implementation of VQ have revealed some major difficulties such as edge integrity and codebook design efficiency. After reviewing the state of the art in the field of vector quantization, we focus on iterative and non-iterative codebook generation algorithms.
