Similar Articles
19 similar articles found.
1.
An Efficient Fuzzy-Clustering-Based Initial Codebook Generation Algorithm
Codebook design is crucial in vector quantization, and most codebook design algorithms depend on an initial codebook. Starting from the shortcomings of the classic LBG algorithm, an efficient initial codebook generation algorithm based on fuzzy clustering is proposed. By spreading the initial codevectors widely across the input vector space and placing them, as far as possible, in regions of high input probability density, the subsequent LBG algorithm avoids getting trapped in local optima, yields a codebook of better quality that is closer to the global optimum, converges faster, and requires fewer iterations. Experiments applying the algorithm to image coding show that it effectively improves vector quantization performance in terms of both efficiency and quality.
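As an illustration of the refinement stage this abstract builds on, the following is a minimal numpy sketch of LBG iteration starting from a supplied initial codebook. The fuzzy-clustering initialization itself is not reproduced; the stopping rule, function name, and the randomly spread starting codebook in the usage comment are assumptions for illustration only.

```python
import numpy as np

def lbg_refine(training, codebook, max_iter=50, eps=1e-4):
    """Standard LBG refinement starting from a given initial codebook
    (e.g. one spread over the input space by a separate initialization step)."""
    prev = np.inf
    for _ in range(max_iter):
        # Nearest-neighbour assignment under squared Euclidean distance.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        dist = d[np.arange(len(training)), labels].mean()
        # Centroid update; empty cells keep their previous codevector.
        for k in range(len(codebook)):
            members = training[labels == k]
            if len(members):
                codebook[k] = members.mean(0)
        if prev - dist < eps * dist:   # stop when relative improvement is small
            break
        prev = dist
    return codebook, dist

# Usage (illustrative): refine a randomly spread initial codebook.
# X = np.random.default_rng(0).normal(size=(2000, 4))
# cb0 = X[np.random.default_rng(1).choice(len(X), 16, replace=False)].copy()
# cb, d = lbg_refine(X, cb0)
```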

2.
李霆  王东进  刘发林 《电讯技术》2007,47(1):151-153
A vector quantization codebook design algorithm is obtained by combining a genetic algorithm with the LBG algorithm. The global optimization capability of the genetic algorithm is used to obtain an optimal vector quantization codebook, while the slow convergence of conventional genetic algorithms is overcome. Experimental results show that the proposed algorithm outperforms the LBG algorithm and converges faster.

3.
Application of the Particle Pair Algorithm to Image Vector Quantization
纪震  廖惠连  许文焕  姜来 《电子学报》2007,35(10):1916-1920
This paper presents a new optimization method for image vector quantization codebook design: the particle pair algorithm. Building on the conventional Particle Swarm Optimization (PSO) algorithm, two particles form a small-population particle pair that searches the codebook space for the best codebook. In each iteration, the particle pair sequentially performs the PSO velocity update, the position update, and the standard LBG algorithm, and replaces out-of-range codewords with training vectors of large error. The algorithm keeps particles from getting stuck at locally optimal codebooks, records and estimates the best movement direction and history of each codeword fairly accurately, and allocates codewords sensibly between dense and sparse regions of the training vectors, moving the whole codebook toward the globally optimal solution. Experimental results show that the algorithm consistently and stably outperforms the FKM, FRLVQ, and FRLVQ-FVQ algorithms, largely resolves the sensitivity of vector quantization to the initial codebook, and has considerable advantages in computation time and convergence speed.
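The sketch below only illustrates the general PSO-plus-LBG hybrid idea described in this abstract, with each particle holding a whole codebook and one LBG pass applied after every move. The inertia and acceleration constants, the fitness definition, and the omission of the out-of-range codeword replacement step are simplifying assumptions, not the authors' exact scheme.

```python
import numpy as np

def pso_lbg_vq(training, n_code=16, n_particles=2, iters=100,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Tiny PSO swarm over whole codebooks, each move followed by one LBG pass."""
    rng = np.random.default_rng(seed)
    # Each particle is a complete codebook (n_code x dim matrix).
    pos = training[rng.choice(len(training), (n_particles, n_code), replace=False)]
    vel = np.zeros_like(pos)

    def distortion(cb):
        d = ((training[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        return d.min(1).mean()

    def lbg_pass(cb):
        d = ((training[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(cb)):
            members = training[labels == k]
            if len(members):
                cb[k] = members.mean(0)
        return cb

    pbest = pos.copy()
    pbest_f = np.array([distortion(p) for p in pos])
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(2)
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            pos[i] = lbg_pass(pos[i] + vel[i])   # PSO move, then one LBG refinement pass
            f = distortion(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i].copy(), f
                if f < gbest_f:
                    gbest, gbest_f = pos[i].copy(), f
    return gbest, gbest_f
```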

4.
To address the defect of the LBG algorithm that the choice of initial codebook strongly affects both the convergence speed of codebook training and the quality of the final codebook, a particle-swarm-based vector quantization codebook design algorithm is proposed. An initial codebook with a degree of global coverage is generated first, and the LBG algorithm is then applied to refine it into a codebook that also captures local characteristics. Experimental results validate the soundness of the algorithm.

5.
李殷  李飞 《电视技术》2012,36(17):26-29
Since the classic LBG codebook design algorithm easily falls into local optima, quantum-behaved particle swarm optimization is applied to image vector quantization codebook design, yielding a quantum-behaved PSO codebook design algorithm (QPSO-VQ). In this algorithm a particle represents a codebook, the peak signal-to-noise ratio (PSNR) serves as the fitness function, and the codebook is updated with the QPSO update equations. Experimental results show that, compared with the classic LBG algorithm and PSO-based codebook design, QPSO-VQ has clear advantages in the PSNR of decoded images and in the stability of the algorithm, producing codebooks of better quality.
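A small sketch of using the PSNR of the reconstructed blocks as a fitness value, as this abstract describes; the function name and the 8-bit peak value of 255 are assumptions.

```python
import numpy as np

def psnr_fitness(image_blocks, codebook, peak=255.0):
    """PSNR of the VQ-reconstructed blocks, used as a fitness value (higher is better).
    Assumes 8-bit pixel data and a nonzero reconstruction error."""
    d = ((image_blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    recon = codebook[d.argmin(1)]          # nearest codevector for each block
    mse = np.mean((image_blocks - recon) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```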

6.
Speech recognition has been widely applied in communications, control, and other fields. To address the drawbacks of the LBG algorithm in vector quantization for isolated-word speech recognition, namely its sensitivity to the initial codebook, its tendency to fall into local optima, and its weak generalization, an immune particle swarm optimization algorithm (IPSO) is combined with the LBG algorithm for clustering, yielding an IPSO-LBG codebook design method that is applied in an isolated-word speech recognition system based on discrete hidden Markov models (DHMM). Experiments comparing it against a DHMM isolated-word recognition system using the conventional LBG algorithm show that the improved system achieves better recognition rates and adaptability.

7.
The quality of the LSF (line spectral frequency) codebook has an important effect on synthesized speech quality. The classic LBG algorithm easily falls into local optima, while some existing evolutionary codebook algorithms search a large space with limited efficiency. This paper proposes a new hybrid evolutionary codebook optimization algorithm based on stretching transformations of the LSF vector space. Its encoding space has the same dimension as the vectors and is relatively small, which simplifies the optimization operations. The mutation operation from evolutionary programming (EP) is introduced to control the PSO position and velocity vectors, improving the efficiency of the optimization search. Experimental results show that the algorithm effectively improves codebook performance.

8.
A Vector Quantization Codebook Design Algorithm Based on Artificial Ant Colony Optimization
李霞  罗雪晖  张基宏 《电子学报》2004,32(7):1082-1085
This paper proposes a new vector quantization codebook design algorithm based on artificial ant colony optimization. It exploits the mechanism by which ants in an artificial ant colony system find optimal paths through deposited pheromone, combined with the behavior of a single ant picking up and putting down objects to form clusters, and carefully designs the put-down probability, the tabu list, the pheromone update rule, and the corresponding parameters. Compared with codebook design algorithms based on evolutionary simulated annealing and stochastic competitive learning, the proposed algorithm obtains codebooks of better quality, improving peak signal-to-noise ratio by more than 2 dB over the conventional LBG algorithm.

9.
罗雪晖  李霞  张基宏 《通信学报》2005,26(9):135-139
A vector quantization codebook design algorithm based on a hybrid ant colony algorithm is proposed. The algorithm first adaptively adjusts the parameter that truncates the transition probability, strengthening the ant colony search for the optimal solution; it then takes the ant colony search result as the initial solution and applies an improved LBG algorithm for further search, accelerating convergence. Experimental results show that the algorithm not only greatly improves codebook performance but also shortens the running time, and the decoded images achieve high subjective and objective quality.

10.
Simulated-Annealing Linked-Split Vector Quantization of LSF Parameters
鲍长春  卓力  王永会 《电子学报》2001,29(1):127-129
Based on simulated annealing, this paper proposes a globally optimal simulated-annealing linked-split vector quantization (SA-LSVQ) scheme. Experiments show that, compared with the LBG method, the algorithm effectively improves LSVQ performance.

11.
Recently, C.D. Bei and R.M. Gray (1985) used a partial distance search algorithm that reduces the computational complexity of the minimum distortion encoding for vector quantization. The effect of ordering the codevectors on the computational complexity of the algorithm is studied. It is shown that the computational complexity of this algorithm can be reduced further by ordering the codevectors according to the sizes of their corresponding clusters.
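A sketch of the two ingredients mentioned in this abstract: ordering codevectors by cluster population, and the partial distance search that abandons a candidate as soon as its running distance exceeds the current best. The function names and the squared Euclidean distance are illustrative choices.

```python
import numpy as np

def order_by_cluster_size(training, codebook):
    """Sort codevectors by descending cluster population so likely matches are tried first."""
    d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    counts = np.bincount(d.argmin(1), minlength=len(codebook))
    return codebook[np.argsort(-counts)]

def pds_encode(x, ordered_codebook):
    """Partial distance search: stop accumulating a codevector's distance once it
    exceeds the best distance found so far."""
    best_i, best_d = 0, np.inf
    for i, c in enumerate(ordered_codebook):
        dist = 0.0
        for xj, cj in zip(x, c):
            dist += (xj - cj) ** 2
            if dist >= best_d:          # early exit on the partial distance
                break
        else:                           # completed all dimensions with dist < best_d
            best_i, best_d = i, dist
    return best_i, best_d
```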

12.
A pseudo-Gray code is an assignment of n-bit binary indexes to 2^n points in a Euclidean space so that the Hamming distance between two points corresponds closely to the Euclidean distance. Pseudo-Gray coding provides a redundancy-free error protection scheme for vector quantization (VQ) of analog signals when the binary indexes are used as channel symbols on a discrete memoryless channel and the points are signal codevectors. Binary indexes are assigned to codevectors in a way that reduces the average quantization distortion introduced in the reproduced source vectors when a transmitted index is corrupted by channel noise. A globally optimal solution to this problem is generally intractable due to an inherently large computational complexity. A locally optimal solution, the binary switching algorithm, is introduced, based on the objective of minimizing a useful upper bound on the average system distortion. The algorithm yields a significant reduction in average distortion, and converges in reasonable running times. The use of pseudo-Gray coding is motivated by the increasing need for low-bit-rate VQ-based encoding systems that operate on noisy channels, such as in mobile radio speech communications.
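The binary switching algorithm itself is not spelled out in the abstract; the sketch below only shows the general idea of greedy pairwise index swapping against a channel cost. The single-bit-error cost model, the uniform source distribution, and the power-of-two codebook size are simplifying assumptions.

```python
import numpy as np
from itertools import combinations

def channel_cost(codebook, assign, p_err):
    """Expected extra distortion when one index bit flips on a binary-symmetric channel
    (single-error approximation, uniform source weighting)."""
    n_bits = int(np.log2(len(codebook)))       # assumes a power-of-two codebook size
    inv = np.argsort(assign)                   # binary index -> codevector row
    cost = 0.0
    for idx in range(len(codebook)):
        for b in range(n_bits):
            neigh = idx ^ (1 << b)             # index received after one bit flip
            diff = codebook[inv[idx]] - codebook[inv[neigh]]
            cost += p_err * (diff ** 2).sum()
    return cost

def binary_switching(codebook, p_err=0.01, sweeps=5):
    """Greedy pairwise swapping of index assignments; a simplified illustrative take."""
    assign = np.arange(len(codebook))          # assign[row] = binary index of that codevector
    best = channel_cost(codebook, assign, p_err)
    for _ in range(sweeps):
        improved = False
        for i, j in combinations(range(len(codebook)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            c = channel_cost(codebook, assign, p_err)
            if c < best:
                best, improved = c, True
            else:
                assign[i], assign[j] = assign[j], assign[i]   # revert the swap
        if not improved:
            break
    return assign, best
```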

13.
Li  Z. Lu  Z.-M. 《Electronics letters》2008,44(2):104-105
A new fast approach to the nearest codevector search for 3D mesh compression using an orthonormal transformed codebook is proposed. The algorithm uses the coefficients of an input vector along a set of orthonormal bases as the criteria to reject impossible codevectors. Compared to the full search algorithm, a great deal of computational time is saved without extra distortion and additional storage requirement. Simulation results demonstrate the effectiveness of the proposed algorithm in terms of computational complexity.

14.
The design of the optimal codebook for a given codebook size and input source is a challenging puzzle that remains to be solved. The key problem in optimal codebook design is how to construct a set of codevectors efficiently to minimize the average distortion. A minimax criterion of minimizing the maximum partial distortion is introduced in this paper. Based on the partial distortion theorem, it is shown that minimizing the maximum partial distortion and minimizing the average distortion will asymptotically have the same optimal solution corresponding to equal and minimal partial distortion. Motivated by the result, we incorporate the alternative minimax criterion into the on-line learning mechanism, and develop a new algorithm called minimax partial distortion competitive learning (MMPDCL) for optimal codebook design. A computation acceleration scheme for the MMPDCL algorithm is implemented using the partial distance search technique, thus significantly increasing its computational efficiency. Extensive experiments have demonstrated that compared with some well-known codebook design algorithms, the MMPDCL algorithm consistently produces the best codebooks with the smallest average distortions. As the codebook size increases, the performance gain becomes more significant using the MMPDCL algorithm. The robustness and computational efficiency of this new algorithm further highlight its advantages.
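The exact MMPDCL update rule is not given in the abstract. The sketch below only illustrates the underlying idea of on-line competitive learning combined with periodic equalization of partial distortions (splitting the cell with the largest accumulated partial distortion and dropping the one with the smallest); the learning-rate schedule and the rebalancing interval are arbitrary assumptions.

```python
import numpy as np

def mmpd_competitive_learning(training, n_code=16, epochs=10,
                              lr0=0.1, rebalance_every=2, seed=0):
    """Competitive learning that pushes partial distortions toward equality (illustrative)."""
    rng = np.random.default_rng(seed)
    codebook = training[rng.choice(len(training), n_code, replace=False)].copy()
    for ep in range(epochs):
        lr = lr0 * (1.0 - ep / epochs)                  # decaying learning rate (assumption)
        partial = np.zeros(n_code)                      # accumulated partial distortions
        for x in rng.permutation(training):
            d = ((codebook - x) ** 2).sum(1)
            w = d.argmin()                              # winning codevector
            codebook[w] += lr * (x - codebook[w])       # move winner toward the sample
            partial[w] += d[w]
        if (ep + 1) % rebalance_every == 0:
            hi, lo = partial.argmax(), partial.argmin()
            # Split the cell with the largest partial distortion, drop the smallest one.
            codebook[lo] = codebook[hi] + 1e-3 * rng.standard_normal(codebook.shape[1])
    return codebook
```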

15.
A vector quantization (VQ) scheme with finite memory called dynamic finite-state vector quantization (DFSVQ) is presented. The encoder consists of a large codebook, the so-called super-codebook, where for each input vector a fixed number of its codevectors are chosen to generate a much smaller codebook (sub-codebook). This sub-codebook represents the best matching codevectors that could be found in the super-codebook for encoding the current input vector. The choice of the codevectors in the sub-codebook is based on the information obtained from the previously encoded blocks, where directional conditional block probability (histogram) matrices are used in the selection of the codevectors. The index of the best matching codevector in the sub-codebook is transmitted to the receiver. An adaptive DFSVQ scheme is also proposed in which, when encoding an input vector, first the sub-codebook is searched for a matching codevector to satisfy a pre-specified waveform distortion. If such a codevector is not found in the current sub-codebook then the whole super-codebook is checked for a better match. If a better match is found then a signaling flag along with the corresponding index of the codevector is transmitted to the receiver. Both the DFSVQ encoder and its adaptive version are implemented. Experimental results for several monochrome images with a super-codebook size of 256 or 512 and different sub-codebook sizes are presented.

16.
The Hadamard transform: a tool for index assignment
We show that the channel distortion for maximum-entropy encoders, due to noise on a binary-symmetric channel, is minimized if the vector quantizer can be expressed as a linear transform of a hypercube. The index assignment problem is regarded as a problem of linearizing the vector quantizer. We define classes of index assignments with related properties, within which the best index assignment is found by sorting, not searching. Two powerful algorithms for assigning indices to the codevectors of nonredundant coding systems are presented. One algorithm finds the optimal solution in terms of linearity, whereas the other finds a very good, but suboptimal, solution in a very short time.

17.
In this paper, two efficient codebook searching algorithms for vector quantization (VQ) are presented. The first fast search algorithm utilizes the compactness property of signal energy on the transform domain and the geometrical relations between the input vector and every codevector to eliminate those codevectors that have no chance of being the closest codeword to the input vector. It achieves full-search-equivalent performance. As compared with other fast methods of the same kind, this algorithm requires the fewest multiplications and the least total number of distortion measurements. Then, a suboptimal searching method, which sacrifices the reconstructed signal quality to speed up the search of the nearest neighbor, is presented. This algorithm performs the search process on predefined small sub-codebooks instead of the whole codebook for the closest codevector. Experimental results show that this method not only needs less CPU time to encode an image but also incurs less loss of reconstructed signal quality than tree-structured VQ does.
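As a rough illustration of the second, suboptimal method, the sketch below keys small sub-codebooks on the codevector mean (standing in for the first transform coefficient) and searches only the sub-codebook matching the input vector. The quantile-based binning and the fallback to a full search when a bin is empty are assumptions, not the paper's construction.

```python
import numpy as np

def build_subcodebooks(codebook, n_bins=8):
    """Partition the codebook into small sub-codebooks keyed by codevector mean."""
    means = codebook.mean(1)
    edges = np.quantile(means, np.linspace(0, 1, n_bins + 1)[1:-1])
    keys = np.digitize(means, edges)
    return edges, [np.flatnonzero(keys == b) for b in range(n_bins)]

def sub_search(x, codebook, edges, bins):
    """Search only the sub-codebook whose key matches the input vector (suboptimal but fast)."""
    b = int(np.digitize(x.mean(), edges))
    cand = bins[b]
    if len(cand) == 0:                      # degenerate bin: fall back to a full search
        cand = np.arange(len(codebook))
    d = ((codebook[cand] - x) ** 2).sum(1)
    return cand[d.argmin()]                 # index of the chosen codevector
```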

18.
A joint design scheme has been proposed to optimize the source encoder and the modulation signal constellation based on the minimization of the end-to-end distortion including both the quantization error and channel distortion. The proposed scheme first optimizes the vector quantization (VQ) codebook for a fixed modulation signal set, and then the modulation signals for the fixed VQ codebook. These two steps are iteratively repeated until they reach a local optimum solution. It has been shown that the performance of the proposed system can be enhanced by employing a new efficient mapping scheme between codevectors and modulation signals. Simulation results show that a jointly optimized system based on the proposed algorithms outperforms the conventional system based on a conventional quadrature amplitude modulation signal set and the VQ codebook designed for a noiseless channel.

19.
An efficient method is presented for increasing the speed of the clustering process for channel-optimised vector quantisation (COVQ), in which the Hamming distances between the indices of codevectors are utilised. Simulation results demonstrate that with a small increase in pre-processing and memory costs, the training time of the new algorithm has been reduced significantly while the encoding quality remains unchanged.
