19 similar documents found; search time: 109 ms
1.
2.
3.
This paper presents a new optimization method for image vector quantization codebook design: the particle-pair algorithm. Building on the conventional Particle Swarm Optimization (PSO) algorithm, a small swarm consisting of just two particles (a particle pair) searches the codebook space for the optimal codebook. In each iteration, the particle pair sequentially performs the velocity and position updates of PSO followed by the standard LBG algorithm, and replaces out-of-range codewords with training vectors of larger error. The algorithm keeps particles from becoming trapped in locally optimal codebooks, records and estimates the best movement direction and history of each codeword more accurately, and allocates codewords sensibly between dense and sparse regions of the training vectors, moving the overall codebook toward the global optimum. Experimental results show that the algorithm consistently and markedly outperforms the FKM, FRLVQ, and FRLVQ-FVQ algorithms, largely resolving the sensitivity of vector quantization to the initial codebook, with considerable advantages in computation time and convergence speed.
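The velocity and position updates that each particle of the pair performs before the LBG refinement follow the generic PSO scheme. A minimal sketch of that step is below; the inertia and acceleration parameters (`w`, `c1`, `c2`) are illustrative defaults, not the values used in the paper.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One generic PSO update: pull the particle toward its personal
    best (pbest) and the global best (gbest) with random weights,
    then move it by the new velocity."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

A particle that already sits at both its personal and global best has zero attraction and, with zero velocity, stays put.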
4.
To address the drawback of the LBG algorithm that the choice of initial codebook strongly affects both the convergence speed of codebook training and the quality of the final codebook, a particle-swarm-based vector quantization codebook design algorithm is proposed. An initial codebook with good global properties is generated first, and the LBG algorithm is then applied to refine it so that the result also captures local characteristics. Experimental results confirm the soundness of the algorithm.
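The LBG refinement step that recurs throughout these abstracts alternates nearest-codevector partitioning with centroid updates. A minimal sketch, with an assumed empty-cell repair (reseeding from a random training vector) that the abstracts do not specify:

```python
import numpy as np

def lbg(training, codebook, iters=20, seed=0):
    """Generalized Lloyd / LBG iteration on a training set."""
    rng = np.random.default_rng(seed)
    codebook = codebook.astype(float).copy()
    for _ in range(iters):
        # Assign each training vector to its nearest codevector.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        # Replace each codevector by the centroid of its cell.
        for k in range(len(codebook)):
            cell = training[assign == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
            else:
                # Empty cell: reseed from a random training vector.
                codebook[k] = training[rng.integers(len(training))]
    return codebook
```

Starting from a poor initial codebook, the iteration converges to the local structure of the data, which is exactly the sensitivity to initialization these papers try to remove.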
5.
Since the classical LBG codebook design algorithm is prone to local optima, quantum-behaved particle swarm optimization is applied to image vector quantization codebook design, yielding a quantum-PSO-based codebook design algorithm (QPSO-VQ). In this algorithm each particle represents a codebook, the peak signal-to-noise ratio (PSNR) serves as the fitness function, and codebooks are updated via the QPSO update equations. Experimental results show that, compared with the classical LBG algorithm and PSO-based codebook design, QPSO-VQ has clear advantages in the PSNR of decoded images and in algorithmic stability, producing codebooks of better quality.
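The PSNR fitness used here is the standard definition; a minimal sketch (the 8-bit peak value of 255 is an assumption for grayscale images):

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """PSNR in dB between an original image and its VQ-decoded
    reconstruction; higher is better, so it can serve directly as a
    particle's fitness value."""
    mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```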
6.
Speech recognition is widely used in communications, control, and other fields. To overcome the sensitivity of the LBG algorithm to the initial codebook in vector quantization for isolated-word recognition, together with its tendency toward local optima and weak generalization, the immune particle swarm optimization (IPSO) algorithm is combined with the LBG algorithm for clustering, yielding an IPSO-LBG codebook design method, which is applied in an isolated-word speech recognition system based on discrete hidden Markov models (DHMM). Experiments show that, compared with a DHMM isolated-word recognition system using the conventional LBG algorithm, the improved system achieves better recognition rates and adaptability.
7.
8.
9.
10.
11.
Recently, C.D. Bei and R.M. Gray (1985) used a partial distance search algorithm that reduces the computational complexity of minimum-distortion encoding for vector quantization. The effect of ordering the codevectors on the computational complexity of the algorithm is studied. It is shown that the computational complexity of this algorithm can be reduced further by ordering the codevectors according to the sizes of their corresponding clusters.
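The partial distance search idea is simple to state in code: accumulate the squared distance dimension by dimension and abandon a codevector as soon as the running sum exceeds the best distance found so far. In this sketch, `order` lists codevector indices in the visiting order, e.g. sorted by descending cluster size as the abstract suggests:

```python
def pds_encode(x, codebook, order):
    """Minimum-distortion encoding with partial distance search."""
    best_idx, best_d = -1, float("inf")
    for j in order:
        d = 0.0
        for t in range(len(x)):
            d += (x[t] - codebook[j][t]) ** 2
            if d >= best_d:          # partial distance already too large
                break
        else:
            best_idx, best_d = j, d  # completed the loop: new best
    return best_idx, best_d
```

Visiting large clusters first makes an early, likely-good `best_d` available, so later codevectors are rejected after fewer accumulated dimensions.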
12.
A pseudo-Gray code is an assignment of n-bit binary indexes to 2^n points in a Euclidean space so that the Hamming distance between two points corresponds closely to the Euclidean distance. Pseudo-Gray coding provides a redundancy-free error protection scheme for vector quantization (VQ) of analog signals when the binary indexes are used as channel symbols on a discrete memoryless channel and the points are signal codevectors. Binary indexes are assigned to codevectors in a way that reduces the average quantization distortion introduced in the reproduced source vectors when a transmitted index is corrupted by channel noise. A globally optimal solution to this problem is generally intractable due to an inherently large computational complexity. A locally optimal solution, the binary switching algorithm, is introduced, based on the objective of minimizing a useful upper bound on the average system distortion. The algorithm yields a significant reduction in average distortion, and converges in reasonable running times. The use of pseudo-Gray coding is motivated by the increasing need for low-bit-rate VQ-based encoding systems that operate on noisy channels, such as in mobile radio speech communications.
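One sketch of the idea, not the exact published algorithm: score an index assignment by the distortion incurred when any single bit of an index flips (a simplification of the bound in the abstract, assuming equiprobable codevectors), then greedily swap pairs of indices while any swap lowers that score.

```python
import itertools
import numpy as np

def channel_cost(codebook, assign, n_bits):
    """Total distortion over all single-bit index flips: index i is
    received as i with one bit inverted, so codevector assign[i] is
    reproduced as assign[i ^ (1 << b)]."""
    cost = 0.0
    for i in range(len(codebook)):
        for b in range(n_bits):
            j = i ^ (1 << b)
            cost += np.sum((codebook[assign[i]] - codebook[assign[j]]) ** 2)
    return cost

def binary_switching(codebook, n_bits):
    """Greedy index reassignment: accept any pairwise swap that lowers
    the channel cost, then rescan until no swap helps."""
    assign = list(range(len(codebook)))
    improved = True
    while improved:
        improved = False
        base = channel_cost(codebook, assign, n_bits)
        for i, j in itertools.combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            if channel_cost(codebook, assign, n_bits) < base:
                improved = True
                break                # accept the swap and rescan
            assign[i], assign[j] = assign[j], assign[i]
    return assign
```

Like the binary switching algorithm, this converges to a local optimum of the chosen bound; the global problem remains intractable.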
13.
A new fast approach to the nearest codevector search for 3D mesh compression using an orthonormal transformed codebook is proposed. The algorithm uses the coefficients of an input vector along a set of orthonormal bases as the criteria to reject impossible codevectors. Compared to the full search algorithm, a great deal of computational time is saved without extra distortion and additional storage requirement. Simulation results demonstrate the effectiveness of the proposed algorithm in terms of computational complexity.
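The rejection criterion rests on the fact that an orthonormal transform preserves Euclidean distances, so the gap in any single coefficient already lower-bounds the full distance. A minimal sketch of that idea (using only the first coefficient as the rejection test; the paper's choice of basis and number of coefficients may differ):

```python
import numpy as np

def transform_search(x, codebook, basis):
    """Nearest-codevector search in an orthonormal-transformed domain.
    basis is an orthogonal matrix; distances are preserved, and
    (tx[0] - c[0])**2 <= ||tx - c||**2 justifies the early rejection."""
    tx = basis @ x
    tcb = codebook @ basis.T          # transform every codevector
    best_idx, best_d = -1, float("inf")
    for j, c in enumerate(tcb):
        if (tx[0] - c[0]) ** 2 >= best_d:
            continue                  # cannot beat the current best
        d = np.sum((tx - c) ** 2)
        if d < best_d:
            best_idx, best_d = j, d
    return best_idx
```

Because the bound is exact, the result matches a full search, consistent with the abstract's claim of no extra distortion.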
14.
Ce Zhu, Lai-Man Po. IEEE Transactions on Image Processing, 1998, 7(10): 1400-1409
The design of the optimal codebook for a given codebook size and input source is a challenging puzzle that remains to be solved. The key problem in optimal codebook design is how to construct a set of codevectors efficiently to minimize the average distortion. A minimax criterion of minimizing the maximum partial distortion is introduced in this paper. Based on the partial distortion theorem, it is shown that minimizing the maximum partial distortion and minimizing the average distortion will asymptotically have the same optimal solution corresponding to equal and minimal partial distortion. Motivated by the result, we incorporate the alternative minimax criterion into the on-line learning mechanism, and develop a new algorithm called minimax partial distortion competitive learning (MMPDCL) for optimal codebook design. A computation acceleration scheme for the MMPDCL algorithm is implemented using the partial distance search technique, thus significantly increasing its computational efficiency. Extensive experiments have demonstrated that compared with some well-known codebook design algorithms, the MMPDCL algorithm consistently produces the best codebooks with the smallest average distortions. As the codebook size increases, the performance gain becomes more significant using the MMPDCL algorithm. The robustness and computational efficiency of this new algorithm further highlight its advantages.
15.
A vector quantization (VQ) scheme with finite memory called dynamic finite-state vector quantization (DFSVQ) is presented. The encoder consists of a large codebook, the so-called super-codebook, from which, for each input vector, a fixed number of codevectors are chosen to generate a much smaller codebook (sub-codebook). This sub-codebook represents the best matching codevectors that could be found in the super-codebook for encoding the current input vector. The choice of the codevectors in the sub-codebook is based on information obtained from the previously encoded blocks, where directional conditional block probability (histogram) matrices are used in the selection of the codevectors. The index of the best matching codevector in the sub-codebook is transmitted to the receiver. An adaptive DFSVQ scheme is also proposed in which, when encoding an input vector, the sub-codebook is first searched for a matching codevector satisfying a pre-specified waveform distortion. If such a codevector is not found in the current sub-codebook, then the whole super-codebook is checked for a better match. If a better match is found, then a signaling flag along with the corresponding index of the codevector is transmitted to the receiver. Both the DFSVQ encoder and its adaptive version are implemented. Experimental results for several monochrome images with a super-codebook size of 256 or 512 and different sub-codebook sizes are presented.
16.
The Hadamard transform: a tool for index assignment
P. Knagenhjelm, E. Agrell. IEEE Transactions on Information Theory, 1996, 42(4): 1139-1151
We show that the channel distortion for maximum-entropy encoders, due to noise on a binary-symmetric channel, is minimized if the vector quantizer can be expressed as a linear transform of a hypercube. The index assignment problem is regarded as a problem of linearizing the vector quantizer. We define classes of index assignments with related properties, within which the best index assignment is found by sorting, not searching. Two powerful algorithms for assigning indices to the codevectors of nonredundant coding systems are presented. One algorithm finds the optimal solution in terms of linearity, whereas the other finds a very good, but suboptimal, solution in a very short time.
17.
In this paper, two efficient codebook searching algorithms for vector quantization (VQ) are presented. The first fast search algorithm exploits the compactness of signal energy in the transform domain and the geometrical relations between the input vector and each codevector to eliminate codevectors that have no chance of being the closest codeword to the input vector. It achieves performance equivalent to a full search. Compared with other fast methods of the same kind, this algorithm requires the fewest multiplications and the fewest distortion computations in total. Then, a suboptimal searching method, which sacrifices reconstructed signal quality to speed up the nearest-neighbor search, is presented. This algorithm performs the search over predefined small subcodebooks instead of the whole codebook. Experimental results show that this method not only needs less CPU time to encode an image but also incurs less loss of reconstructed signal quality than tree-structured VQ does.
18.
Jong-Ki Han, Hyung-Myung Kim. IEEE Transactions on Communications, 2001, 49(5): 816-825
A joint design scheme has been proposed to optimize the source encoder and the modulation signal constellation based on the minimization of the end-to-end distortion including both the quantization error and channel distortion. The proposed scheme first optimizes the vector quantization (VQ) codebook for a fixed modulation signal set, and then the modulation signals for the fixed VQ codebook. These two steps are iteratively repeated until they reach a local optimum solution. It has been shown that the performance of the proposed system can be enhanced by employing a new efficient mapping scheme between codevectors and modulation signals. Simulation results show that a jointly optimized system based on the proposed algorithms outperforms the conventional system based on a conventional quadrature amplitude modulation signal set and the VQ codebook designed for a noiseless channel.
19.
Jong-Ki Han, Hyung-Myung Kim. Electronics Letters, 1999, 35(16): 1305-1306
An efficient method is presented for increasing the speed of the clustering process for channel-optimised vector quantisation (COVQ), in which the Hamming distances between the indices of codevectors are utilised. Simulation results demonstrate that with a small increase in pre-processing and memory costs, the training time of the new algorithm is reduced significantly while the encoding quality remains unchanged.
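The quantity this speed-up reuses is fixed once the index width is chosen, so it can be tabulated during pre-processing. An illustrative sketch of that table (the paper's exact use of it during clustering is not specified in the abstract):

```python
def hamming_table(n_bits):
    """Precompute pairwise Hamming distances between all 2**n_bits
    index pairs; i ^ j isolates the differing bits, whose count is
    the Hamming distance."""
    n = 1 << n_bits
    return [[bin(i ^ j).count("1") for j in range(n)] for i in range(n)]
```

The table costs O(4**n_bits) memory, which is the "small increase in pre-processing and memory costs" traded for faster training.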