Similar Literature
 19 similar documents were retrieved.
1.
A Neural Network for Image Coding and Its Improved Algorithm (total citations: 2; self-citations: 0; by others: 2)
This paper uses Kohonen's self-organizing feature map (SOFM) neural network to design codebooks for image vector quantization, studies the basic properties of the network and the implementation of its learning algorithm, and proposes improvements to the learning algorithm. Experimental results show that the SOFM network can effectively construct codebooks for image vector quantization; the algorithm is simple, fast to implement, and yields good coding performance.
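A minimal sketch of SOFM-based codebook training is given below; it illustrates the general approach rather than the authors' exact algorithm. The block dimension, codebook size, and learning-rate/neighborhood schedules are assumptions.

```python
import numpy as np

def train_sofm_codebook(blocks, codebook_size=256, epochs=10, seed=0):
    """Train a 1-D Kohonen SOFM whose neuron weights serve as a VQ codebook.

    blocks : (N, D) array of image block vectors (e.g. flattened 4x4 patches).
    """
    rng = np.random.default_rng(seed)
    # Initialize neuron weights from randomly chosen training vectors.
    codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].astype(float)
    positions = np.arange(codebook_size)                          # 1-D map topology

    for epoch in range(epochs):
        lr = 0.5 * (0.01 / 0.5) ** (epoch / max(epochs - 1, 1))   # decaying learning rate
        sigma = max(codebook_size / 4 / (epoch + 1), 1.0)         # shrinking neighborhood width
        for x in blocks[rng.permutation(len(blocks))]:
            winner = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))   # best-matching neuron
            h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))  # neighborhood function
            codebook += lr * h[:, None] * (x - codebook)          # pull winner and its neighbors
    return codebook
```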

2.
Vector quantization is briefly introduced and the principles of codebook design and codeword search are described. The advantages of the adaptive resonance theory (ART) network over ordinary competitive networks are analyzed, namely that the ART network overcomes the "stability/plasticity dilemma" of ordinary competitive networks. The general structure and learning algorithm of ART neural networks are summarized. A codebook design algorithm based on the ART neural network is proposed, and the relevant network parameters are determined. Codebook design based on the ART neural network gives better results than ordinary competitive networks.
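The following heavily simplified, ART-inspired sketch shows vigilance-controlled codebook growth: a training vector either refines its best-matching codeword or, if the match fails the vigilance test, creates a new codeword. The cosine similarity measure, vigilance threshold, and update rule are assumptions, not the network described in the paper.

```python
import numpy as np

def art_like_codebook(vectors, vigilance=0.9, lr=0.2, max_codewords=512):
    """Grow a codebook ART-style: commit a new codeword when no existing one
    passes the vigilance test, otherwise nudge the winner toward the input."""
    codebook = [vectors[0].astype(float)]
    for x in vectors[1:]:
        sims = [np.dot(x, c) / (np.linalg.norm(x) * np.linalg.norm(c) + 1e-12)
                for c in codebook]                     # cosine similarity as the match score
        j = int(np.argmax(sims))
        if sims[j] >= vigilance:                       # resonance: refine the existing codeword
            codebook[j] += lr * (x - codebook[j])
        elif len(codebook) < max_codewords:            # mismatch: allocate a new codeword
            codebook.append(x.astype(float))
    return np.array(codebook)
```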

3.
An Improved Self-Organizing Feature Map Algorithm for Image Compression (total citations: 1; self-citations: 0; by others: 1)
王瑶, 梁科. 《无线电工程》, 2006, 36(12): 18-20
To improve the codebook quality of vector quantization and the learning efficiency of the neural network, an improved self-organizing feature map algorithm is proposed on the basis of an analysis of Kohonen's SOFM algorithm and applied to image vector quantization. The new algorithm introduces a distortion-sensitivity parameter and optimizes the network's learning parameters. Experiments show that at a compression ratio of 51.2, the peak signal-to-noise ratio (PSNR) of the image reconstructed by the new algorithm reaches 34.66 dB, which is 3.57 dB higher than with Kohonen's SOFM algorithm.

4.
The basic concepts of vector quantization (VQ), pattern recognition, and neural networks (NN), and the relationships among the three, are discussed. The emphasis is on NN models and learning algorithms suitable for VQ; comparison results between NN-based VQ and conventional VQ are given, and the significance of and directions for further research on NN-based VQ are pointed out.

5.
Classified Vector Quantization Coding of Images Using Neural Networks (total citations: 3; self-citations: 0; by others: 3)
Vector quantization, as an effective image data compression technique, is attracting increasing attention. The classical LBG algorithm for designing vector quantizers is computationally complex, which limits the practicality of vector quantization. This paper discusses a vector quantization technique based on edge-feature classification and implemented with neural networks. Exploiting the sensitivity of the human visual system to image edges, pattern recognition techniques are applied before coding to classify the image content by edge features, and each class is then vector quantized separately. Apart from feature extraction, which uses the discrete cosine transform, both the image classification and the vector quantization are performed by neural networks.
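As a rough illustration of the classify-then-quantize idea, the sketch below labels image blocks from their low-order DCT coefficients; the abstract's neural classifier is replaced here by a simple energy rule, and the threshold and class set are assumptions.

```python
import numpy as np
from scipy.fft import dctn

def classify_block(block, edge_thresh=50.0):
    """Label a square image block by its dominant DCT energy.
    block: 2-D array, e.g. a 4x4 patch of pixel values."""
    c = dctn(block.astype(float), norm='ortho')
    horiz_freq = np.abs(c[0, 1:]).sum()    # energy along the first row  -> vertical edges
    vert_freq = np.abs(c[1:, 0]).sum()     # energy along the first column -> horizontal edges
    if max(horiz_freq, vert_freq) < edge_thresh:
        return 'smooth'
    return 'vertical_edge' if horiz_freq > vert_freq else 'horizontal_edge'

# Blocks would then be routed to a per-class vector quantizer, for example:
# (train_vq is a stand-in for any codebook trainer, e.g. the SOFM sketch above)
# codebooks = {label: train_vq([b.ravel() for b in blocks if classify_block(b) == label])
#              for label in ('smooth', 'horizontal_edge', 'vertical_edge')}
```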

6.
A speech recognition method based on dynamic time warping (DTW) and an improved learning vector quantization (LoPLVQ) neural network is proposed. The method first time-aligns the speech signal with the DTW algorithm and then classifies and recognizes the speech with the improved learning vector quantization neural network. Experiments show that, for large-scale speech recognition, the new system not only shortens the training time but also achieves a high recognition rate.
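For reference, a compact dynamic time warping distance is sketched below; it only illustrates the alignment step mentioned in the abstract, and the LVQ classifier itself is not reproduced.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW between two feature sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])    # local frame distance
            D[i, j] = cost + min(D[i - 1, j],             # insertion
                                 D[i, j - 1],             # deletion
                                 D[i - 1, j - 1])         # match
    return D[n, m]
```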

7.
Image Vector Quantization: A Frequency-Sensitive Self-Organizing Feature Map Algorithm (total citations: 17; self-citations: 0; by others: 17)
Implementing image vector quantization with neural networks is a highly effective approach. Based on an analysis of the self-organizing feature map (SOFM) algorithm, this paper proposes a frequency-sensitive self-organizing feature map (FSOFM) algorithm and discusses the optimization of the network's learning and training parameters. Experiments show that the FSOFM algorithm outperforms the SOFM algorithm.
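A bare-bones frequency-sensitive competitive learning update is sketched below, in the spirit of the FSOFM idea: the winner search is biased by how often each codeword has already won, so under-used codewords are eventually pulled into play. The specific fairness function and parameters are assumptions, not those of the paper.

```python
import numpy as np

def fscl_codebook(vectors, codebook_size=256, lr=0.05, epochs=5, seed=0):
    """Frequency-sensitive competitive learning for VQ codebook design."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    wins = np.ones(codebook_size)                          # win counts (start at 1)
    for _ in range(epochs):
        for x in vectors[rng.permutation(len(vectors))]:
            d = ((codebook - x) ** 2).sum(axis=1) * wins   # distortion biased by past usage
            j = int(np.argmin(d))
            codebook[j] += lr * (x - codebook[j])          # move only the winning codeword
            wins[j] += 1
    return codebook
```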

8.
Vector quantization is an efficient lossy compression technique, but its encoding algorithms are difficult to run in real time. To improve the execution efficiency of the encoding algorithm on a PC, this paper starts from an existing, mature, and effective vector quantization algorithm (based on inequality elimination criteria), analyzes how the algorithm can be optimized for the characteristics of Intel CPUs on PCs, and proposes effective optimization methods such as the use of MMX instructions.
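One widely used inequality-based speed-up is the partial distortion search, sketched below in plain Python: the running distortion is compared against the best distance found so far and a codeword is rejected as soon as it cannot win. The MMX/SIMD-level optimizations discussed in the paper are omitted.

```python
def nearest_codeword_pds(x, codebook):
    """Partial distortion search: abandon a codeword once its partial
    squared distance already exceeds the best full distance found so far."""
    best_j, best_d = 0, float('inf')
    for j, c in enumerate(codebook):
        d = 0.0
        for xi, ci in zip(x, c):
            d += (xi - ci) ** 2
            if d >= best_d:          # early rejection (the inequality test)
                break
        else:                        # loop completed: this codeword is the new best
            best_j, best_d = j, d
    return best_j, best_d
```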

9.
Risk quantification in access control systems is uncertain and nonlinear, so solution rules with reliably good results are hard to determine. This paper combines fuzzy theory, artificial neural networks, wavelet analysis, and quantum-behaved particle swarm optimization, and proposes a risk quantification method based on a fuzzy wavelet neural network (Fuzzy WNN). The attribute information of subjects, objects, and so on is evaluated and quantified by fuzzy comprehensive evaluation and used as the input to the wavelet neural network, whose output is the quantified risk value; the training algorithm of the wavelet neural network is also improved and optimized. Simulation results show that the proposed algorithm can effectively quantify the risk of access requests and overcomes the shortcomings of existing quantification methods, such as strong subjective arbitrariness and vague conclusions.

10.
李霆, 王东进, 刘发林. 《电讯技术》, 2007, 47(1): 151-153
A vector quantization codebook design algorithm is obtained by combining a genetic algorithm with the LBG algorithm. The global optimization capability of the genetic algorithm is used to obtain an optimal vector quantization codebook, while the slow convergence of the conventional genetic algorithm is overcome. Experimental results show that the proposed algorithm outperforms the LBG algorithm and converges quickly.
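A toy hybrid in this spirit is sketched below; the population size, uniform crossover over codewords, and a single Lloyd (LBG) pass as local refinement are assumptions, not the authors' exact operators.

```python
import numpy as np

def distortion_and_labels(codebook, data):
    """Total squared distortion of data under the codebook, plus nearest-codeword labels."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).sum(), d.argmin(axis=1)

def lloyd_pass(codebook, data):
    """One LBG/Lloyd step: reassign vectors, recompute codewords as centroids."""
    _, labels = distortion_and_labels(codebook, data)
    new = codebook.copy()
    for k in range(len(codebook)):
        members = data[labels == k]
        if len(members):
            new[k] = members.mean(axis=0)
    return new

def ga_lbg(data, codebook_size=64, pop_size=8, generations=20, seed=0):
    """Genetic search over codebooks with an LBG refinement step as local search."""
    rng = np.random.default_rng(seed)
    population = [data[rng.choice(len(data), codebook_size, replace=False)].astype(float)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [lloyd_pass(cb, data) for cb in population]         # local LBG refinement
        population.sort(key=lambda cb: distortion_and_labels(cb, data)[0])
        parents = population[:pop_size // 2]                             # keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:                   # uniform crossover
            a, b = rng.choice(len(parents), size=2, replace=False)
            mask = rng.random(codebook_size) < 0.5
            children.append(np.where(mask[:, None], parents[a], parents[b]))
        population = parents + children
    return min(population, key=lambda cb: distortion_and_labels(cb, data)[0])
```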

11.
This paper evaluates the performance of an image compression system based on wavelet-based subband decomposition and vector quantization. The images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands. The resulting subbands are vector quantized using the Linde-Buzo-Gray (1980) algorithm and various fuzzy algorithms for learning vector quantization (FALVQ). These algorithms perform vector quantization by updating all prototypes of a competitive neural network through an unsupervised learning process. The quality of the multiresolution codebooks designed by these algorithms is measured on the reconstructed images belonging to the training set used for multiresolution codebook design and the reconstructed images from a testing set.

12.
Vector quantization with complexity costs (total citations: 2; self-citations: 0; by others: 2)
Vector quantization is a data compression method by which a set of data points is encoded by a reduced set of reference vectors: the codebook. A vector quantization strategy is discussed that jointly optimizes distortion errors and the codebook complexity, thereby determining the size of the codebook. A maximum entropy estimation of the cost function yields an optimal number of reference vectors, their positions, and their assignment probabilities. The dependence of the codebook density on the data density for different complexity functions is investigated in the limit of asymptotic quantization levels. How different complexity measures influence the efficiency of vector quantizers is studied for the task of image compression. The wavelet coefficients of gray-level images are quantized, and the reconstruction error is measured. The approach establishes a unifying framework for different quantization methods like K-means clustering and its fuzzy version, entropy constrained vector quantization or topological feature maps, and competitive neural networks.
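The maximum-entropy (soft-assignment) update that underlies this family of methods can be sketched as follows; the complexity-cost term is omitted here, so the step reduces to deterministic-annealing K-means at inverse temperature beta.

```python
import numpy as np

def soft_vq_step(codebook, data, beta):
    """One maximum-entropy (deterministic annealing) update at inverse temperature beta."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)    # (N, K) squared distances
    logits = -beta * d
    logits -= logits.max(axis=1, keepdims=True)                     # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)                               # soft assignment probabilities
    # Re-estimate every codeword as the probability-weighted mean of the data.
    new_codebook = (p.T @ data) / p.sum(axis=0)[:, None]
    return new_codebook, p

# Annealing loop: start with a small beta (soft, nearly uniform assignments)
# and increase it so the assignments gradually harden toward ordinary K-means.
```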

13.
Requantization is a key technology for reducing the bit rate of previously compressed data. When the recompression ratio is high, the requantizer may cause unacceptable quality degradation. To improve the quality of the requantized image, an optimization scheme for the requantization codebook has been proposed. The proposed scheme constructs an optimal requantization codebook in an iterative manner for a given original quantization codebook at the transmitter. The construction of the codebook is repeated iteratively until it reaches a locally optimal solution. Our approach can be applied not only to scalar quantization but also to any method that employs a vector quantization-based system. Simulation results show that the optimized system based on the proposed algorithm outperforms the conventional system designed without consideration of requantization. The proposed algorithm enables reliable image communication over heterogeneous networks.

14.
This paper presents a new technique for designing a jointly optimized residual vector quantizer (RVQ). In the conventional stage-by-stage design procedure, each stage codebook is optimized for that particular stage distortion and does not consider the distortion from the subsequent stages. However, the overall performance can be improved if each stage codebook is optimized by minimizing the distortion from the subsequent stage quantizers as well as the distortion from the previous stage quantizers. This can only be achieved when the stage codebooks are jointly designed for each other. In this paper, the proposed codebook design procedure is based on a multilayer competitive neural network where each layer of this network represents one stage of the RVQ. The weights connecting these layers form the corresponding stage codebooks of the RVQ. The joint design problem of the RVQ's codebooks (the weights of the multilayer competitive neural network) is formulated as a nonlinearly constrained optimization task based on a Lagrangian error function. This Lagrangian error function includes all the constraints that are imposed by the joint optimization of the codebooks. The proposed procedure seeks a locally optimal solution by iteratively solving the equations for this Lagrangian error function. Simulation results show an improvement in the performance of an RVQ when designed using the proposed joint optimization technique as compared to the stage-by-stage design, where both the generalized Lloyd algorithm (GLA) and the Kohonen learning algorithm (KLA) were used to design each stage codebook independently, as well as the conventional joint-optimization technique.
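For orientation, a minimal sketch of how a residual vector quantizer encodes and decodes a vector is given below: each stage quantizes the residual left by the previous stages. The joint, Lagrangian-based training procedure described in the abstract is not reproduced.

```python
import numpy as np

def rvq_encode(x, stage_codebooks):
    """Encode x with a residual VQ: each stage quantizes the remaining residual."""
    indices, residual = [], x.astype(float)
    for cb in stage_codebooks:                                # cb: (K_s, D) array for stage s
        j = int(((cb - residual) ** 2).sum(axis=1).argmin())  # nearest codeword in this stage
        indices.append(j)
        residual = residual - cb[j]                           # pass the residual to the next stage
    return indices

def rvq_decode(indices, stage_codebooks):
    """Reconstruction is simply the sum of the selected stage codewords."""
    return sum(cb[j] for cb, j in zip(stage_codebooks, indices))
```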

15.
This paper presents the development and evaluation of fuzzy vector quantization algorithms. These algorithms are designed to achieve the quality of vector quantizers provided by sophisticated but computationally demanding approaches, while capturing the advantages of the k-means algorithm frequently used in practice, such as speed, simplicity, and conceptual appeal. The uncertainty typically associated with clustering tasks is formulated in this approach by allowing the assignment of each training vector to multiple clusters in the early stages of the iterative codebook design process. A training vector assignment strategy is also proposed for the transition from the fuzzy mode, where each training vector can be assigned to multiple clusters, to the crisp mode, where each training vector can be assigned to only one cluster. Such a strategy reduces the dependence of the resulting codebook on the random initial codebook selection. The resulting algorithms are used in image compression based on vector quantization. This application provides the basis for evaluating the computational efficiency of the proposed algorithms and comparing the quality of the resulting codebook design with that provided by competing techniques.
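A rough sketch of the fuzzy-to-crisp idea follows: early iterations use fuzzy-c-means-style memberships, so every training vector updates every codeword, and later iterations switch to crisp nearest-neighbor assignment. The fuzziness exponent and the fixed switch point are assumptions rather than the paper's transition strategy.

```python
import numpy as np

def fuzzy_then_crisp_vq(data, codebook_size=64, fuzzy_iters=10, crisp_iters=10, m=2.0, seed=0):
    """Codebook design that starts in fuzzy (soft) mode and ends in crisp (hard) mode."""
    rng = np.random.default_rng(seed)
    cb = data[rng.choice(len(data), codebook_size, replace=False)].astype(float)
    for it in range(fuzzy_iters + crisp_iters):
        d = ((data[:, None, :] - cb[None, :, :]) ** 2).sum(-1) + 1e-12   # (N, K) distances
        if it < fuzzy_iters:
            # Fuzzy c-means memberships: each vector belongs to every cluster.
            u = d ** (-1.0 / (m - 1.0))
            u /= u.sum(axis=1, keepdims=True)
            w = u ** m
        else:
            # Crisp mode: hard nearest-neighbor assignment (ordinary K-means step).
            w = np.zeros_like(d)
            w[np.arange(len(data)), d.argmin(axis=1)] = 1.0
        denom = w.sum(axis=0)[:, None]
        cb = np.where(denom > 0, (w.T @ data) / np.maximum(denom, 1e-12), cb)
    return cb
```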

16.
Multiple-input multiple-output (MIMO) wireless systems can achieve significant diversity and array gain by using transmit beamforming and receive combining techniques. In the absence of full channel knowledge at the transmitter, the transmit beamforming vector can be quantized at the receiver and sent to the transmitter using a low-rate feedback channel. In the literature, quantization algorithms for the beamforming vector are designed and optimized for a particular channel distribution, commonly the uncorrelated Rayleigh distribution. When the channel is not uncorrelated Rayleigh, however, these quantization strategies result in a degradation of the receive signal-to-noise ratio (SNR). In this paper, switched codebook quantization is proposed where the codebook is dynamically chosen based on the channel distribution. The codebook adaptation enables the quantization to exploit the spatial and temporal correlation inherent in the channel. The convergence properties of the codebook selection algorithm are studied assuming a block-stationary model for the channel. In the case of a nonstationary channel, it is shown using simulations that the selected codebook tracks the distribution of the channel resulting in improvements in SNR. Simulation results show that in the case of correlated channels, the SNR performance of the link can be significantly improved by adaptation, compared with nonadaptive quantization strategies designed for uncorrelated Rayleigh-fading channels.
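To make the quantization step concrete, the sketch below picks, from the currently active codebook, the unit-norm beamforming vector that maximizes the effective channel gain of a MISO link and feeds back only its index; the codebook-switching rule is indicated only as a comment and is an assumption, not the paper's algorithm.

```python
import numpy as np

def quantize_beamformer(h, codebook):
    """Pick the codebook entry maximizing the beamforming gain |h^H w|^2.

    h        : (Nt,) complex channel vector (MISO link).
    codebook : (L, Nt) complex array of unit-norm beamforming vectors.
    """
    gains = np.abs(codebook.conj() @ h) ** 2      # |h^H w|^2 for every candidate codeword
    idx = int(np.argmax(gains))
    return idx, codebook[idx]                     # only idx is sent over the low-rate feedback link

# Switched-codebook idea (sketch): keep several codebooks, each matched to a
# different long-term channel statistic, and periodically re-select the codebook
# whose entries give the highest average gain over recent channel estimates.
```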

17.
A comparison of several vector quantization codebook generation approaches (total citations: 1; self-citations: 0; by others: 1)
A review and a performance comparison of several often-used vector quantization (VQ) codebook generation algorithms are presented. The codebook generation algorithms discussed include the Linde-Buzo-Gray (LBG) binary-splitting algorithm, the pairwise nearest-neighbor algorithm, the simulated annealing algorithm, and the fuzzy c-means clustering analysis algorithm. A new directed-search binary-splitting method, which reduces the complexity of the LBG algorithm, is presented. Also, a new initial codebook selection method which can obtain a good initial codebook is presented. By using this initial codebook selection algorithm, the overall LBG codebook generation time can be reduced by a factor of 1.5-2.
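For reference, a minimal LBG-with-binary-splitting routine is sketched below; the perturbation size and fixed number of Lloyd iterations are assumptions, and the directed-search and initial-codebook refinements proposed in the paper are not shown.

```python
import numpy as np

def lbg_split(data, target_size=256, lloyd_iters=10, eps=0.01):
    """LBG binary splitting: start from the global centroid, repeatedly split
    every codeword into a perturbed pair, then refine with Lloyd iterations."""
    cb = data.mean(axis=0, keepdims=True).astype(float)
    while len(cb) < target_size:
        cb = np.concatenate([cb * (1 + eps), cb * (1 - eps)])        # split each codeword
        for _ in range(lloyd_iters):                                  # Lloyd refinement
            d = ((data[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            for k in range(len(cb)):
                members = data[labels == k]
                if len(members):
                    cb[k] = members.mean(axis=0)
    return cb
```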

18.
A joint design scheme has been proposed to optimize the source encoder and the modulation signal constellation based on the minimization of the end-to-end distortion including both the quantization error and channel distortion. The proposed scheme first optimizes the vector quantization (VQ) codebook for a fixed modulation signal set, and then the modulation signals for the fixed VQ codebook. These two steps are iteratively repeated until they reach a local optimum solution. It has been shown that the performance of the proposed system can be enhanced by employing a new efficient mapping scheme between codevectors and modulation signals. Simulation results show that a jointly optimized system based on the proposed algorithms outperforms the conventional system based on a conventional quadrature amplitude modulation signal set and the VQ codebook designed for a noiseless channel.

19.
This paper discusses some algorithms to be used for the generation of an efficient and robust codebook for vector quantization (VQ). Some of the algorithms reduce the required codebook size by 4 or even 8 bits to achieve the same level of performance as some of the popular techniques. This helps in greatly reducing the complexity of codebook generation and encoding. We also present a new adaptive tree search algorithm which improves the performance of any product VQ structure. Our results show an improvement of nearly 3 dB over the fixed-rate search algorithm at a bit rate of 0.75 b/pixel.
