Similar Documents
20 similar documents found (search time: 9 ms)
1.
In this paper, we develop a necessary and sufficient condition for a local minimum of the vector quantization problem to be a global minimum, and present a competitive learning algorithm based on this condition. The algorithm has two learning terms: the first regulates the force of attraction between the synaptic weight vectors and the input patterns in order to reach a local minimum, while the second regulates the repulsion between the synaptic weight vectors and the inputs' gravity center to favor convergence to the global minimum. This algorithm leads to optimal or near-optimal solutions and allows the network to escape from local minima during training. Experimental results in image compression demonstrate that it outperforms the simple competitive learning algorithm, giving better codebooks.
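A minimal sketch of the two-term update described in the abstract: the winning weight vector is attracted to the input pattern and repelled from the inputs' gravity center. Function and parameter names (`lr`, `repulsion`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_term_competitive_step(weights, x, center, lr=0.05, repulsion=0.01):
    """One update of a two-term competitive rule: the winner is attracted
    to the input x and repelled from the data's gravity center.
    Parameter names and values are illustrative, not from the paper."""
    # winner-takes-all: closest weight vector to the input
    w = np.argmin(np.linalg.norm(weights - x, axis=1))
    # attraction toward the input pattern (drives a local minimum)
    weights[w] += lr * (x - weights[w])
    # repulsion away from the inputs' gravity center (helps escape local minima)
    weights[w] += repulsion * (weights[w] - center)
    return weights, w
```

The repulsion term is deliberately small relative to the attraction term, so the net step still reduces the winner's distance to the input.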

2.
A self-creating network effective in learning vector quantization, called RCN (Representation-burden Conservation Network), is developed. Each neuron in RCN is characterized by a measure of representation-burden. Conservation is achieved by bounding the summed representation-burden of all neurons at a constant 1, as the representation-burden values of all neurons are updated after each input presentation. We show that RCN effectively fulfills the conscience principle [1] and achieves biologically plausible self-development capability. In addition, conservation in representation-burden facilitates systematic derivation of the learning parameters, including an adaptive learning-rate control useful for accelerating convergence as well as improving node utilization. Because it is smooth and incremental, RCN can overcome the stability-plasticity dilemma. Simulation results show that RCN displays superior performance over other competitive learning networks in minimizing the quantization error.
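The conservation idea can be sketched as follows: after each input presentation the winner's representation-burden grows and all burdens are renormalized so they always sum to 1. The `gain` parameter and the exact update form are assumptions for illustration; the paper's actual update rule is not reproduced here.

```python
import numpy as np

def update_burdens(burdens, winner, gain=0.1):
    """Hypothetical sketch of representation-burden conservation:
    the winner's burden is increased, then all burdens are renormalized
    so their sum stays bounded at the constant 1."""
    burdens = burdens.copy()
    burdens[winner] += gain          # winner accumulates burden
    return burdens / burdens.sum()   # conservation: total stays 1
```

Because the total is conserved, a frequently winning neuron's burden grows only at the expense of the others, which is how a conscience-style mechanism can be driven from these values.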

3.
Typical competitive learning algorithms are studied and analyzed, and a probability-sensitive competitive learning (PSCL) algorithm based on each neuron's winning probability is proposed. Unlike traditional competitive learning, in which only the single winning neuron is updated, PSCL lets every neuron learn to a different degree according to its winning probability and an adjusted distortion distance, which effectively overcomes the neuron under-utilization problem.

4.
A speaker recognition system combining vector quantization and neural networks    Cited by 2 (self-citations: 0)
李战明, 王贞. Computer Engineering and Applications, 2006, 42(15): 204-206, 230
The basic concepts of speaker recognition systems are introduced. After analyzing the traditional VQ model and neural network models, a speaker recognition system model combining VQ with a neural network is proposed. The system model is built from the extracted feature parameters (MFCC), and experiments show that the model's performance remains stable over time.

5.
Neural-network-based vector quantization algorithms for speech coding are discussed. A neural network vector quantization algorithm can reduce the codebook dimensionality and speed up the codebook search, thereby improving the quantization. Applying this optimized vector quantization algorithm to speech coding reduces computational complexity and improves coding quality.

6.
A distortion competitive learning algorithm for vector quantization    Cited by 7 (self-citations: 0)
A distortion competitive learning (DCL) algorithm is proposed. The algorithm builds on the equidistortion principle from Gersho's asymptotic theory of vector quantization distortion: as the codebook size tends to infinity, the sub-distortions of all regions become equal. This principle is used as a necessary condition for optimal codebook design and combined with the two classical necessary conditions, so that the optimal codebook is designed from three necessary conditions: (1) the nearest-neighbour rule; (2) the centroid condition; (3) approximately equal sub-distortion in each region. The implementation of the algorithm introduces …
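The per-region sub-distortions that the equidistortion principle compares can be computed directly: assign each sample to its nearest codeword and sum the squared errors within each region. This is a generic sketch of that quantity, not the DCL algorithm itself.

```python
import numpy as np

def region_distortions(codebook, data):
    """Per-region sub-distortion D_i: the summed squared error of the
    samples assigned (nearest-neighbour rule) to each codeword. The
    equidistortion principle says the D_i should be roughly equal for
    a near-optimal codebook."""
    # pairwise distances: samples x codewords
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    return np.array([np.sum((data[assign == i] - c) ** 2)
                     for i, c in enumerate(codebook)])
```

A design loop in the spirit of the abstract would compare these sub-distortions and steer the competition so that over-loaded regions shed distortion to under-loaded ones.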

7.
Vector quantization based on self-organizing feature map neural networks    Cited by 7 (self-citations: 0)
In recent years, many researchers have successfully applied Kohonen's self-organizing feature map (SOFM) neural network to vector quantization (VQ) for image compression coding. Compared with the traditional LBG algorithm, the SOFM-based algorithm has two main drawbacks: heavy computation and poorer codebook quality. To improve codebook quality, the weight-adjustment method of the basic SOFM algorithm is modified; to reduce the computational load, a fast search algorithm is adopted when determining the winning neuron. The improved algorithm is applied to VQ codebook design, and the resulting codebook is used for image …

8.
龚成, 卢冶, 代素蓉, 刘方鑫, 陈新伟, 李涛. Journal of Software, 2021, 32(8): 2391-2407
Deep neural network (DNN) quantization is an efficient model-compression method that uses a small number of bits to represent the parameters and intermediate results in a model's computation. The bit-width directly affects memory footprint, computational efficiency, and energy consumption. Previous work on model quantization lacks effective quantitative analysis, which makes quantization loss hard to predict. An ultra-low-loss DNN quantization method (ultra-low loss …) is proposed.

9.
We present a new approach based on neural networks to solve the merging-strategy problem for Cross-Lingual Information Retrieval (CLIR). Besides the language-barrier issues in CLIR systems, how to merge ranked lists containing documents in different languages from several text collections is also critical. We propose a merging strategy based on competitive learning that obtains a single document ranking by merging the individual lists of separately retrieved documents. The main contribution of the paper is to show the effectiveness of the Learning Vector Quantization (LVQ) algorithm in solving the merging problem. To investigate the effect of varying the number of codebook vectors, we carried out several experiments with different values of this parameter. The results demonstrate that the LVQ algorithm is a good alternative merging strategy.
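For reference, the standard LVQ1 update that underlies this kind of supervised codebook training looks as follows. This is the textbook rule, not the paper's specific merging setup; the function and parameter names are illustrative.

```python
import numpy as np

def lvq1_step(codebook, labels, x, y, lr=0.1):
    """One LVQ1 update: the nearest codebook vector moves toward the
    labeled input if its class matches, and away from it otherwise.
    Standard LVQ1 rule; names are illustrative."""
    w = np.argmin(np.linalg.norm(codebook - x, axis=1))
    sign = 1.0 if labels[w] == y else -1.0
    codebook[w] += sign * lr * (x - codebook[w])
    return codebook, w
```

In a merging application, the codebook vectors would encode score/rank features of documents and the labels their relevance, but that mapping is specific to the paper's experimental setup.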

10.
A distortion-selection competitive learning algorithm is proposed, which introduces the selection mechanism of genetic algorithms into vector quantizer design. While competitive learning reduces the expected distortion, the selection mechanism adjusts the sub-distortion of each region to improve the expected distortion further. Experimental results show that the algorithm balances the sub-distortions of the regions well and overcomes local optima.

11.
Self-Organizing Maps and Learning Vector Quantization for Feature Sequences    Cited by 2 (self-citations: 0)
The Self-Organizing Map (SOM) and Learning Vector Quantization (LVQ) algorithms are constructed in this work for variable-length and warped feature sequences. The novelty is to associate an entire feature-vector sequence, instead of a single feature vector, as the model of each SOM node. Dynamic time warping is used to obtain time-normalized distances between sequences of different lengths. Starting from random initialization, ordered feature-sequence maps then ensue, and Learning Vector Quantization can be used to fine-tune the prototype sequences for optimal class separation. The resulting SOM models, the prototype sequences, can then be used for the recognition as well as the synthesis of patterns. Good results have been obtained in speaker-independent speech recognition.
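The time-normalized distance between sequences of different lengths can be computed with the classic dynamic-time-warping recursion, sketched below with a Euclidean local cost. This is the generic DTW distance, not the paper's full SOM/LVQ training procedure.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two feature-vector
    sequences of possibly different lengths (Euclidean local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # best of match, insertion, deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

With such a distance in place, the winning node for an input sequence is the one whose prototype sequence minimizes the DTW distance, and updates are applied along the warping path.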

12.
Quantization is the main approach to compressing convolutional neural networks and accelerating their inference. Most existing methods quantize all layers to the same bit-width, whereas mixed-precision quantization can achieve higher accuracy at the same compression ratio; finding a mixed-precision quantization policy, however, is difficult. To address this, a reinforcement-learning-based mixed truncated quantization method for convolutional neural networks is proposed: reinforcement learning searches for the mixed-precision policy, and the weight data are truncated according to the found policy before quantization, further improving the accuracy of the quantized network. The Top-1 accuracy of ResNet18/50 and MobileNet-V2 before and after quantization with this method was measured on ImageNet, and the mAP of YOLOV3 before and after quantization on COCO. Compared with HAQ and ZeroQ, the Top-1 accuracy of MobileNet-V2 quantized to 4 bits improves by 2.7% and 0.3%, respectively; compared with uniform per-layer quantization, the mAP of YOLOV3 quantized to 6 bits improves by 2.6%.

13.
Vector quantization image compression coding based on Kohonen's self-organizing feature map (SOFM) neural network is a highly efficient method, but its codeword utilization is uneven: some neurons never win, producing the well-known "dead neuron" problem. Among methods that aim to let each neuron win with a roughly balanced probability and avoid dead neurons, Kohonen SOFM-C is representative: it preserves the topology-preserving mapping while most effectively avoiding dead neurons, making it a competitive learning method "with a conscience". Exploiting SOFM-C's more balanced codeword utilization, and addressing the weakness of SOFM's method of updating weights in the winning neuron's neighbourhood, this paper proposes an auxiliary-neuron self-organizing map algorithm based on SOFM-C. The method is open: new effective algorithm modules can be added at any time to achieve better results. The vector quantization algorithm is applied in the wavelet-transform domain to obtain a better codebook. Simulation results show that the method outperforms existing SOFM methods.

14.
A parallel learning approach for neural network ensembles    Cited by 23 (self-citations: 0)
This paper analyzes how the generalization error of the member networks in a neural network ensemble, and the diversity among them, affect the generalization error of the ensemble, and proposes a parallel learning method for neural network ensembles. For the member networks, a parallel training method is given that satisfies not only each network's own accuracy requirement but also its diversity requirement with respect to the other members. A parallel method for determining the ensemble weights of the member networks is also given. Experimental results show that the proposed member-training and ensemble methods can build effective neural network ensemble systems.

15.
A rule learning algorithm based on neural network ensembles    Cited by 8 (self-citations: 0)
Combining neural network ensembles with rule learning, a rule learning algorithm based on neural network ensembles is proposed. The algorithm uses a neural network ensemble as the front end of rule learning, generating the data set from which rules are then learned. Experimental results on UCI machine learning data sets show that the algorithm can produce rules with very strong generalization ability.

16.
A new vector quantization method (LBG-U), closely related to a particular class of neural network models (growing self-organizing networks), is presented. LBG-U consists mainly of repeated runs of the well-known LBG algorithm. Each time LBG converges, however, a novel measure of utility is assigned to each codebook vector. Thereafter, the vector with minimum utility is moved to a new location, LBG is run on the resulting modified codebook until convergence, another vector is moved, and so on. Since a strictly monotonic improvement of the LBG-generated codebooks is enforced, it can be proved that LBG-U terminates in a finite number of steps. Experiments with artificial data demonstrate significant improvements in terms of RMSE over LBG, at only modestly higher computational cost.
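The inner loop that LBG-U repeats is the standard LBG (generalized Lloyd) iteration: nearest-neighbour partition followed by a centroid update. The sketch below shows only that inner step; the utility measure and the relocation rule are the paper's contribution and are not reproduced here.

```python
import numpy as np

def lbg_iteration(codebook, data):
    """One LBG (generalized Lloyd) iteration: assign each sample to its
    nearest codeword, then replace each codeword by the centroid of its
    region. LBG-U runs this to convergence, then relocates the
    minimum-utility codeword (utility measure omitted here)."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    new = codebook.copy()
    for i in range(len(codebook)):
        members = data[assign == i]
        if len(members):                 # leave empty cells unchanged
            new[i] = members.mean(axis=0)
    return new
```

Each such iteration never increases the total distortion, which is the monotonicity that LBG-U's termination proof builds on.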

17.
Fractal image compression coding with nonlinear spatial-geometry contraction    Cited by 2 (self-citations: 0)
Based on the classical linear-mean spatial-geometry contraction algorithm, a nonlinear spatial-geometry contraction algorithm is proposed. Experiments show that the algorithm not only increases the compression ratio but also improves the signal-to-noise ratio to some extent.

18.
In a recent publication [1], it was shown that a biologically plausible RCN (Representation-burden Conservation Network), in which conservation is achieved by bounding the summed representation-burden of all neurons at a constant 1, is effective in learning stationary vector quantization. Based on the conservation principle, a new approach for designing a dynamic RCN for processing both stationary and non-stationary inputs is introduced in this paper. We show that, in response to changes in the input statistics, dynamic RCN improves on its original counterpart in incremental learning capability as well as in self-organizing the network structure. Performance comparisons between dynamic RCN and other self-development models are also presented. Simulation results show that dynamic RCN is very effective in training a near-optimal vector quantizer, in that it manages to keep a balance between the equiprobable and equidistortion criteria.

19.
We propose a new algorithm for vector quantization, Activity Equalization Vector quantization (AEV). It is based on the winner-takes-all rule, with additional supervision of the average node activities over a training interval and a subsequent re-positioning of the nodes with low average activity. The re-positioning aims both at exploration of the data space and at better approximation of already discovered data clusters through an equalization of the node activities. We introduce a learning scheme for AEV that requires only the data's bounding box as prior knowledge. Using an example of Martinetz et al. [1], AEV is compared with the Neural Gas, Frequency Sensitive Competitive Learning (FSCL), and other standard algorithms. It turns out to converge much faster and requires less computational effort.
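The average node activity that AEV supervises can be measured as the fraction of inputs each node wins over a training interval, as sketched below. The re-positioning rule applied to low-activity nodes is the paper's contribution and is not shown.

```python
import numpy as np

def win_frequencies(codebook, data):
    """Average node activity over a training interval: the fraction of
    inputs each node wins under the winner-takes-all rule. AEV would
    re-position nodes whose average activity is low."""
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    wins = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return wins / len(data)
```

Equalizing these frequencies across nodes is the same goal that conscience-style competitive learning pursues, here enforced by relocation rather than by biasing the competition.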

20.
Research on an LVQ-based software project risk assessment model    Cited by 2 (self-citations: 1)
Based on 16 types of risk, a new software project risk assessment model is built. The 16 risks of each past software project are treated as a 16×1 column vector and used as training vectors for an LVQ neural network, which performs cluster analysis. Project risk is finally classified into three levels (very low, medium, and very high), and the model predicts the risk level of a project.
