Related Literature
17 related documents found.
1.
汪涛  庄新华 《计算机学报》1993,16(2):97-105
This paper proposes an optimization-based learning algorithm for hetero-associative memory models. The performance criteria of the neural network are first converted into an easily controlled cost function, so that determining the weights becomes, in a natural way, a global optimization problem, which is solved by gradient descent. The learning algorithm guarantees that every training pattern becomes a stable attractor of the system, with a basin of attraction that is maximal in the optimization sense. On the theoretical side, the storage capacity of the hetero-associative memory model, the asymptotic stability of the training patterns, and the extent of their basins of attraction are discussed. Computer experiments fully demonstrate the effectiveness of the algorithm.
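The abstract does not reproduce the paper's cost function, so the following is only a minimal sketch of the general recipe it describes: recast storage as minimizing a differentiable cost whose zeros correspond to every training pair being recalled with a positive margin, then descend the gradient. The hinge-style penalty and all names and hyperparameters here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def train_hetero_am(X, Y, lr=0.05, margin=1.0, epochs=500):
    """Gradient-descent storage for a one-layer hetero-associative memory.

    X: (p, n) bipolar inputs, Y: (p, m) bipolar targets. Cost used here:
    0.5 * sum(max(0, margin - y*net)^2). It is zero exactly when every pair
    is recalled with the required margin, so driving it to zero makes each
    training pattern a stable state of the memory.
    """
    W = np.zeros((Y.shape[1], X.shape[1]))
    for _ in range(epochs):
        net = X @ W.T                        # (p, m) pre-activations
        slack = np.maximum(0.0, margin - Y * net)
        if not slack.any():                  # all pairs stored with margin
            break
        W -= lr * (-(slack * Y).T @ X)       # gradient of the hinge cost
    return W

def recall(W, x):
    """One-shot hetero-associative recall."""
    return np.sign(W @ x)
```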

2.
An Enhanced Learning Algorithm for Fuzzy Associative Memory Networks   Cited by: 6 (self-citations: 0, others: 6)
To remedy the shortcomings of the max-min fuzzy associative memory network proposed by Kosko, this paper improves the learning rule for the network's connection weights and gives an alternative weight learning rule, thereby extending Kosko's feedforward fuzzy associative memory into a fuzzy bidirectional associative memory model. On this basis, a fast enhanced fuzzy learning algorithm is derived that can store any given set of multi-valued training pattern pairs. When storing sets of binary pattern pairs, the connection weights take only the values 0 and 1, so the algorithm lends itself to hardware and optical implementation. Experimental results show that the fast enhanced fuzzy learning algorithm is effective.
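For reference, Kosko's original max-min scheme, which the paper sets out to improve, can be sketched as follows. The encoding and recall rules are the standard ones; the array shapes and function names are mine.

```python
import numpy as np

def fam_encode(X, Y):
    """Kosko max-min (correlation-minimum) encoding:
    W[i, j] = max over pairs k of min(X[k, i], Y[k, j]).
    X: (p, n) and Y: (p, m), entries in [0, 1]."""
    return np.max(np.minimum(X[:, :, None], Y[:, None, :]), axis=0)

def fam_recall(W, x):
    """Max-min composition recall: y[j] = max over i of min(x[i], W[i, j])."""
    return np.max(np.minimum(x[:, None], W), axis=0)
```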

3.
梁学斌  吴立德 《软件学报》1996,7(Z1):267-272
Based on the idea that the basins of attraction of the stored patterns in an associative memory should be kept balanced in size, this paper proposes a minimax criterion for designing Hopfield associative memory networks: the symmetric connection-weight matrix should be designed so that the smallest basin of attraction among all stored patterns is maximized. A fast learning algorithm is proposed first, and then a heuristic iterative learning algorithm, called the constrained perceptron optimization learning algorithm, is developed. Extensive experimental results demonstrate the superiority of the proposed learning algorithms.
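A rough illustration of the minimax idea, hedged: the paper's constrained perceptron algorithm is not given in the abstract, so this sketch simply keeps the weight matrix symmetric and repeatedly nudges up the smallest per-unit stability margin, a crude proxy for enlarging the smallest basin of attraction.

```python
import numpy as np

def minimax_hopfield(X, lr=0.02, epochs=3000):
    """Heuristic sketch: find the (pattern, unit) pair with the smallest
    stability margin x_i * (W x)_i and raise it with a perceptron-style
    step, applied to row i and column i together so W stays symmetric.
    X: (p, n) bipolar patterns. (Illustrative, not the paper's exact rule.)
    """
    n = X.shape[1]
    W = X.T @ X / float(n)                 # Hebbian outer-product start
    np.fill_diagonal(W, 0.0)
    for _ in range(epochs):
        margins = X * (X @ W)              # (p, n) per-unit margins
        k, i = np.unravel_index(np.argmin(margins), margins.shape)
        x = X[k]
        delta = lr * x[i] * x              # perceptron step for worst unit
        W[i, :] += delta                   # symmetric update: row i and
        W[:, i] += delta                   # column i move together
        np.fill_diagonal(W, 0.0)           # keep zero self-connections
    return W
```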

4.
This paper proposes a genetic-algorithm-based learning algorithm for bit-wise weighted bidirectional associative memory (BAM) neural networks. From sufficient conditions for the stability of stored patterns and the fault tolerance of a BAM network, an optimization objective function for the bit-wise weighting coefficients is derived, and a genetic algorithm for solving this objective function is then given. Computer experiments on storing and recalling binary image patterns show that the proposed method effectively improves the network's storage capacity and fault tolerance.
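A hedged sketch of the overall pipeline: neither the exact objective function nor the bit-wise weighting scheme is given in the abstract, so the toy below weights whole pattern pairs rather than individual bits (a simplifying assumption) and uses the number of stably stored pairs as the fitness of a plain real-coded GA.

```python
import numpy as np

rng = np.random.default_rng(0)

def stable_pairs(a, X, Y):
    """Fitness: number of pattern pairs that are fixed points of a BAM whose
    correlation matrix is built with weighting coefficients a, one per stored
    pair (a simplification of the paper's bit-wise weighting)."""
    W = sum(a[k] * np.outer(X[k], Y[k]) for k in range(len(a)))  # (n, m)
    ok = 0
    for x, y in zip(X, Y):
        if (np.array_equal(np.sign(x @ W), y)
                and np.array_equal(np.sign(W @ y), x)):
            ok += 1
    return ok

def ga_weights(X, Y, pop=30, gens=100, sigma=0.2):
    """Plain real-coded GA: keep the better half, breed the rest by blending
    two elite parents plus Gaussian mutation. (Illustrative only.)"""
    p = len(X)
    P = rng.uniform(0.5, 1.5, size=(pop, p))
    for _ in range(gens):
        fit = np.array([stable_pairs(a, X, Y) for a in P])
        elite = P[np.argsort(fit)[::-1]][:pop // 2]
        kids = [0.5 * (elite[rng.integers(len(elite))]
                       + elite[rng.integers(len(elite))])
                + rng.normal(0.0, sigma, p)
                for _ in range(pop - len(elite))]
        P = np.vstack([elite, kids])
    fit = np.array([stable_pairs(a, X, Y) for a in P])
    return P[np.argmax(fit)]
```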

5.
A Design Method for Feedforward Masking Associative Memory Models with Desired Fault-Tolerance Domains   Cited by: 2 (self-citations: 0, others: 2)
The synthesis problem for associative memories remains an open and difficult one. Using the general feedforward network and the ordering learning algorithm previously proposed by the authors, this paper presents a method for designing feedforward masking associative memory models with desired fault-tolerance domains. The method solves, in general terms, the synthesis problem of associative memory over an information space, allowing the designed model to realize arbitrarily specified fault-tolerance domains for the stored patterns.

6.
Complex Morphological Bidirectional Associative Memory Networks and Their Performance Analysis   Cited by: 1 (self-citations: 0, others: 1)
Based on the complex ring (C, ∨, ∧, +), a complex morphological bidirectional associative memory (BAM) model is proposed. Conditions under which the network achieves complete two-way association are given, and the storage capacity, stability, and convergence of auto-association, as well as the network's noise immunity, are analyzed. When the imaginary parts are zero the model reduces to the real case, so the real-domain morphological BAM is a special case of the complex-domain one. Simulation experiments verify the effectiveness of the complex morphological BAM.
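The zero-imaginary-part special case mentioned at the end coincides with the standard real-valued morphological memory, which can be sketched as follows (Ritter-style min-of-differences encoding with max-plus recall; the complex ordering used in the paper is not reproduced here).

```python
import numpy as np

def morph_encode(X, Y):
    """Real-valued morphological encoding (the zero-imaginary special case):
    W[i, j] = min over pairs k of (Y[k, i] - X[k, j]).
    X: (p, n) inputs, Y: (p, m) targets."""
    return np.min(Y[:, :, None] - X[:, None, :], axis=0)

def morph_recall(W, x):
    """Max-plus ('dilation') recall: y[i] = max over j of (W[i, j] + x[j]).
    For a single stored pair this reproduces y exactly."""
    return np.max(W + x[None, :], axis=1)
```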

7.
A Learning Algorithm for Discrete Hopfield Networks Based on Arbitrarily Given Training Sets   Cited by: 2 (self-citations: 0, others: 2)
孟祥武  程虎 《软件学报》1998,9(3):213-216
This paper proposes a learning algorithm for discrete Hopfield associative memories that increases the dimensionality of the training samples and can therefore store any given set of training patterns. Experimental results confirm the method's effectiveness.
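The abstract does not spell out the augmentation scheme, so the following is a hypothetical illustration of the general idea only, not the paper's construction: each pattern is extended with a distinct bipolar tag before Hebbian storage, and at recall the unknown tag units start at zero and are filled in by the dynamics.

```python
import numpy as np

def store_augmented(X):
    """Hypothetical sketch: extend each of the p patterns with a p-component
    bipolar tag (+1 in its own slot, -1 elsewhere), then apply Hebbian
    storage in the augmented space. The tags decorrelate patterns that the
    plain Hopfield rule could not store together."""
    p, n = X.shape
    tags = -np.ones((p, p))
    np.fill_diagonal(tags, 1.0)
    Z = np.hstack([X, tags])               # (p, n + p) augmented patterns
    W = Z.T @ Z / float(n + p)
    np.fill_diagonal(W, 0.0)
    return W

def recall_augmented(W, x, n_iter=50):
    """Start with the tag units at 0, run synchronous updates, and return
    the first n visible components."""
    n = x.shape[0]
    z = np.concatenate([x, np.zeros(W.shape[0] - n)])
    for _ in range(n_iter):
        z = np.sign(W @ z)
        z[z == 0] = 1.0                    # break ties to +1
    return z[:n]
```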

8.
This paper analyzes the learning and recall processes of bidirectional associative memory (BAM) in detail. In the learning phase, an adaptive asymmetric BAM algorithm is applied first, followed by a repeated-memorization algorithm that sets an impression threshold; the relationship between the impression threshold and the basins of attraction of the samples is proved theoretically, providing the theoretical basis for the repeated-memorization method. In the recall phase, a running equation with a nonzero threshold function is adopted and a threshold learning method is proposed; it is proved that adopting the nonzero-threshold running equation further enlarges the basins of attraction. To further increase the network's information storage capacity, a parallel BAM structure is introduced. Together, these techniques greatly improve the storage capacity and error-correction ability of BAM networks.
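The running equation with nonzero thresholds mentioned above can be sketched as alternating thresholded updates between the two layers. The adaptive asymmetric learning rule and the impression-threshold training loop are not reproduced, and all names below are illustrative.

```python
import numpy as np

def sgn(v):
    """Sign with ties broken to +1, so states stay bipolar."""
    s = np.sign(v)
    s[s == 0] = 1.0
    return s

def bam_recall(W, theta_x, theta_y, x, n_iter=50):
    """Two-way recall with nonzero thresholds: alternate
    y = sgn(W^T x - theta_y) and x = sgn(W y - theta_x) until both layers
    stop changing. W: (n, m), theta_x: (n,), theta_y: (m,)."""
    y = sgn(W.T @ x - theta_y)
    for _ in range(n_iter):
        x_new = sgn(W @ y - theta_x)
        y_new = sgn(W.T @ x_new - theta_y)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                          # reached a stable recalled pair
        x, y = x_new, y_new
    return x, y
```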

9.
Continuous-Time Associative Memory Neural Networks Based on Constraint Regions   Cited by: 2 (self-citations: 2, others: 0)
陶卿  方廷健  孙德敏 《计算机学报》1999,22(12):1253-1258
Traditional associative memory neural network models design the weights from the desired memory points. This paper proposes a constraint-region-based neural network model, likewise designed from the memory points, which guarantees that the set of asymptotically stable equilibria coincides with the set of sample patterns, that the equilibria which are not asymptotically stable are exactly the practical rejection states, and that the basins of attraction are reasonably distributed. The model has both learning and forgetting abilities, offers large storage capacity, and admits circuit implementation, making it an ideal associative memory.

10.
The Effect of Training-Pattern Perturbations on Fuzzy Morphological Associative Memory Networks   Cited by: 1 (self-citations: 1, others: 0)
The two classes of morphological associative memory networks studied by many researchers have nearly identical storage capacity and resistance to erosive/dilative noise. This study finds, however, that the two classes differ greatly in robustness to perturbations of the training patterns: one class is highly robust to such perturbations while the other is not. This conclusion offers guidance for choosing learning algorithms for morphological associative memories and for setting the precision requirements of training-pattern acquisition equipment, and it serves as a caution for the pattern-acquisition stage.

11.
We present a study of generalised Hopfield networks for associative memory. By analysing the radius of attraction of a stable state, the Object Perceptron Learning Algorithm (OPLA) and the OPLA scheme are proposed to store a set of sample patterns (vectors) in a generalised Hopfield network with radii of attraction as large as we require. OPLA modifies a set of weights and a threshold in a way similar to the perceptron learning algorithm. The simulation results show that the OPLA scheme is more effective for associative memory than both the sum-of-outer-product scheme with a Hopfield network and the weighted sum-of-outer-product scheme with an asymmetric Hopfield network.
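OPLA itself is not spelled out in the abstract; the following is a sketch of the familiar perceptron-with-margin scheme it resembles, in which weights and thresholds are corrected per unit until a margin kappa is met, and a larger kappa leaves more room for bit flips and hence a larger radius of attraction. Parameter names are assumptions.

```python
import numpy as np

def perceptron_store(X, kappa=1.0, lr=0.1, max_epochs=1000):
    """Store bipolar patterns X (p, n) so that every unit of every pattern
    satisfies the margin condition x_i * ((W x)_i - t_i) >= kappa. The
    threshold t is trained like an extra weight on a constant input."""
    p, n = X.shape
    W = np.zeros((n, n))
    t = np.zeros(n)
    for _ in range(max_epochs):
        stable = True
        for x in X:
            bad = x * (W @ x - t) < kappa    # units violating the margin
            if bad.any():
                stable = False
                W[bad] += lr * np.outer(x[bad], x)
                t[bad] -= lr * x[bad]
                np.fill_diagonal(W, 0.0)     # keep zero self-connections
        if stable:
            break                            # all margins met
    return W, t
```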

12.
The paper offers a new kind of neural network for classifying binary patterns. For a given pattern dimensionality, the memory capacity of the network grows exponentially with the free parameter s. The paper considers the limitations on s that arise because larger values of s demand more computer memory and shrink the basins of attraction. In contrast to similar models, the network enjoys a larger memory capacity and better recognition capabilities: it can distinguish heavily distorted patterns and even cope with pattern correlation, whose negative effect can be suppressed simply by taking a large enough value of s. A perceptron recognition system is considered to demonstrate the efficiency of the algorithm, yet the method is equally applicable in fully connected associative-memory networks.

13.
A Constraint-Region-Based BSB Associative Memory Model   Cited by: 2 (self-citations: 0, others: 2)
This paper proposes a BSB (Brain-State-in-a-Box) neural network model based on constraint regions, designed directly from the desired memory points. The model guarantees that the set of asymptotically stable equilibria coincides with the set of sample patterns, that the equilibria which are not asymptotically stable are exactly the practical rejection states, and that the basins of attraction are reasonably distributed, thereby turning the BSB into an ideal associative memory.
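For context, the underlying BSB dynamics (the standard model, not this paper's specific design) clip a linear feedback step to the unit hypercube, the "box" that gives the model its name. A minimal sketch:

```python
import numpy as np

def bsb_step(W, x, alpha=0.1):
    """One Brain-State-in-a-Box update: linear feedback, then clipping the
    state to the hypercube [-1, 1]^n."""
    return np.clip(x + alpha * (W @ x), -1.0, 1.0)

def bsb_recall(W, x0, alpha=0.1, max_iter=200):
    """Iterate until the state stops changing, typically at a vertex of
    the box corresponding to a stored pattern."""
    x = x0.astype(float)
    for _ in range(max_iter):
        x_new = bsb_step(W, x, alpha)
        if np.allclose(x_new, x):
            break
        x = x_new
    return x
```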

14.
This paper derives several estimates of the basins of attraction of the stored patterns in analog feedback associative memories, together with the exponential rate at which each point of a basin converges to the corresponding stored pattern. These results can be used both for evaluating the performance of high-efficiency analog feedback associative memories and in their synthesis.

15.
In a previous paper, the self-trapping network (STN) was introduced as more biologically realistic than attractor neural networks (ANNs) based on the Ising model. This paper extends the previous analysis of a one-dimensional (1-D) STN storing a single memory to a model that stores multiple memories and that possesses generalized sparse connectivity. The energy, Lyapunov function, and partition function derived for the 1-D model are generalized to the case of an attractor network with only near-neighbor synapses, coupled to a system that computes memory overlaps. Simulations reveal that 1) the STN dramatically reduces intra-ANN connectivity without severely affecting the size of basins of attraction, with fast self-trapping able to sustain attractors even in the absence of intra-ANN synapses; 2) the basins of attraction can be controlled by a single free parameter, providing natural attention-like effects; 3) the same parameter determines the memory capacity of the network, which is much less dependent on the system's noise level than in a standard ANN; 4) the STN serves as a useful memory for some correlated memory patterns on which the standard ANN fails entirely; 5) the STN can store a large number of sparse patterns; and 6) a Monte Carlo procedure, a competitive neural network, and binary neurons with thresholds can be used to induce self-trapping.

16.
Bidirectional associative memory (BAM) generalizes the associative memory (AM) to perform two-way recall of pattern pairs. Asymmetric bidirectional associative memory (ABAM) is a variant of BAM that relaxes the connection-weight symmetry restriction and performs much better than a conventional BAM structure. Higher-order associative memories (HOAMs) are reputed for their higher memory capacity compared with their first-order counterparts. The paper concerns the design of a second-order asymmetric bidirectional associative memory (SOABAM) with a maximal basin of attraction, whose extension to a HOABAM is possible and straightforward. First, a necessary and sufficient condition is derived on the connection weight matrix of a SOABAM that guarantees the recall of all prototype pattern pairs. A local training rule that is adaptive in the learning step size is formulated. A theorem is then derived showing that designing a SOABAM to further enlarge the quantities required by the complete-recall condition enhances its ability to drive a noisy pattern to converge to its associated pattern vector without error. Based on this theorem, the algorithm is also modified to ensure that each training pattern is stored with a basin of attraction as large as possible.
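To make the "second-order" part concrete, here is a hedged sketch of SOABAM-style recall dynamics: each output bit is driven by products of input pairs through a third-order weight tensor, with independent tensors in the two directions (the asymmetry). The paper's adaptive local training rule is not reproduced; shapes and names are assumptions.

```python
import numpy as np

def sgn(v):
    """Sign with ties broken to +1, so states stay bipolar."""
    s = np.sign(v)
    s[s == 0] = 1.0
    return s

def soabam_recall(W, V, x, n_iter=20):
    """Second-order BAM recall: the X->Y pass uses a tensor W of shape
    (m, n, n), so y_i = sgn(sum_jk W[i, j, k] x_j x_k); the return pass uses
    an independently trained tensor V of shape (n, m, m). Iterate the two
    passes until the pair of states is stable."""
    y = sgn(np.einsum('ijk,j,k->i', W, x, x))
    for _ in range(n_iter):
        x_new = sgn(np.einsum('ijk,j,k->i', V, y, y))
        y_new = sgn(np.einsum('ijk,j,k->i', W, x_new, x_new))
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break
        x, y = x_new, y_new
    return x, y
```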

17.
In this article we present techniques for designing associative memories to be implemented by a class of synchronous discrete-time neural networks based on a generalization of the brain-state-in-a-box neural model. First, we address the local qualitative properties and global qualitative aspects of the class of neural networks considered. Our approach to the stability analysis of the equilibrium points of the network gives insight into the extent of the domain of attraction for the patterns to be stored as asymptotically stable equilibrium points and is useful in the analysis of the retrieval performance of the network and also for design purposes. By making use of the analysis results as constraints, the design for associative memory is performed by solving a constraint optimization problem whereby each of the stored patterns is guaranteed a substantial domain of attraction. The performance of the designed network is illustrated by means of three specific examples.
