Similar Documents
20 similar documents found (search time: 31 ms)
1.
Most bidirectional associative memory (BAM) networks use a symmetrical output function for dual fixed-point behavior. In this paper, we show that by introducing an asymmetry parameter into a recently introduced chaotic BAM output function, prior knowledge can be used to momentarily disable desired attractors from memory, hence biasing the search space to improve recall performance. This property allows control of chaotic wandering, favoring given subspaces over others. In addition, reinforcement learning can then enable a dual BAM architecture to store and recall nonlinearly separable patterns. Our results allow the same BAM framework to model three different types of learning: supervised, reinforcement, and unsupervised. This ability is very promising from the cognitive modeling viewpoint. The new BAM model is also useful from an engineering perspective; our simulation results reveal a notable overall increase in BAM learning and recall performance when using a hybrid model with the general regression neural network (GRNN).

2.
Classical bidirectional associative memories (BAM) have poor memory storage capacity, are sensitive to noise, are subject to spurious steady states during recall, and can only recall bipolar patterns. In this paper, we introduce a new bidirectional hetero-associative memory model for true-color patterns that uses the associative model with dynamical synapses recently introduced in Vazquez and Sossa (Neural Process Lett, Submitted, 2008). Synapses of the associative memory can be adjusted even after the training phase in response to an input stimulus. Propositions that guarantee perfect and robust recall of the fundamental set of associations are provided. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present some experiments that show the accuracy of the proposed model on a benchmark of true-color patterns.

3.
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

4.
Sensitivity to noise in bidirectional associative memory (BAM)
The original Hebbian encoding scheme of bidirectional associative memory (BAM) provides poor pattern capacity and recall performance. Based on Rosenblatt's perceptron learning algorithm, the pattern capacity of BAM can be enlarged and perfect recall of all training pattern pairs guaranteed. However, these methods emphasize pattern capacity rather than error-correction capability, which is another critical property of BAM. This paper analyzes the sensitivity to noise in BAM and derives a useful idea for improving its noise immunity. Some researchers have found that the noise sensitivity of BAM relates to the minimum absolute value of net inputs (MAV). However, the analysis of failed associations in this paper shows that it is related not only to MAV but also to the variance of the weights associated with synapse connections; in fact, it is a positive monotone increasing function of the quotient of MAV divided by the variance of the weights. This observation provides a useful principle for improving the error-correction capability of BAM. Several revised encoding schemes, such as small-variance learning for BAM (SVBAM), evolutionary pseudorelaxation learning for BAM (EPRLAB), and evolutionary bidirectional learning (EBL), are introduced to illustrate this principle. All these methods achieve better noise immunity than their original versions, with no negative effect on the pattern capacity of BAM. The convergence of these methods is also discussed. If solutions exist, EPRLAB and EBL always converge to a globally optimal solution in the sense of both pattern capacity and noise immunity, whereas the convergence of SVBAM may be affected by a preset function.
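The two quantities in this analysis (MAV and weight variance) are easy to compute for a concrete network. Below is a minimal sketch using the standard Hebbian (outer-product) BAM encoding; the function names and the toy bipolar pattern pairs are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def hebbian_bam_weights(X, Y):
    """Outer-product (Hebbian) encoding: W = sum_k outer(y_k, x_k) for bipolar pairs."""
    return sum(np.outer(y, x) for x, y in zip(X, Y))

def mav_and_weight_variance(W, X, Y):
    """Minimum absolute value of net inputs (MAV) over all stored pairs in
    both recall directions, and the variance of the connection weights."""
    nets = [np.abs(W @ x) for x in X] + [np.abs(W.T @ y) for y in Y]
    mav = min(n.min() for n in nets)
    return mav, W.var()

# Two hypothetical bipolar pattern pairs
X = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
Y = [np.array([1, -1, 1]), np.array([-1, 1, 1])]
W = hebbian_bam_weights(X, Y)
mav, var = mav_and_weight_variance(W, X, Y)
```

By the quotient criterion described above, raising `mav` or lowering `var` (as SVBAM does) should both improve noise immunity.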

5.
Bidirectional associative memory (BAM) generalizes the associative memory (AM) to be capable of performing two-way recall of pattern pairs. Asymmetric bidirectional associative memory (ABAM) is a variant of BAM that relaxes the connection-weight symmetry restriction and enjoys much better performance than the conventional BAM structure. Higher-order associative memories (HOAMs) are reputed for their higher memory capacity compared with their first-order counterparts. This paper concerns the design of a second-order asymmetric bidirectional associative memory (SOABAM) with a maximal basin of attraction, whose extension to a HOABAM is possible and straightforward. First, a necessary and sufficient condition is derived for the connection weight matrix of a SOABAM to guarantee the recall of all prototype pattern pairs. A local training rule that is adaptive in the learning step size is formulated. A theorem is then derived: designing a SOABAM to further enlarge the quantities required by the complete recall condition enhances its capability of evolving a noisy pattern to its associated pattern vector without error. Based on this theorem, our algorithm is modified to ensure that each training pattern is stored with a basin of attraction as large as possible.

6.
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive: K-winners-take-all firing. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of capacity limits and dynamics of latent attractor networks. The technique represents a general approach to analysing standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically, by iteratively computing the probability of erroneous firings.
The analysis contains two cases: a simple-case analysis, which accounts for the correlations between weights due to shared patterns, and a detailed-case analysis, which also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state for nonzero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
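The competitive K-winners-take-all firing used in both layers can be sketched in a few lines; the function name and the toy dendritic sums are illustrative, not from the paper:

```python
import numpy as np

def k_wta(dendritic_sums, k):
    """Competitive firing: the K units with the largest dendritic sums
    fire (output 1); all other units stay silent (output 0)."""
    out = np.zeros(len(dendritic_sums), dtype=int)
    out[np.argsort(dendritic_sums)[-k:]] = 1
    return out

# Hypothetical dendritic sums for five units; with K = 2, only the two
# units with the largest sums survive the competition.
firing = k_wta(np.array([0.1, 2.0, -1.0, 3.0, 0.5]), k=2)
```

Because exactly K units fire regardless of the raw sums, the error analysis reduces to the probability that a spurious unit's dendritic sum exceeds that of a correct unit, which is what the iterative numerical capacity estimate computes.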

7.
A feedforward bidirectional associative memory
In contrast to conventional feedback bidirectional associative memory (BAM) network models, a feedforward BAM network is developed based on a one-shot design algorithm of O(p^2(n+m)) computational complexity, where p is the number of prototype pairs and n, m are the dimensions of the input/output bipolar vectors. The feedforward BAM is an n-p-m three-layer network of McCulloch-Pitts neurons with storage capacity 2^min{m,n} and guaranteed perfect bidirectional recall. The overall network design procedure is fully scalable in the sense that any number p ≤ 2^min{m,n} of bidirectional associations can be implemented. The prototype patterns may be arbitrarily correlated. With respect to inference performance, it is shown that the Hamming attractive radius of each prototype reaches the maximum possible value. Simulation studies and comparisons illustrate and support these theoretical developments.

8.
An analysis of high-capacity discrete exponential BAM
An exponential bidirectional associative memory (eBAM) using an exponential encoding scheme is discussed. It has a higher capacity for pattern pair storage than conventional BAMs. A new energy function is defined. The associative memory takes advantage of the exponential nonlinearity in the evolution equations such that the signal-to-noise ratio (SNR) is significantly increased. The energy of the eBAM decreases as the recall process proceeds, ensuring the stability of the system. The increase of SNR consequently enhances the capacity of the BAM. The capacity of the exponential BAM is estimated.
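The exponential evolution idea can be illustrated with a minimal recall sketch: each stored pair is weighted by a radix b raised to the correlation between its stored X-pattern and the probe, so the best-matching pair dominates the sum exponentially. The radix b = 2, the function name, and the toy pairs below are illustrative assumptions:

```python
import numpy as np

def ebam_recall(x, X, Y, b=2.0):
    """One eBAM half-step: weight every stored Y_k by b**(X_k . x) and
    threshold the sum, so the pair whose X_k best matches the probe x
    dominates the result exponentially."""
    s = sum(y_k * b ** float(x_k @ x) for x_k, y_k in zip(X, Y))
    return np.where(s >= 0, 1, -1)

# Hypothetical bipolar pattern pairs
X = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
Y = [np.array([1, -1, 1]), np.array([-1, 1, 1])]
```

The exponential weighting is what boosts the SNR relative to the linear correlation sum of a conventional BAM: a one-bit advantage in match becomes a factor-of-b advantage in the sum.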

9.
传统的两层二值双向联想记忆(BAM)网络因其结构的限制存在着存储容量有限、区分小差别模式和存储非正交模式能力不足的缺陷,结构上将其扩展至三层网络是一个有效的解决思路,但是三层二值BAM网络的学习是一个难题,而三层连续型BAM网络又存在处理二值问题不方便的问题。为了解决这些问题,提出一种三层结构的二值双向联想记忆网络,创新之处是采用了二值多层前向网络的MRⅡ算法实现了三层二值BAM网络的学习。实验结果表明,基于MRⅡ算法的三层二值BAM网络极大地提高了网络的存储容量和模式区分能力,同时保留了二值网络特定的优势,具有较高的理论与实用价值。  相似文献   

10.
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employing binary synapses, or the BCPNN rule of Lansner and Ekeberg, for example. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level that links approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
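The Willshaw model with binary (clipped Hebbian) synapses mentioned above can be sketched compactly; the helper names, the firing threshold choice (a unit fires only when it receives input from every active input), and the sparse toy patterns are illustrative assumptions:

```python
import numpy as np

def willshaw_store(X, Y):
    """Clipped-Hebbian binary synapses: W[i, j] = 1 iff some stored pair
    has y[i] = x[j] = 1 (the OR over all pairwise outer products)."""
    W = np.zeros((len(Y[0]), len(X[0])), dtype=int)
    for x, y in zip(X, Y):
        W |= np.outer(y, x)
    return W

def willshaw_recall(W, x):
    """A unit fires only if it receives input from every active input unit."""
    return (W @ x >= x.sum()).astype(int)

# Sparse binary toy patterns (illustrative, not from the paper)
X = [np.array([1, 0, 1, 0]), np.array([0, 1, 0, 1])]
Y = [np.array([1, 0, 0]), np.array([0, 1, 0])]
W = willshaw_store(X, Y)
```

Clipping every synapse to {0, 1} is what makes the model efficient for very sparse activity, the regime in which the Bayesian model reduces to it.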

11.
An iterative learning algorithm called PRLAB is described for the discrete bidirectional associative memory (BAM). PRLAB guarantees recall of all training pairs. The proposed algorithm is significant in several ways. Unlike many existing iterative learning algorithms, PRLAB is not based on the gradient-descent technique; it is a novel adaptation of the well-known relaxation method for solving a system of linear inequalities. The algorithm is very fast: learning 200 random patterns in a 200-200 BAM takes only 20 epochs on average. PRLAB is highly insensitive to the learning parameters and the initial configuration of a BAM. It also offers high scalability for large applications, providing the same high performance when the number of training patterns is increased in proportion to the size of the BAM. An extensive performance analysis of the new learning algorithm is included.
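The underlying idea, casting recall conditions as a system of linear inequalities and solving it by relaxation, can be sketched as follows. This is a generic relaxation sketch under my own assumptions (margin, step size, toy data), not the published PRLAB algorithm:

```python
import numpy as np

def relaxation_train(X, Y, margin=1.0, lam=1.0, epochs=100):
    """Solve the linear inequalities y_k[i] * (W @ x_k)[i] >= margin for all
    stored bipolar pairs by relaxation: whenever an inequality is violated,
    move the corresponding row of W just far enough to satisfy it."""
    W = np.zeros((len(Y[0]), len(X[0])))
    for _ in range(epochs):
        violated = False
        for x, y in zip(X, Y):
            net = W @ x
            for i in range(len(y)):
                if y[i] * net[i] < margin:
                    W[i] += lam * (margin - y[i] * net[i]) / (x @ x) * y[i] * x
                    violated = True
        if not violated:   # every pair recalled with the required margin
            return W
    return W

# Hypothetical bipolar training pairs
X = [np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
Y = [np.array([1, -1, 1]), np.array([-1, 1, 1])]
W = relaxation_train(X, Y)
```

Unlike gradient descent, each update is a projection that exactly satisfies the violated inequality, which is one reason relaxation schemes of this kind converge quickly when a solution exists.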

12.
A general model for bidirectional associative memories
This paper proposes a general model for bidirectional associative memories that associate patterns between the X-space and the Y-space. The general model does not require the usual assumption that the interconnection weight from a neuron in the X-space to a neuron in the Y-space is the same as the one from the Y-space to the X-space. We start by defining a supporting function to measure how well a state supports another state in a general bidirectional associative memory (GBAM). We then use the supporting function to formulate the associative recall process as a dynamic system, explore its stability and asymptotic stability conditions, and develop an algorithm for learning the asymptotic stability conditions using the Rosenblatt perceptron rule. The effectiveness of the proposed model for recognizing noisy patterns and its performance in terms of storage capacity, attraction, and spurious memories are demonstrated by experimental results.

13.
陈松灿, 朱梧. Journal of Software, 1998, 9(11): 814-819
A new higher-order bidirectional associative memory model is proposed. It generalizes both the higher-order bidirectional associative memory model HOBAM (higher-order bidirectional associative memory) proposed by Tai and Jeng and the modified intraconnected bidirectional associative memory model MIBAM (modified intraconnected bidirectional associative memory). By defining an energy function, the stability of the new model under both synchronous and asynchronous update modes is proved, which guarantees that all trained pattern pairs become asymptotically stable points of the model. Using statistical analysis, the storage capacity of the proposed model is estimated. Computer simulations confirm that the model not only has high storage capacity but also good error-correction capability.

14.
This paper analyzes the noise sensitivity of bidirectional associative memory (BAM) and shows that the anti-noise capability of BAM relates not only to the minimum absolute value of net inputs (MAV), as some researchers found, but also to the variance of the weights associated with synapse connections; in fact, it is determined by the quotient of these two factors. On this basis, a novel learning algorithm, small-variance learning for BAM (SVBAM), is proposed to decrease the variance of the weights of the synapse matrix. Simulation experiments show that the algorithm decreases the variance of the weights efficiently, thereby improving the noise immunity of BAM, while perfect recall of all training pattern pairs is still guaranteed.

15.
The local identical index (LII) associative memory (AM) proposed by the authors in a previous paper is a one-shot feedforward structure designed to exhibit no spurious attractors. In this paper we relax the latter design constraint in exchange for enlarged basins of attraction, and we develop a family of modified LII AM networks that exhibit improved performance, particularly in memorizing highly correlated patterns. The new algorithm meets the requirement of no spurious attractors only in a local sense. Finally, we show that the modified LII family of networks can accommodate composite patterns of any size by storing (memorizing) only the basic (prime) prototype patterns. The latter property translates to low learning complexity and a simple network structure with significant memory savings. Simulation studies and comparisons illustrate and support these theoretical developments.

16.
The effect of weight fault on associative networks
In the past three decades, the properties of associative networks have been extensively investigated. However, most existing results consider only fault-free networks. In implementation, network faults can appear in different forms, such as open weight faults and multiplicative weight noise. This paper studies the effect of weight faults on the performance of the bidirectional associative memory (BAM) model when multiplicative weight noise and open weight faults are present. Assuming that connection weights are corrupted by these two common fault models, we study how many pattern pairs can be stored in a faulty BAM. Since error correction is an important feature of associative networks, we also study the number of pattern pairs that can be stored in a faulty BAM when there are errors in the initial stimulus pattern.

17.
An intraconnected complex-valued bidirectional associative memory model and its performance analysis
陈松灿, 夏开军. Journal of Software, 2002, 13(3): 433-437
Lee's complex-domain multivalued bidirectional associative memory model (CDBAM) generalizes Kosko's real-domain BAM (bidirectional associative memory) both to the complex domain and to the multivalued case, facilitating association between multivalued patterns such as gray-scale images. Building on it, a new generalized model is proposed: the intraconnected complex-domain multivalued bidirectional associative memory model (intraconnected CDBAM, or ICDBAM). Through a defined energy function, its stability under both synchronous and asynchronous updating is proved, which guarantees that all training pattern pairs become stable points of the model and overcomes the complement-coding problem of CDBAM. Computer simulations confirm that the model has higher storage capacity and better error-correction performance than CDBAM.

18.
An enhanced learning algorithm for fuzzy associative memory networks
To address the problems of Kosko's max-min fuzzy associative memory network, this paper improves the network's connection-weight learning rule and gives an alternative rule, extending Kosko's feedforward fuzzy associative memory into a fuzzy bidirectional associative memory model, and from this derives a fast fuzzy enhanced learning algorithm that can store any given set of multivalued training pattern pairs. For storing sets of binary pattern pairs, the connection weights take values 0 or 1, so the algorithm lends itself to hardware-circuit and optical implementation. Experimental results show that the fast fuzzy enhanced learning algorithm is effective.
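Kosko's max-min composition can be sketched in a few lines. The function names and toy graded patterns below are illustrative; note that the recalled vector for the first key deviates from the stored one (0.2 recalled where 0.1 was stored), which illustrates the kind of imperfect recall that improved learning rules for fuzzy associative memories target:

```python
import numpy as np

def fam_store(X, Y):
    """Kosko max-min encoding: W[i, j] = max over pairs k of min(Y_k[i], X_k[j])."""
    return np.max([np.minimum.outer(y, x) for x, y in zip(X, Y)], axis=0)

def fam_recall(W, x):
    """Max-min composition: y[i] = max over j of min(W[i, j], x[j])."""
    return np.minimum(W, x).max(axis=1)

# Hypothetical fuzzy (graded) pattern pairs
X = [np.array([0.9, 0.2]), np.array([0.1, 0.8])]
Y = [np.array([0.8, 0.1]), np.array([0.2, 0.7])]
W = fam_store(X, Y)
r = fam_recall(W, X[0])   # recalled vector for the first key
```

Because storage takes a max over all pairs, later pairs can only raise entries of W, so cross-talk shows up as recalled values that overshoot the stored ones, as in the second component here.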

19.
This paper analyzes in detail the learning and recall processes of bidirectional associative memory (BAM). In the learning process, an adaptive asymmetric BAM algorithm is applied first, followed by a repeated-learning algorithm with an impression threshold; the relationship between the impression threshold and the basins of attraction of the samples is proved theoretically, establishing the theoretical basis of the repeated-learning method. In the recall process, an evolution equation with a nonzero threshold function is adopted and a threshold-learning method is proposed; it is proved theoretically that adopting the nonzero-threshold evolution equation further enlarges the basins of attraction. To further increase the network's information storage capacity, a parallel BAM structure is introduced. With these methods, the information storage capacity and error-correction ability of the BAM network are improved considerably.

20.
This paper discusses the bidirectional associative memory (BAM) model from the matched-filtering viewpoint and offers a new interpretation of it. Our attention is focused on the stability and attractivity of equilibrium states, for which several sufficient and/or necessary conditions are presented. To improve BAM performance, an exponential function is used to enhance the correlations between the binary vector of the retrieval key and those of stored patterns similar to the key. The modified model is shown to be asymptotically stable. Theoretical analysis and simulation results demonstrate that the modified model performs much better than the original BAM in terms of memory capacity and error-correction capability.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号