Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Neural associative memories are perceptron-like single-layer networks with fast synaptic learning typically storing discrete associations between pairs of neural activity patterns. Previous work optimized the memory capacity for various models of synaptic learning: linear Hopfield-type rules, the Willshaw model employing binary synapses, or the BCPNN rule of Lansner and Ekeberg, for example. Here I show that all of these previous models are limit cases of a general optimal model where synaptic learning is determined by probabilistic Bayesian considerations. Asymptotically, for large networks and very sparse neuron activity, the Bayesian model becomes identical to an inhibitory implementation of the Willshaw and BCPNN-type models. For less sparse patterns, the Bayesian model becomes identical to Hopfield-type networks employing the covariance rule. For intermediate sparseness or finite networks, the optimal Bayesian learning rule differs from the previous models and can significantly improve memory performance. I also provide a unified analytical framework to determine memory capacity at a given output noise level that links approaches based on mutual information, Hamming distance, and signal-to-noise ratio.
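The two limit cases named in this abstract can be made concrete. Below is a minimal sketch (not the paper's Bayesian rule; all function names are illustrative) of the Willshaw model with clipped binary synapses and of a Hopfield-type covariance rule:

```python
import numpy as np

def willshaw_store(patterns_u, patterns_v):
    """Willshaw model: binary synapses, clipped Hebbian storage.
    A synapse is 1 if pre and post units were ever co-active in any pair."""
    W = np.zeros((patterns_v.shape[1], patterns_u.shape[1]), dtype=np.uint8)
    for u, v in zip(patterns_u, patterns_v):
        W |= np.outer(v, u).astype(np.uint8)
    return W

def willshaw_recall(W, u, theta):
    """Retrieve by thresholding dendritic sums at theta (equal to the
    number of active input units for a noiseless cue)."""
    return (W @ u >= theta).astype(np.uint8)

def covariance_store(patterns, p):
    """Hopfield-type covariance rule for {0,1} patterns with mean activity p."""
    X = patterns.astype(float) - p        # center the patterns
    W = X.T @ X
    np.fill_diagonal(W, 0)                # no self-connections
    return W
```

With very sparse patterns, a noiseless cue recovers the stored output exactly under the Willshaw rule; this is the regime in which, per the abstract, the Bayesian model reduces to such clipped models.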

2.
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

3.
The problem of optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms is considered. It is found that most existing methods treat the design problem either as 1) finding optimal hyperplanes according to the normal distance from the prototype vectors to the hyperplane surface or 2) obtaining the weight matrix W = [w_ij] by solving a constrained optimization problem. In this paper, we show that since the state space of the HAM consists only of bipolar patterns, i.e., V = (v_1, v_2, ..., v_N)^T ∈ {-1,+1}^N, the basins of attraction around each prototype (training) vector should be expanded using the Hamming distance measure. For this reason, the design problem is considered from a different point of view. Our idea is to systematically increase the size of the training set according to the desired basin of attraction around each prototype vector. We name this concept higher-order Hamming stability and show that the conventional minimum-overlap algorithm can be modified to incorporate it. Experimental results show that both the recall capability and the number of spurious memories are improved by the proposed method. Moreover, it is well known that setting all self-connections w_ii to zero reduces the number of spurious memories in state space. From the experimental results, we find that the basin width around each prototype vector can be enlarged by allowing nonzero diagonal elements when learning the weight matrix W. If the magnitude of w_ii is small for all i, then the condition w_ii = 0 for all i can be relaxed without seriously affecting the number of spurious memories in the state space. Therefore, the proposed method can be used to increase the basin width around each prototype vector at the cost of a slight increase in the number of spurious memories.
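The training-set expansion idea can be illustrated with a toy sketch (not the authors' modified minimum-overlap algorithm; names and parameters are illustrative): enumerate the Hamming ball of radius r around each prototype and train each row of W with the perceptron rule so that every vector in the ball maps to its prototype in one synchronous update.

```python
import itertools
import numpy as np

def hamming_ball(pattern, r):
    """All bipolar vectors within Hamming distance r of `pattern`."""
    n = len(pattern)
    out = []
    for k in range(r + 1):
        for idx in itertools.combinations(range(n), k):
            q = pattern.copy()
            q[list(idx)] *= -1
            out.append(q)
    return out

def train_perceptron_ham(prototypes, r, epochs=100, eta=0.1):
    """Perceptron learning on the expanded set: every vector in the
    Hamming ball of radius r around a prototype must map to that
    prototype in one synchronous update (order-r Hamming stability)."""
    n = prototypes.shape[1]
    W = np.zeros((n, n))
    pairs = [(q, p) for p in prototypes for q in hamming_ball(p, r)]
    for _ in range(epochs):
        errors = 0
        for q, p in pairs:
            y = np.sign(W @ q)
            for i in range(n):            # row-wise perceptron updates
                if y[i] != p[i]:
                    W[i] += eta * p[i] * q
                    errors += 1
        if errors == 0:                   # all expanded pairs stable
            break
    return W
```

The ball size grows combinatorially with r, so this brute-force enumeration is only feasible for small radii, which is why a systematic scheme such as the one proposed in the paper is needed in practice.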

4.
In this paper, we investigate associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino, and Parisi. The experimentally evolved networks have highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.

5.
A bidirectional heteroassociative memory for binary and grey-level patterns
Typical bidirectional associative memories (BAM) use an offline, one-shot learning rule, have poor memory storage capacity, are sensitive to noise, and are subject to spurious steady states during recall. Recent work on BAM has improved network performance with respect to noisy recall and the number of spurious attractors, but at the cost of an increase in BAM complexity. In all cases, the networks can only recall bipolar stimuli and are thus of limited use for grey-level pattern recall. In this paper, we introduce a new bidirectional heteroassociative memory model that uses a simple self-convergent iterative learning rule and a new nonlinear output function. As a result, the model can learn online without being subject to overlearning. Our simulation results show that this new model produces fewer spurious attractors than other popular BAM networks, with comparable tolerance to noise and storage capacity. In addition, the novel output function enables it to learn and recall grey-level patterns in a bidirectional way.
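For reference, the classical one-shot BAM that this model improves on can be sketched in a few lines (Kosko-style correlation storage with bidirectional iteration; sign ties are broken toward -1, an arbitrary choice made for this sketch):

```python
import numpy as np

def bam_store(X, Y):
    """One-shot correlation storage: W = sum_k y_k x_k^T."""
    return Y.T @ X

def bipolar(h):
    """Bipolar threshold; ties at zero broken toward -1."""
    return np.where(h > 0, 1, -1)

def bam_recall(W, x, max_iters=50):
    """Bidirectional iteration x -> y -> x -> ... until the pair stabilizes."""
    for _ in range(max_iters):
        y = bipolar(W @ x)
        x_new = bipolar(W.T @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y
```

With mutually orthogonal input patterns, each stored pair is a fixed point of this iteration; the bipolar-only recall is precisely the limitation the paper's grey-level output function removes.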

6.
We address the problem of training relaxation labeling processes, a popular class of parallel iterative procedures widely employed in pattern recognition and computer vision. The approach discussed here is entirely based on a theory of consistency developed by Hummel and Zucker, and contrasts with a recently introduced learning strategy which can be regarded as heteroassociative, i.e., what is actually learned is the association between patterns rather than the patterns themselves. The proposed learning model is instead autoassociative and involves making a set of training patterns consistent in the sense rigorously defined by Hummel and Zucker; this implies that they become local attractors of the relaxation labeling dynamical system. The learning problem is formulated in terms of solving a system of linear inequalities, and a straightforward iterative algorithm is presented to accomplish this. The attractive feature of this algorithm is that it solves the system when it admits a solution, and automatically yields the best approximate solution when this is not the case. The learning model described here allows one to view the relaxation labeling process as a kind of asymmetric associative memory, the effectiveness of which is demonstrated experimentally.

7.
A double-pattern associative memory neural network with "pattern loop" is proposed. It can store 2N-bit bipolar binary patterns up to the order of 2^(2N), retrieve part or all of the stored patterns that have the minimum Hamming distance from the input pattern, completely eliminate spurious patterns, and achieve higher storage efficiency and reliability than conventional associative memories. The length of a pattern stored in this associative memory can easily be extended from 2N to κN.

8.
An ellipsoid learning algorithm for neural networks with self-feedback
张铃  张钹 《计算机学报》1994,17(9):676-681
This paper discusses the learning problem of neural networks with self-feedback and points out that the learning of an associative memory neural network can be transformed into a programming (optimization) problem, so the mature optimization techniques developed in mathematical programming can be borrowed to solve it. A learning method called the ellipsoid algorithm is given, whose computational complexity is polynomial.

9.
The objective of this paper is to resolve important issues in artificial neural nets: exact recall and capacity in multilayer associative memories. These problems have imposed restrictions on coding strategies. We propose the following triple-layered hybrid neural network: the first synapse is a one-shot associative memory using the modified Kohonen adaptive learning algorithm with arbitrary input patterns; the second is Kosko's bidirectional associative memory consisting of orthogonal input/output basis vectors, such as Walsh series satisfying the strict continuity condition; and the third is a simple one-shot associative memory with arbitrary output images. A mathematical framework based on the relationship between energy local minima (capacity of the neural net) and noise-free recall is established. The robust capacity conditions of this multilayer associative neural network that lead to forming the local minima of the energy function at the exact training pairs are derived. The chosen strategy not only maximizes the total number of stored images but also completely relaxes any code-dependent conditions on the learning pairs.

10.
Critical storage capacity of Hopfield networks under optimized storage of correlated patterns
By analyzing the associative stability of Hopfield neural networks storing correlated information, this paper gives an optimized rule for selecting the stored patterns and shows that, under this optimized storage rule, the critical storage capacity at which every stored pattern can correct one error is about 0.5N, much higher than for randomly selected stored patterns.
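The one-error-correction criterion used above can be checked directly for any stored pattern set. The sketch below (generic Hebbian storage and a brute-force check; it does not implement the paper's optimized selection rule) tests whether every single-bit corruption of a pattern is repaired by one synchronous update:

```python
import numpy as np

def hopfield_store(patterns):
    """Standard Hebbian (outer-product) storage for bipolar patterns."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)               # zero self-connections
    return W

def corrects_one_error(W, p):
    """True if every single-bit corruption of p is repaired by one
    synchronous update (ties at zero broken toward -1)."""
    for i in range(len(p)):
        q = p.copy()
        q[i] *= -1
        if not np.array_equal(np.where(W @ q > 0, 1, -1), p):
            return False
    return True
```

An optimized selection rule, in the spirit of the paper, would admit a new pattern only if this check still passes for all patterns stored so far.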

11.
Attractor networks have been one of the most successful paradigms in neural computation and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive, with K-winners-take-all firing; the number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks, an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here yields numerical estimates of the capacity limits and dynamics of latent attractor networks, and represents a general approach to analysing standard associative memory networks with competitive firing. The theoretical analysis is based on estimating the dendritic sum distributions using a Gaussian approximation; because of the competitive firing property, the capacity results are estimated only numerically, by iteratively computing the probability of erroneous firings.
The analysis covers two cases: a simple-case analysis that accounts for the correlations between weights due to shared patterns, and a detailed-case analysis that also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state for nonzero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
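The two ingredients of the model, clipped Hebbian storage and competitive K-winners-take-all firing, can be sketched minimally as follows (a single-layer illustration, not the paper's two-layer latent-attractor network; here K is taken equal to the active-set size to show plain associative recall):

```python
import numpy as np

def store_clipped(patterns):
    """Clipped Hebbian storage: binary weights, OR of outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=np.uint8)
    for p in patterns:
        W |= np.outer(p, p).astype(np.uint8)
    np.fill_diagonal(W, 0)               # no self-connections
    return W

def kwta_step(W, state, K):
    """One update with K-winners-take-all competitive firing:
    the K units with the largest dendritic sums fire."""
    h = W @ state
    winners = np.argsort(h)[-K:]
    new = np.zeros_like(state)
    new[winners] = 1
    return new
```

In the latent-attractor regime described in the abstract, K would be chosen smaller than the stored active sets, so the attractors bias the dendritic sums without ever being fully expressed.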

12.
The aim of this paper is to investigate the storage and recall performance of embedded patterns in an associative memory composed of a quaternionic multistate Hopfield neural network. The state of a neuron in the network is described by three kinds of discretized phase with fixed amplitude; these phases take discrete values with an arbitrary division size. The Hebbian rule and the projection rule are used to store patterns in the network. Recall performance is evaluated by storing random patterns while varying the division size of the phases in a neuron. Color images are also embedded and their noise tolerance is explored.

13.
Discusses the learning problem of neural networks with self-feedback connections and shows that when such a network is used as an associative memory, the learning problem can be transformed into a programming (optimization) problem. Thus, the mature optimization techniques of mathematical programming can be used to solve the learning problem of neural networks with self-feedback connections. Two learning algorithms based on programming techniques are presented; their complexity is polynomial. The optimization of the radius of attraction of the training samples is then discussed using quadratic programming techniques, and the corresponding algorithm is given. Finally, the given learning algorithms are compared with some other known algorithms.

14.
In this article we present the so-called continuous classifying associative memory, able to store continuous patterns while avoiding the problems of spurious states and data dependency. This memory model is based on our previously developed classifying associative memory, and enables continuous patterns to be stored and recovered. We also show that the behavior of this continuous classifying associative memory may be adjusted to predetermined goals by selecting certain internal operating functions. © 2002 Wiley Periodicals, Inc.

15.
Neural associative memory storing gray-coded gray-scale images
We present a neural associative memory storing gray-scale images. The proposed approach is based on a suitable decomposition of the gray-scale image into gray-coded binary images, stored in brain-state-in-a-box-type binary neural networks. Both learning and recall can be implemented by parallel computation, with consequent time savings. The learning algorithm used to store the binary images guarantees asymptotic stability of the stored patterns, low computational cost, and control of the weight precision. Some design examples and computer simulations are presented to show the effectiveness of the proposed method.
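The Gray-coded decomposition at the heart of this approach can be sketched directly (a generic bit-plane sketch, not the paper's storage networks). Gray coding ensures that a ±1 change in intensity flips exactly one bit plane, so small gray-level noise corrupts only one of the stored binary images:

```python
import numpy as np

def to_gray_planes(img):
    """Decompose an 8-bit gray-scale image into 8 binary Gray-coded
    bit planes. Gray code: g = b XOR (b >> 1)."""
    g = img ^ (img >> 1)
    return np.stack([(g >> k) & 1 for k in range(8)])

def from_gray_planes(planes):
    """Inverse: reassemble the Gray code, then decode it back to
    binary via b = g ^ (g>>1) ^ (g>>2) ^ ..."""
    g = sum((planes[k].astype(np.uint8) << k) for k in range(8))
    b = g.copy()
    shift = g >> 1
    while shift.any():
        b ^= shift
        shift >>= 1
    return b
```

Each of the 8 planes would then be stored in its own binary associative network and recombined after recall.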

16.
Because of structural limitations, the traditional two-layer binary bidirectional associative memory (BAM) network suffers from limited storage capacity and an insufficient ability to distinguish similar patterns and to store non-orthogonal patterns. Extending the structure to three layers is an effective remedy, but learning in a three-layer binary BAM network is a hard problem, while three-layer continuous BAM networks are inconvenient for handling binary problems. To solve these problems, a three-layer binary bidirectional associative memory network is proposed; its novelty is that the MRII algorithm for binary multilayer feedforward networks is used to train the three-layer binary BAM network. Experimental results show that the MRII-based three-layer binary BAM network greatly improves storage capacity and pattern discrimination while retaining the specific advantages of binary networks, and thus has high theoretical and practical value.

17.
A general model for bidirectional associative memories
This paper proposes a general model for bidirectional associative memories that associate patterns between the X-space and the Y-space. The general model does not require the usual assumption that the interconnection weight from a neuron in the X-space to a neuron in the Y-space is the same as the one from the Y-space to the X-space. We start by defining a supporting function to measure how well a state supports another state in a general bidirectional associative memory (GBAM). We then use the supporting function to formulate the associative recall process as a dynamic system, explore its stability and asymptotic stability conditions, and develop an algorithm for learning the asymptotic stability conditions using the Rosenblatt perceptron rule. The effectiveness of the proposed model for recognition of noisy patterns and its performance in terms of storage capacity, attraction, and spurious memories are demonstrated by experimental results.

18.
This paper proposes a neural network that stores and retrieves sparse patterns categorically, the patterns being random realizations of a sequence of biased (0,1) Bernoulli trials. The neural network, denoted as categorizing associative memory, consists of two modules: 1) an adaptive classifier (AC) module that categorizes input data; and 2) an associative memory (AM) module that stores input patterns in each category according to a Hebbian learning rule, after the AC module has stabilized its learning of that category. We show that during training of the AC module, the weights in the AC module belonging to a category converge to the probability of a "1" occurring in a pattern from that category. This fact is used to set the thresholds of the AM module optimally without requiring any a priori knowledge about the stored patterns.

19.
Continuous-time associative memory neural networks based on constraint regions
陶卿  方廷健  孙德敏 《计算机学报》1999,22(12):1253-1258
Traditional associative memory neural network models design the weights from the memory points. This paper proposes a constraint-region-based neural network model, likewise designed from the memory points, which guarantees that the set of asymptotically stable equilibria coincides with the sample point set, that the equilibria that are not asymptotically stable are exactly the actual rejection states, and that the basins of attraction are reasonably distributed. The model has learning and forgetting abilities, a large memory capacity, and can be implemented in circuits, making it an ideal associative memory.

20.
A sparse two-dimensional distance-weighted approach for improving the performance of the exponential correlation associative memory (ECAM) and the modified exponential correlation associative memory (MECAM) is presented in this paper. The approach is inspired by the biological visual perception mechanism and the widely observed sparse small-world network phenomenon. Using this approach, two new associative memory neural networks, the distance-based sparse ECAM (DBS-ECAM) and the distance-based sparse MECAM (DBS-MECAM), are obtained by introducing both a decaying two-dimensional distance factor and a small-world architecture into the evolution rules of ECAM and MECAM for image processing applications. Such a configuration reduces the connection complexity of conventional fully connected associative memories, making VLSI implementation of associative memories easier. More importantly, experiments performed on binary visual images show that DBS-ECAM and DBS-MECAM can learn and recognize patterns more effectively than ECAM and MECAM, respectively.
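For context, the baseline ECAM evolution rule that the sparse distance-weighted variants modify can be sketched as follows (the standard exponential correlation rule in the style of Chiueh and Goodman; the distance factor and small-world masking of this paper are not shown, and the scaling trick is an implementation choice of this sketch):

```python
import numpy as np

def ecam_step(patterns, x, a=2.0):
    """One synchronous ECAM update: each stored pattern votes with a
    weight exponential in its overlap with the current state."""
    overlaps = patterns @ x                    # <x_k, x> for each stored x_k
    # subtract the max overlap so the powers stay in range; a positive
    # common factor does not change the sign of the field
    weights = np.power(float(a), overlaps - overlaps.max())
    h = weights @ patterns
    return np.where(h >= 0, 1, -1)
```

The sparse variants would replace the dense overlap `patterns @ x` with sums restricted to the small-world, distance-weighted connection pattern, reducing connection complexity as described above.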
