Similar Documents
20 similar documents found.
1.
During learning of overlapping input patterns in an associative memory, recall of previously stored patterns can interfere with the learning of new patterns. Most associative memory models avoid this difficulty by ignoring the effect of previously modified connections during learning, by clamping network activity to the patterns to be learned. Through the interaction of experimental and modeling techniques, we now have evidence to suggest that a somewhat analogous approach may have been taken by biology within the olfactory cerebral cortex. Specifically we have recently discovered that the naturally occurring neuromodulator acetylcholine produces a variety of effects on cortical cells and circuits which, when taken together, can prevent memory interference in a biologically realistic memory model. Further, it has been demonstrated that these biological mechanisms can actually improve the memory storage performance of previously published abstract neural network associative memory models.

2.
I review recent progress on the associative memory model, which is a kind of neural network model. First, I introduce this model and a mathematical theory called statistical neurodynamics describing its properties. Next, I discuss an associative memory model with hierarchically correlated memory patterns. Initially, in this model, the state approaches a mixed state that is a superposition of memory patterns. After that, it diverges from the mixed state, and finally converges to a memory pattern. I show that this retrieval dynamics can qualitatively replicate the temporal dynamics of face-responsive neurons in the inferior temporal cortex, which is considered to be the final stage of visual perception in the brain. Finally, I show an unexpected link between associative memory and mobile phones (CDMA). The mathematical structure of the CDMA multi-user detection problem resembles that of the associative memory model. It enables us to apply a theoretical framework of the associative memory model to CDMA.
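The associative memory model discussed in this entry is essentially a Hopfield-type network with Hebbian storage. As a minimal illustrative sketch (pure Python, not the author's exact formulation):

```python
# Minimal Hopfield-style associative memory with Hebbian storage.
# Patterns are lists of +1/-1; recall iterates x <- sign(W x).

def store(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                        # no self-connections
                    W[i][j] += p[i] * p[j] / n    # Hebbian outer product
    return W

def recall(W, x, steps=10):
    n = len(x)
    for _ in range(steps):                        # synchronous updates
        x = [1 if sum(W[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x

patterns = [[1, -1, 1, -1, 1, -1], [1, 1, 1, -1, -1, -1]]
W = store(patterns)
noisy = [1, -1, 1, -1, 1, 1]                      # first pattern, one bit flipped
print(recall(W, noisy))                           # -> [1, -1, 1, -1, 1, -1]
```

Starting from a corrupted cue, the state converges to the nearest stored pattern; this retrieval dynamics is what the statistical-neurodynamics theory analyzes.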

3.
Continuous-time associative memory neural networks based on constraint regions
陶卿, 方廷健, 孙德敏. 《计算机学报》 (Chinese Journal of Computers), 1999, 22(12): 1253-1258
Traditional associative memory neural network models design the weights from the patterns to be memorized. This paper proposes a constraint-region-based neural network model designed from the memory patterns. It guarantees that the set of asymptotically stable equilibrium points coincides with the sample pattern set, that equilibrium points which are not asymptotically stable are exactly the genuine rejection states, and that the basins of attraction are reasonably distributed. The model has the ability to learn and forget, offers large memory capacity, and can be implemented in circuitry, making it an ideal associative memory.

4.
Much evidence indicates that the perirhinal cortex is involved in the familiarity discrimination aspect of recognition memory. It has been previously shown under selective conditions that neural networks performing familiarity discrimination can achieve very high storage capacity, being able to deal with many times more stimuli than associative memory networks can in associative recall. The capacity of associative memories for recall has been shown to be highly dependent on the sparseness of coding. However, previous work on the networks of Bogacz et al., Norman and O'Reilly, and Sohal and Hasselmo that model familiarity discrimination in the perirhinal cortex has not investigated the effects of the sparseness of encoding on capacity. This paper explores how sparseness of coding influences the capacity of each of these published models and establishes that it does so in different ways. The capacity of the Bogacz et al. model can be made independent of the sparseness of coding. Capacity increases as coding becomes sparser for a simplified version of the neocortical part of the Norman and O'Reilly model, whereas it decreases as coding becomes sparser for a simplified version of the Sohal and Hasselmo model. Thus, in general, and in contrast to associative memory networks, sparse encoding brings little or no advantage to the capacity of familiarity discrimination networks. Hence it may be less important for coding to be sparse in the perirhinal cortex than it is in the hippocampus. Additionally, it is established that the capacities of the networks depend strongly on the precise form of the learning rules (synaptic plasticity) used in the network. This finding indicates that the precise characteristics of synaptic plastic changes in the real brain are likely to have major influences on storage capacity.
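The familiarity discrimination computation these networks perform can be illustrated, in a deliberately simplified form, by thresholding a Hebbian matching score; this sketch is my own simplification and does not reproduce the Bogacz et al., Norman and O'Reilly, or Sohal and Hasselmo networks:

```python
import random

# Familiarity as a Hebbian matching score: stored patterns align with the
# weight matrix and score high; novel patterns score near chance.
# (Illustrative simplification, not any of the cited models.)

def hebbian_matrix(patterns):
    n = len(patterns[0])
    return [[sum(p[i] * p[j] for p in patterns) / n for j in range(n)]
            for i in range(n)]

def familiarity(W, x):
    n = len(x)
    return sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

random.seed(0)
n = 64
stored = [[random.choice([1, -1]) for _ in range(n)] for _ in range(5)]
W = hebbian_matrix(stored)
novel = [random.choice([1, -1]) for _ in range(n)]
print(familiarity(W, stored[0]) > familiarity(W, novel))  # stored scores higher
```

Sparseness enters when the dense ±1 coding above is replaced by patterns with few active units; that changes the score statistics, and hence the capacity, differently for each of the cited models.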

5.
Starting from the properties of biological synapses, the concept of a neuron with nonlinear synapses is proposed, and on this basis a self-learning associative memory neural network model is constructed. The model learns according to the Hebb rule, with the learning mechanism carried out by the network itself. Because nonlinear weights are introduced, the network achieves self-learning with a simple structure. The paper discusses the network's memory capacity and its equivalence to the Hopfield network after learning in a particular mode. Experiments show that this network structure is effective.

6.
Hebbian heteroassociative learning is inherently asymmetric. Storing a forward association, from item A to item B, enables recall of B (given A), but does not permit recall of A (given B). Recurrent networks can solve this problem by associating A to B and B back to A. In these recurrent networks, the forward and backward associations can be differentially weighted to account for asymmetries in recall performance. In the special case of equal strength forward and backward weights, these recurrent networks can be modeled as a single autoassociative network where A and B are two parts of a single, stored pattern. We analyze a general, recurrent neural network model of associative memory and examine its ability to fit a rich set of experimental data on human associative learning. The model fits the data significantly better when the forward and backward storage strengths are highly correlated than when they are less correlated. This network-based analysis of associative learning supports the view that associations between symbolic elements are better conceptualized as a blending of two ideas into a single unit than as separately modifiable forward and backward associations linking representations in memory.
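The differentially weighted forward and backward associations can be sketched as two Hebbian weight blocks linking the two halves of one recurrent pattern; `gamma_f` and `gamma_b` below are illustrative parameter names, not the paper's notation:

```python
# Heteroassociation with separately weighted forward (A -> B) and backward
# (B -> A) strengths. With gamma_f == gamma_b this reduces to a symmetric
# autoassociation of the concatenated pattern [a; b].

def sign(v):
    return 1 if v >= 0 else -1

def store_pair(a, b, gamma_f, gamma_b):
    n = len(a)
    W_ba = [[gamma_f * b[i] * a[j] for j in range(n)] for i in range(n)]  # A -> B
    W_ab = [[gamma_b * a[i] * b[j] for j in range(n)] for i in range(n)]  # B -> A
    return W_ba, W_ab

def recall_with(W, cue):
    return [sign(sum(W[i][j] * cue[j] for j in range(len(cue))))
            for i in range(len(W))]

a = [1, -1, 1, -1]
b = [1, 1, -1, -1]
W_ba, W_ab = store_pair(a, b, gamma_f=1.0, gamma_b=0.5)
print(recall_with(W_ba, a))  # -> [1, 1, -1, -1]  (recalls b from a)
print(recall_with(W_ab, b))  # -> [1, -1, 1, -1]  (recalls a from b, weaker weights)
```

Both directions recall correctly here; the point of the weighting is that unequal strengths degrade one direction more than the other under noise, matching behavioral asymmetries.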

7.
Associative memory networks are feedback neural networks. Because a feedback network converges to a stable state, it can be used for associative memory. Neural networks offer a high degree of parallel processing and a strong nonlinear mapping capability, and can realize the complex nonlinear mapping between faults and their symptoms, so they show great application potential in mechanical fault diagnosis. Modeling the human brain's ability to recall a whole from a partial memory, this paper introduces an associative-memory decay factor, improves the network structure and learning algorithm, and applies the result to system fault diagnosis.

8.
The objective of this paper is to resolve important issues in artificial neural nets: exact recall and capacity in multilayer associative memories. These problems have imposed restrictions on coding strategies. We propose the following triple-layered hybrid neural network: the first synapse is a one-shot associative memory using the modified Kohonen's adaptive learning algorithm with arbitrary input patterns; the second is Kosko's bidirectional associative memory consisting of orthogonal input/output basis vectors, such as Walsh series satisfying the strict continuity condition; and the third is a simple one-shot associative memory with arbitrary output images. A mathematical framework based on the relationship between energy local minima (capacity of the neural net) and noise-free recall is established. The robust capacity conditions of this multilayer associative neural network that lead to forming the local minima of the energy function at the exact training pairs are derived. The chosen strategy not only maximizes the total number of stored images but also completely relaxes any code-dependent conditions on the learning pairs.
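The middle stage's Kosko bidirectional associative memory can be sketched as follows (a textbook BAM with orthogonal cue vectors, not the paper's full triple-layer system):

```python
# Kosko-style BAM: W = sum_k y_k x_k^T; recall bounces between the x and y
# layers until a stable pair is reached. Zero sums keep the previous state.

def threshold(sums, prev):
    return [1 if s > 0 else (-1 if s < 0 else p) for s, p in zip(sums, prev)]

def bam_matrix(pairs):
    n, m = len(pairs[0][0]), len(pairs[0][1])
    return [[sum(y[i] * x[j] for x, y in pairs) for j in range(n)]
            for i in range(m)]

def bam_recall(W, x, steps=5):
    m, n = len(W), len(W[0])
    y = [1] * m
    for _ in range(steps):
        y = threshold([sum(W[i][j] * x[j] for j in range(n)) for i in range(m)], y)
        x = threshold([sum(W[i][j] * y[i] for i in range(m)) for j in range(n)], x)
    return x, y

pairs = [([1, -1, 1, -1], [1, 1, -1, -1]),   # orthogonal input vectors, in the
         ([1, 1, -1, -1], [1, -1, -1, 1])]   # spirit of the paper's Walsh basis
W = bam_matrix(pairs)
x, y = bam_recall(W, [1, -1, 1, -1])
print(y)  # -> [1, 1, -1, -1]
```

With orthogonal cue vectors, each stored pair is an exact fixed point, which is the noise-free recall property the paper's capacity conditions formalize.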

9.
In this paper, we propose a star-like weakly connected memristive neural network organized so that each cell interacts only with the central cells. Using the describing function method and Malkin's theorem, the phase deviation of this dynamical network is obtained. Then, under the Hebbian learning rule, the phase deviation is designed as a desired model for associative memory. Moreover, we take the storage and recall of digital images as an example to demonstrate the performance of associative memory. The main contribution of this paper is to supply a useful mechanism by which the memristor, a new potential circuit element, can be used to realize associative memory.

10.
A novel neural network is proposed in this paper for realizing associative memory. Its main advantage is that a state is stored as an asymptotically stable equilibrium point if and only if it is a prototype pattern. Furthermore, the basin of attraction of each desired memory pattern is distributed reasonably (in the Hamming distance sense), and an equilibrium point that is not asymptotically stable is precisely a state that cannot be recognized. The proposed network also has a high storage capacity as well as the capability of learning and forgetting, and all its components can be implemented. The network considered is a very simple linear system with a projection onto a closed convex set spanned by the prototype patterns. The advanced performance of the proposed network is demonstrated by simulation of a numerical example.
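The projection operation at the heart of this design can be sketched as follows; for simplicity the sketch projects onto the span of mutually orthogonal prototypes rather than the paper's closed convex set:

```python
# Recall as projection onto the space spanned by prototype patterns,
# followed by thresholding. A simplification of the paper's convex-set
# projection, valid here because the prototypes are orthogonal.

def project(prototypes, x):
    out = [0.0] * len(x)
    for p in prototypes:
        c = sum(pi * xi for pi, xi in zip(p, x)) / sum(pi * pi for pi in p)
        out = [o + c * pi for o, pi in zip(out, p)]   # add component along p
    return out

prototypes = [[1, -1, 1, -1], [1, 1, -1, -1]]
x = [0.9, -1.2, 1.1, -0.8]                            # noisy first prototype
recalled = [1 if v >= 0 else -1 for v in project(prototypes, x)]
print(recalled)  # -> [1, -1, 1, -1]
```

Because the projection maps every input into the prototype-spanned set, states outside that set cannot be stable, which is the mechanism behind the if-and-only-if storage property.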

11.
Attractor networks have been one of the most successful paradigms in neural computation, and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors' where attractors embedded in a recurrent network via Hebbian learning are used to channel network response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive: K winners take all. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to do a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of capacity limits and dynamics of latent attractor networks. The technique represents a general approach to analyse standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically by iteratively computing the probability of erroneous firings.
The analysis covers two cases: the simple-case analysis, which accounts for the correlations between weights due to shared patterns, and the detailed-case analysis, which also includes the temporal correlations between the network's present and previous states. The latter better predicts the dynamics of the network state for non-zero initial spurious firing. The theoretical analysis also shows the influence of the main parameters of the model on the storage capacity.
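The competitive K-winners-take-all firing rule used in both layers can be sketched as:

```python
# K-winners-take-all: only the K units receiving the largest dendritic sums
# fire. Ties are broken by index order (a minimal sketch of the rule above).

def kwta(sums, k):
    winners = sorted(range(len(sums)), key=lambda i: sums[i], reverse=True)[:k]
    return [1 if i in winners else 0 for i in range(len(sums))]

print(kwta([0.2, 1.5, -0.3, 0.9, 1.1], 2))  # -> [0, 1, 0, 0, 1]
```

In the latent attractor model, K is smaller than the active set of the stored attractors, so the attractors bias which units win the competition without ever being fully expressed themselves.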

12.
Chaos is the intrinsic random-like behavior, lying between regularity and randomness, that a completely deterministic system exhibits without any external random factors. The brain's nervous system is a network of nerve cells. Compared with von Neumann computers, artificial neural networks that mimic human thinking have great advantages in information processing. Research on the fusion of chaos and neural networks began in the 1990s. Its main goal is, by analyzing chaotic phenomena in the brain, to build neural network models containing chaotic dynamics (chaotic neural network models), combining the ergodicity and sensitivity to initial conditions of chaos with the nonlinearity, adaptivity, and parallel processing of neural networks.
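The sensitivity to initial conditions mentioned above is easy to demonstrate with the logistic map, the textbook deterministic system with chaotic behavior (an illustration of the general idea, not a chaotic neural network model):

```python
# Two logistic-map trajectories started 1e-7 apart: with no external random
# factors, the fully deterministic dynamics still drives them order-one apart.

def diverge(x0, y0, r=4.0, steps=50):
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x, y = r * x * (1 - x), r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

print(diverge(0.3, 0.3000001) > 0.5)  # the tiny initial gap becomes large
```

Identical initial conditions stay identical forever; any measurement error, however small, eventually dominates, which is precisely the "intrinsic randomness" of a deterministic system.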

13.
Most bidirectional associative memory (BAM) networks use a symmetrical output function for dual fixed-point behavior. In this paper, we show that by introducing an asymmetry parameter into a recently introduced chaotic BAM output function, prior knowledge can be used to momentarily disable desired attractors from memory, hence biasing the search space to improve recall performance. This property allows control of chaotic wandering, favoring given subspaces over others. In addition, reinforcement learning can then enable a dual BAM architecture to store and recall nonlinearly separable patterns. Our results allow the same BAM framework to model three different types of learning: supervised, reinforcement, and unsupervised. This ability is very promising from the cognitive modeling viewpoint. The new BAM model is also useful from an engineering perspective; our simulation results reveal a notable overall increase in BAM learning and recall performance when using a hybrid model with the general regression neural network (GRNN).

14.
A neural network consisting of a gallery of independent subnetworks is developed for associative memory which stores and recalls gray-scale images. Each original image is encoded by a unique stable state of one of the recurrent subnetworks. Compared to Amari-Hopfield associative memory, our solution has no spurious states, is less sensitive to noise, and its network complexity is significantly lower. Computer simulations confirm that associative recall in this system for images of natural scenes is very robust. Colored additive and multiplicative noise with standard deviation up to σ = 2 can be removed perfectly from a normalized image. The same observations hold for spiky noise distributed over up to 70% of the image area. Even if we remove up to 95% of the pixels from the original image in a deterministic or random way, the network still performs the correct association.

15.
We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially on synapses from excitatory pyramidal cells, in hippocampus, and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity, in which the network is clamped to external teaching signals. By suitable manipulation of the nonlinearity of the neurons or the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of olfactory cortex and the second with the observed response characteristics of place cells in hippocampus. We investigate both kinds of networks analytically and by computer simulations, and we link the models with experimental findings, exploring, in particular, how the spike timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.
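The spike-timing dependence of plasticity referred to here is commonly modeled by an asymmetric exponential window; a sketch with illustrative parameter values (not the paper's generalized rule):

```python
import math

# Asymmetric STDP window: pre-before-post (dt > 0) potentiates the synapse,
# post-before-pre (dt < 0) depresses it. Amplitudes and time constants are
# illustrative, not fitted values.

def stdp(dt, a_plus=1.0, a_minus=1.0, tau_plus=20.0, tau_minus=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # LTP branch
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)  # LTD branch
    return 0.0

print(stdp(10.0) > 0, stdp(-10.0) < 0)  # True True
```

In an oscillatory network, the phase at which a neuron fires relative to its inputs determines the sign of dt, which is how the oscillation frequency shapes what this rule imprints.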

16.
This paper presents a new unsupervised attractor neural network which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model develops fewer spurious attractors and has better recall performance under random noise than any other Hopfield-type neural network. These performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

17.
Learning from neural control
One of the amazing successes of biological systems is their ability to "learn by doing" and so adapt to their environment. In this paper, first, a deterministic learning mechanism is presented, by which an appropriately designed adaptive neural controller is capable of learning closed-loop system dynamics during tracking control to a periodic reference orbit. Among various neural network (NN) architectures, the localized radial basis function (RBF) network is employed. A property of persistence of excitation (PE) for RBF networks is established, and a partial PE condition of closed-loop signals, i.e., the PE condition of a regression subvector constructed out of the RBFs along a periodic state trajectory, is proven to be satisfied. Accurate NN approximation for closed-loop system dynamics is achieved in a local region along the periodic state trajectory, and a learning ability is implemented during a closed-loop feedback control process. Second, based on the deterministic learning mechanism, a neural learning control scheme is proposed which can effectively recall and reuse the learned knowledge to achieve closed-loop stability and improved control performance. The significance of this paper is that the presented deterministic learning mechanism and the neural learning control scheme provide elementary components toward the development of a biologically-plausible learning and control methodology. Simulation studies are included to demonstrate the effectiveness of the approach.

18.
A dynamically reconfigurable bit-serial systolic array implemented in 1.2-μm double-metal P-well CMOS is described. This processor array is proposed as the central computational unit in the Reconfigurable Systolic Array (RSA) neuro-computer and performance estimates suggest that a 64 IC system (containing a total of 1024 usable processors) can achieve a learning rate of 1134 MCUPS on the NETtalk problem. The architecture employs reconfiguration techniques for both fault-tolerance and functionality, and allows a number of neural network models (in both the recall and learning phases) from associative memory networks, supervised networks, and unsupervised networks to be supported.

19.
An associative neural network whose architecture is greatly influenced by biological data is described. The proposed neural network is significantly different in architecture and connectivity from previous models. Its emphasis is on high parallelism and modularity. The network connectivity is enriched by recurrent connections within the modules. Each module is, effectively, a Hopfield net. Connections within a module are plastic and are modified by associative learning. Connections between modules are fixed and thus not subject to learning. Although the network is tested with character recognition, it cannot be directly used as such for real-world applications. It must be incorporated as a module in a more complex structure. The architectural principles of the proposed network model can be used in the design of other modules of a whole system. Its architecture is such that it constitutes a good mathematical prototype to analyze the properties of modularity, recurrent connections, and feedback. The model does not make any contribution to the subject of learning in neural networks.

20.
The brain is not a huge fixed neural network, but a dynamic, changing neural network that continuously adapts to meet the demands of communication and computation. In classical neural network approaches, particularly associative memory models, synapses are adjusted only during the training phase; after this phase, they are no longer modified. In this paper we describe a new dynamical model in which the synapses of the associative memory can be adjusted even after the training phase, as a response to an input stimulus. We provide propositions that guarantee perfect and robust recall of the fundamental set of associations. In addition, we describe the behavior of the proposed associative model under noisy versions of the patterns. Finally, we present experiments that demonstrate the accuracy of the proposed model.
