Similar articles
20 similar articles found (search time: 46 ms)
1.
In this article we revisit the classical neuroscience paradigm of Hebbian learning. We find that effective associative memory storage is difficult to achieve by Hebbian synaptic learning alone, since it requires either network-level information at each synapse or sparse coding. Effective learning can nevertheless be achieved, even with nonsparse patterns, by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This weight correction improves the memory capacity of associative networks from an essentially bounded one to a capacity that scales linearly with network size. It also enables the effective storage of patterns with multiple levels of activity within a single network. Such neuronal weight correction can be carried out by activity-dependent homeostasis of the neuron's synaptic efficacies, which has recently been observed in cortical tissue. Our findings thus suggest that associative learning by Hebbian synaptic plasticity should be accompanied by continuous remodeling of neuronally driven regulatory processes in the brain.
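
A minimal sketch of the zero-sum correction described above, assuming a plain outer-product Hebbian rule (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def hebbian_store_zero_sum(patterns, eta=1.0):
    """Hebbian storage with a zero-sum correction: after every update,
    each neuron's incoming efficacies are shifted so that they sum to
    zero. Names and details are illustrative, not the paper's code."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += eta * np.outer(p, p)           # plain Hebbian outer-product term
        W -= W.mean(axis=1, keepdims=True)  # zero-sum correction (rows = incoming weights)
    return W

rng = np.random.default_rng(0)
pats = rng.choice([0.0, 1.0], size=(5, 50), p=[0.6, 0.4])  # nonsparse binary patterns
W = hebbian_store_zero_sum(pats)
print(np.allclose(W.sum(axis=1), 0.0))  # True: each neuron's incoming weights sum to zero
```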

2.
Reward-based learning in neural systems is challenging because a large number of parameters that affect network function must be optimized solely on the basis of a reward signal that indicates improved performance. Searching the parameter space for an optimal solution is particularly difficult if the network is large. We show that Hebbian forms of synaptic plasticity, applied to synapses between a supervisor circuit and the network it controls, can effectively reduce the dimension of the parameter space being searched and thereby support efficient reinforcement-based learning in large networks. The critical element is that the connections between the supervisor units and the network must be reciprocal. Once the appropriate connections have been set up by Hebbian plasticity, a reinforcement-based learning procedure leads to rapid learning in a function approximation task. Hebbian plasticity within the supervised network ultimately allows it to perform the task without input from the supervisor.

3.
Fiori S. Neural Computation, 2005, 17(4): 779-838
The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies interesting properties such as locality and applicability to the basic weighted-sum structure of neuron models. The plain Hebbian principle, however, presents inherent theoretical limitations that make it impractical in most cases. Modifications of the basic Hebbian learning paradigm have therefore been proposed over the past 20 years to design practical signal and data processing algorithms. These modifications led to the class of principal component analysis (PCA) learning rules, along with their nonlinear extensions. The aim of this review is primarily to present part of the existing fragmented material in the field of principal component learning within a unified view, and in that context to motivate and present extensions of previous work on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction error definition, neural parameter adaptation by constrained optimization of learning criteria with complex-valued arguments, and orthonormality expressed either through topological elements inserted in the networks or through modification of the network learning criterion. In particular, the learning principles considered here, and their analysis, concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.
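
For concreteness, a minimal real-valued sketch of the best-known member of the PCA class of Hebbian rules reviewed here, Oja's rule (constants and names are illustrative):

```python
import numpy as np

def oja_rule(X, eta=0.01, epochs=50, seed=1):
    """Oja's rule: a Hebbian update with a self-normalizing decay term.
    For small eta, w converges toward the leading principal component."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)  # Hebbian term y*x minus decay y^2 * w
    return w / np.linalg.norm(w)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [0.0, 1.0]])  # anisotropic data
print(oja_rule(X))  # approximately +/- [1, 0], the direction of largest variance
```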

4.
We present a new learning algorithm that leverages oscillations in the strength of neural inhibition to train neural networks. Raising inhibition identifies weak parts of target memories, which are then strengthened. Conversely, lowering inhibition identifies competitors, which are then weakened. To update weights, we apply the Contrastive Hebbian Learning equation to successive time steps of the network; the sign of the weight-change equation varies as a function of the phase of the inhibitory oscillation. We show that the learning algorithm can memorize large numbers of correlated input patterns without collapsing and that it generalizes well to test patterns that do not exactly match studied patterns.
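
A rough sketch of such an oscillation-gated contrastive update, assuming that successive network states play the roles of the two Contrastive Hebbian phases (the gating and constants below are assumptions, not the paper's exact equations):

```python
import numpy as np

def oscillating_chl(W, states, inhib_phase, eta=0.05):
    """Contrastive-Hebbian-style update over successive time steps: each
    pair of states contributes (later minus earlier) outer products, and
    the sign of the update follows the phase of the inhibitory
    oscillation. The gating and constants here are assumptions."""
    for t in range(len(states) - 1):
        sign = 1.0 if inhib_phase[t] >= 0 else -1.0
        W += sign * eta * (np.outer(states[t + 1], states[t + 1])
                           - np.outer(states[t], states[t]))
    return W

rng = np.random.default_rng(0)
states = rng.random((8, 10))                  # successive activation vectors
phase = np.sin(np.linspace(0, 2 * np.pi, 8))  # one cycle of the inhibitory oscillation
W = oscillating_chl(np.zeros((10, 10)), states, phase)
```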

5.
Two important issues in computational modelling in cognitive neuroscience are, first, how to formally describe neuronal networks (i.e. biologically plausible models of the central nervous system), and second, how to analyse complex models, in particular their dynamics and capacity to learn. We make progress towards these goals by presenting a communicating-automata perspective on neuronal networks. Specifically, we describe neuronal networks and their biological mechanisms using Data-rich Communicating Automata, which extend classic automata theory with rich data types and communication. We use two case studies to illustrate our approach. In the first, we model a number of learning frameworks that vary in their degree of biological detail, for instance the Backpropagation (BP) and Generalized Recirculation (GeneRec) learning algorithms. We then use the SPIN model checker to investigate a number of behavioral properties of the neural learning algorithms. SPIN is a well-known model checker for reactive distributed systems that has been successfully applied to many non-trivial problems. The verification results show that the biologically plausible GeneRec learning is less stable than BP learning. In the second case study, we present a large-scale (cognitive-level) neuronal network that models an attentional spotlight mechanism in the visual system. A set of properties of this model is verified using Uppaal, a popular real-time model checker. The results show that the asynchronous processing supported by concurrency theory is not only a more biologically plausible way to model neural systems, but also provides better performance in cognitive modelling of the brain than conventional artificial neural networks that use synchronous updates. Finally, we compare our approach with several other related theories that apply formal methods to cognitive modelling, and we discuss the practical implications of the approach in the context of neuronal-network-based controllers.

6.
We demonstrate that spiking neural networks that encode information in the timing of single spikes can compute and learn clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on real-world data, and we demonstrate how temporal synchrony in a multilayer network can induce hierarchical clustering. We develop a temporal encoding of continuously valued data that obtains adjustable clustering capacity and precision with an efficient use of neurons: input variables are encoded in a population code by neurons with graded and overlapping sensitivity profiles. We also discuss methods for enhancing the scale sensitivity of the network and show how the induced synchronization of neurons within early RBF layers allows for the subsequent detection of complex clusters.
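
A minimal sketch of such a population code, assuming Gaussian sensitivity profiles whose activations are converted into spike times (all constants are illustrative):

```python
import numpy as np

def population_spike_times(x, n_neurons=8, t_max=10.0):
    """Encode a scalar in [0, 1] as spike times of neurons with graded,
    overlapping Gaussian sensitivity profiles: the closer x lies to a
    neuron's preferred value, the earlier that neuron fires."""
    centers = np.linspace(0.0, 1.0, n_neurons)
    sigma = 1.0 / (n_neurons - 1)               # neighboring profiles overlap
    activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return t_max * (1.0 - activation)           # strong activation -> early spike

print(np.round(population_spike_times(0.3), 2))  # earliest spikes from neurons tuned near 0.3
```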

7.
Many computational models mimicking the visual cortex are based on spatial adaptations of unsupervised neural networks. In this paper, we present a new model, the neuronal cluster, which includes both spatial and temporal weights in a unified adaptation scheme. The "in-place" nature of the model rests on two biologically plausible learning rules, the Hebbian rule and lateral inhibition. We demonstrate mathematically that the temporal weights are derived from the delay in lateral inhibition. Trained on natural videos, the model develops spatiotemporal features such as orientation-selective cells, motion-sensitive cells, and spatiotemporal complex cells. The unified nature of the adaptation scheme allows us to construct a multilayered, task-independent attention-selection network that uses the same learning rule for edge, motion, and color detection, and we use this network for attention selection in both static and dynamic scenes.

8.
Using standard results from the adaptive signal processing literature, we review the learning behavior of various constrained linear neural networks made up of anti-Hebbian synapses, in which learning is driven by the criterion of minimizing the node information energy. We point out how simple Hebbian-type learning rules can provide fast self-organization under rather wide connectivity constraints. We verify the results of the theory in a set of simulations.
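
A minimal sketch of an anti-Hebbian lateral update of the kind reviewed above (the learning rate and names are assumptions):

```python
import numpy as np

def anti_hebbian_step(V, y, eta=0.01):
    """One anti-Hebbian update of lateral weights V for output vector y:
    the sign is opposite to the Hebbian rule, so correlated outputs come
    to inhibit each other, driving the nodes toward decorrelation."""
    V = V - eta * np.outer(y, y)
    np.fill_diagonal(V, 0.0)  # no self-connections
    return V
```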

9.
This paper studies Hebbian learning in linear neural networks, with emphasis on the self-association information principle. In one-layer networks, this criterion leads to the space of the principal components and can be generalized to arbitrary architectures. The self-association paradigm appears very promising because it accounts for the fundamental features of Hebbian synaptic learning and generalizes the various techniques proposed for adaptive principal component networks. The authors also include a set of simulations comparing various neural architectures and algorithms.

10.
Temporal album     
Transient synchronization has been used as a mechanism for recognizing auditory patterns with integrate-and-fire neural networks. We first extend the mechanism to vision tasks and investigate the role of spike-dependent learning. We show that such a temporal Hebbian learning rule significantly improves detection accuracy. We demonstrate how multiple patterns can be identified by a single pattern-selective neuron and how a temporal album can be constructed. This principle may lead to multidimensional memories, in which the capacity per neuron is considerably increased while spike synchronization is still detected accurately.

11.
We present a hybrid learning method bridging recurrent neural networks, unsupervised Hebbian learning, vector quantization, and supervised learning to implement a sophisticated image- and feature-segmentation architecture. The architecture is based on the competitive layer model (CLM), a dynamic feature-binding model applicable to a wide range of perceptual grouping and segmentation problems. A predefined target segmentation can be realized as an attractor state of this linear threshold recurrent network if the lateral weights are chosen by Hebbian learning. The weight matrix is given by the correlation matrix of special pattern vectors whose structure depends on the target labeling. Generalization is achieved by applying vector quantization to pairwise feature relations, such as proximity and similarity, defined by external knowledge. We show the successful application of the method to a number of artificial test examples and to a medical image segmentation problem involving fluorescence microscope images of cells.

12.
Computational models in cognitive neuroscience should ideally use biological properties and powerful computational principles to produce behavior consistent with psychological findings. Error-driven backpropagation is computationally powerful and has proven useful for modeling a range of psychological data, but it is not biologically plausible. Several approaches to implementing backpropagation in a biologically plausible fashion converge on the idea of using bidirectional activation propagation in interactive networks to convey error signals. This article demonstrates two main points about these error-driven interactive networks: (1) they generalize poorly, because attractor dynamics interfere with the network's ability to produce novel combinatorial representations systematically in response to novel inputs; and (2) this generalization problem can be remedied by adding two widely used mechanistic principles, inhibitory competition and Hebbian learning, that can be independently motivated for a variety of biological, psychological, and computational reasons. Simulations using the Leabra algorithm, which combines the biologically plausible, error-driven generalized recirculation (GeneRec) learning algorithm with inhibitory competition and Hebbian learning, show that these mechanisms can produce good generalization in interactive networks. These results support the general conclusion that cognitive neuroscience models incorporating the core mechanistic principles of interactivity, inhibitory competition, and error-driven and Hebbian learning satisfy a wider range of biological, psychological, and computational constraints than models employing a subset of these principles.
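
A simplified sketch of the inhibitory-competition ingredient, here as plain k-winners-take-all (Leabra's actual kWTA instead sets an inhibition level between the k-th and (k+1)-th most excited units):

```python
import numpy as np

def kwta(activations, k):
    """k-winners-take-all: only the k most active units keep a nonzero
    output. A simplified stand-in for Leabra's inhibitory competition,
    not its exact formulation."""
    out = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]  # indices of the k strongest units
    out[winners] = activations[winners]
    return out

print(kwta(np.array([0.1, 0.9, 0.4, 0.7, 0.2]), k=2))  # only 0.9 and 0.7 survive
```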

13.
Matsumoto N, Okada M. Neural Computation, 2002, 14(12): 2883-2902
Recent biological experiments have shown that synaptic plasticity depends on the relative timing of pre- and postsynaptic spikes, which determines whether long-term potentiation (LTP) or long-term depression (LTD) is induced. This form of synaptic plasticity has been called temporally asymmetric Hebbian plasticity (TAH). Many authors have demonstrated numerically that neural networks with this form of plasticity are capable of storing spatiotemporal patterns. However, the mathematical mechanism behind the storage of spatiotemporal patterns is still unknown, and the effect of LTD in particular remains unclear. In this article, we employ a simple neural network model and show that interference between LTP and LTD disappears in a sparse coding scheme. The covariance learning rule, on the other hand, is known to be indispensable for the storage of sparse patterns. We also show that TAH has the same qualitative effect as the covariance rule when spatiotemporal patterns are embedded in the network.
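
A minimal sketch of a TAH (spike-timing-dependent) learning window of the kind described above, with illustrative constants:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Temporally asymmetric Hebbian window (constants are illustrative):
    pre-before-post (delta_t > 0) yields LTP, post-before-pre yields LTD,
    with exponential decay in the spike-time difference."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

print(stdp_dw(np.array([-30.0, -5.0, 5.0, 30.0])))  # LTD for negative lags, LTP for positive
```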

14.
This paper proposes a linear neural network for principal component analysis whose weight-vector lengths converge to the variances of the principal components of the input data. The network breaks the symmetry in its learning process through the differences in weight-vector lengths and, unlike other linear neural networks described in the literature, does not need to assume any asymmetries in its structure to extract the principal components. We prove the asymptotic stability of a stationary solution of the network's learning equation. Simulations show that the set of weight vectors converges to this solution. Comparisons of convergence speed show that in the simulations the proposed network is about as fast as Sanger's generalized Hebbian algorithm (GHA) network, the weighted subspace rule network of Oja et al., and Xu's LMSER network (weighted linear version).
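
For reference, a compact sketch of one of the baselines named above, Sanger's GHA (the learning rate and initialization are illustrative):

```python
import numpy as np

def gha(X, n_components, eta=0.001, epochs=100, seed=2):
    """Sanger's generalized Hebbian algorithm: a Hebbian term plus a
    Gram-Schmidt-like decay extracts the leading principal components
    in order."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * np.array([3.0, 2.0, 0.5])  # axis-aligned variances
print(np.round(gha(X, n_components=2), 2))  # rows approach +/- unit vectors e1, e2
```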

15.
Proulx and Begin (1995) explained the power of a learning rule that combines Hebbian and anti-Hebbian learning in unsupervised auto-associative neural networks. Combined with the brain-state-in-a-box transmission rule, this learning rule defines a new model of categorization: the Eidos model. To test this model, a simulated neural network composed of 35 interconnected units is subjected to an alphabetic character recognition task. The results indicate the necessity of adding two parameters to the model: a restraining parameter and a forgetting parameter. The study shows the outstanding capacity of the model to categorize highly altered stimuli after suitable learning. The Eidos model thus seems to be an interesting option for achieving categorization in unsupervised neural networks.
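
A minimal sketch of the brain-state-in-a-box transmission rule on which the Eidos model relies (parameters are illustrative):

```python
import numpy as np

def bsb_recall(W, x0, alpha=0.3, steps=50):
    """Brain-state-in-a-box transmission rule: iterate the linear
    dynamics and clip the state to the [-1, 1] hypercube until it
    settles in a corner (the category)."""
    x = x0.copy()
    for _ in range(steps):
        x = np.clip(x + alpha * (W @ x), -1.0, 1.0)
    return x

W = np.array([[0.0, 0.5], [0.5, 0.0]])
print(bsb_recall(W, np.array([0.2, -0.1])))  # settles in the [1, 1] corner
```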

16.
On the discrete-time dynamics of the basic Hebbian neural network node
In this paper, the dynamical behavior of the basic node used to construct Hebbian artificial neural networks (NNs) is analyzed. Hebbian NNs are employed in communications and signal processing applications, among others. They have traditionally been studied in a continuous-time formulation whose validity is justified via analytical procedures that presume, among other hypotheses, a specific asymptotic behavior of the learning gain. The main contribution of this paper is the study of a deterministic discrete-time (DDT) formulation that characterizes the average evolution of the node, preserving the discrete-time form of the original network and capturing a more realistic behavior of the learning gain. The new DDT model yields instability results (critical in the case of large, similar-variance signals) that differ drastically from those known for the continuous-time formulation. Simulation examples support the presented results, illustrating the practical limitations of the basic Hebbian model.
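
A small illustration of the kind of deterministic discrete-time dynamics analyzed above, assuming a constant learning gain (matrix and constants are illustrative):

```python
import numpy as np

def ddt_hebbian_norms(C, eta, steps=200, seed=3):
    """Deterministic discrete-time average dynamics of the plain Hebbian
    node, w(k+1) = w(k) + eta * C * w(k), with a constant learning gain:
    the weight norm grows without bound, illustrating the instability
    discussed above. C stands for the input correlation matrix."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=C.shape[0])
    norms = []
    for _ in range(steps):
        w = w + eta * (C @ w)
        norms.append(np.linalg.norm(w))
    return norms

C = np.array([[1.0, 0.8], [0.8, 1.0]])    # two similar-variance, correlated inputs
print(ddt_hebbian_norms(C, eta=0.1)[-1])  # very large: the weights diverge
```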

17.
In this paper we study the optimization of stochastic Hopfield neural networks and of a hybrid SOM–Hopfield neural network for storing and recalling fingerprint images. Features are extracted from the images using the FFT, the DWT, and SOM. The feature vectors are stored in the Hopfield network with Hebbian and modified pseudoinverse learning rules. The study explores how simulated annealing can reduce the effect of spurious minima during recall. The simulations show that the capabilities of the Hopfield network can be substantially enhanced by modifying the feature extraction applied to the input data: DWT and SOM together significantly improve recall efficiency. The probability of recall errors in the form of spurious minima is minimized by adopting simulated annealing in the pattern recall process.
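
A minimal sketch of the Hebbian storage and recall baseline that the study builds on, assuming bipolar patterns and synchronous updates (details are illustrative):

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian (outer-product) storage of bipolar patterns."""
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def hopfield_recall(W, probe, steps=20):
    """Synchronous recall: iterate sign(W x) from a noisy probe."""
    x = probe.copy()
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))    # three bipolar patterns
W = hopfield_store(patterns)
probe = patterns[0].copy()
probe[rng.choice(64, size=8, replace=False)] *= -1  # corrupt 8 of 64 bits
print(np.array_equal(hopfield_recall(W, probe), patterns[0]))  # typically True
```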

18.
Senn W, Fusi S. Neural Computation, 2005, 17(10): 2106-2138
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses; a synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performance with unbounded weights is regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove, in the form of a generalized perceptron convergence theorem, that under these constraints a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response and therefore silence neurons that cannot provide any information.
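
A rough sketch of one step of this learning rule (names and constants are assumptions; global inhibition and the small-threshold condition are not modeled here):

```python
import numpy as np

def bounded_hebbian_step(w, x, target, theta=0.0, eta=0.01, w_max=1.0):
    """One step of the rule sketched above: excitatory weights stay in
    [0, w_max] and are changed (Hebbian, error-gated) only when the
    postsynaptic response does not match the desired output."""
    y = 1 if w @ x > theta else 0
    if y != target:
        w = w + eta * (1.0 if target == 1 else -1.0) * x  # potentiate or depress active inputs
        w = np.clip(w, 0.0, w_max)                        # synapses remain bounded, excitatory
    return w
```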

19.
In a great variety of neuron models, neural inputs are combined by summation. We introduce the concept of multiplicative neural networks, which contain units that multiply their inputs instead of summing them and thus allow inputs to interact nonlinearly. The class of multiplicative neural networks comprises such widely known and well-studied network types as higher-order networks and product-unit networks. We investigate the complexity of computing and learning for multiplicative neural networks. In particular, we derive upper and lower bounds on the Vapnik-Chervonenkis (VC) dimension and the pseudo-dimension for various types of networks with multiplicative units. As the most general case, we consider feedforward networks consisting of product and sigmoidal units, showing that their pseudo-dimension is bounded from above by a polynomial of the same order of magnitude as the currently best-known bound for purely sigmoidal networks. Moreover, we show that this bound holds even when the unit type, product or sigmoidal, may be learned. Crucial for these results are bounds on the number of components of the solution sets for the new network classes. As to lower bounds, we construct product-unit networks of fixed depth with super-linear VC dimension. For sigmoidal networks of higher order, we establish polynomial bounds that, in contrast to previous results, do not involve any restriction on the network order. We further consider various classes of higher-order units, also known as sigma-pi units, characterized by connectivity constraints, and in terms of these we derive some asymptotically tight bounds. Multiplication plays an important role both in neural modeling of biological behavior and in computing and learning with artificial neural networks. We briefly survey research in biology and in applications where multiplication is considered an essential computational element. The results presented here provide new tools for assessing the impact of multiplication on the computational power and learning capabilities of neural networks.
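
A brief sketch of the two multiplicative unit types discussed above (names are illustrative):

```python
import numpy as np

def product_unit(x, w):
    """A product unit computes prod_i x_i**w_i rather than sum_i w_i*x_i,
    letting inputs interact nonlinearly (inputs assumed positive here)."""
    return np.prod(x ** w)

def sigma_pi_unit(x, monomials, coeffs):
    """A higher-order (sigma-pi) unit: a weighted sum of input monomials."""
    return sum(c * np.prod(x[list(m)]) for m, c in zip(monomials, coeffs))

x = np.array([2.0, 3.0, 0.5])
print(product_unit(x, np.array([1.0, 2.0, -1.0])))   # 2 * 9 / 0.5 = 36
print(sigma_pi_unit(x, [(0, 1), (2,)], [1.0, 4.0]))  # 1*(2*3) + 4*(0.5) = 8
```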

20.
We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as the hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially at synapses from excitatory pyramidal cells in the hippocampus and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity in which the network is clamped to external teaching signals. By suitably manipulating the nonlinearity of the neurons or the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of the olfactory cortex and the second with the observed response characteristics of place cells in the hippocampus. We investigate both kinds of networks analytically and by computer simulation, and we link the models with experimental findings, exploring in particular how the spike-timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.
