Similar Articles
20 similar articles found (search time: 0 ms)
1.
Fiori S. Neural Computation, 2005, 17(4): 779-838
The Hebbian paradigm is perhaps the best-known unsupervised learning theory in connectionism. It has inspired wide research activity in the artificial neural network field because it embodies some interesting properties such as locality and the capability of being applicable to the basic weight-and-sum structure of neuron models. The plain Hebbian principle, however, also presents some inherent theoretical limitations that make it impractical in most cases. Therefore, modifications of the basic Hebbian learning paradigm have been proposed over the past 20 years in order to design profitable signal and data processing algorithms. Such modifications led to the principal component analysis type class of learning rules along with their nonlinear extensions. The aim of this review is primarily to present part of the existing fragmented material in the field of principal component learning within a unified view and contextually to motivate and present extensions of previous works on Hebbian learning to complex-weighted linear neural networks. This work benefits from previous studies on linear signal decomposition by artificial neural networks, nonquadratic component optimization and reconstruction error definition, neural parameters adaptation by constrained optimization of learning criteria of complex-valued arguments, and orthonormality expression via the insertion of topological elements in the networks or by modifying the network learning criterion. In particular, the learning principles considered here and their analysis concern complex-valued principal/minor component/subspace linear/nonlinear rules for complex-weighted neural structures, both feedforward and laterally connected.
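The complex-weighted principal component rules surveyed above can be illustrated by a minimal sketch of a complex-valued Oja-type rule, in which a single linear neuron y = w^H x extracts the principal eigenvector of the input covariance. The parameter values and data model below are illustrative assumptions, not taken from the review.

```python
import numpy as np

def complex_oja_step(w, x, eta=0.02):
    """One Oja-rule update for a complex-weighted linear neuron y = w^H x.

    Hebbian term conj(y)*x grows the weight along the principal direction;
    the -|y|^2 w decay keeps ||w|| near 1. A minimal illustrative form;
    the review covers many richer variants."""
    y = np.vdot(w, x)                                   # w^H x (vdot conjugates w)
    return w + eta * (np.conj(y) * x - np.abs(y) ** 2 * w)

rng = np.random.default_rng(0)
# synthetic data whose principal component lies along the complex direction v
v = np.array([1.0 + 1.0j, 0.5 - 0.5j])
v /= np.linalg.norm(v)
w = rng.standard_normal(2) + 1j * rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(4000):
    x = (1.5 * rng.standard_normal()) * v + 0.05 * (
        rng.standard_normal(2) + 1j * rng.standard_normal(2))
    w = complex_oja_step(w, x)
# w aligns with v up to a unit-modulus phase factor
overlap = abs(np.vdot(w, v)) / np.linalg.norm(w)
```

After training, `overlap` is close to 1 and `w` is approximately unit norm, matching the Oja fixed point.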

2.
Stereo matching using Hebbian learning (total citations: 1; self-citations: 0; citations by others: 1)
This paper presents an approach to the local stereo matching problem using edge segments as features with several attributes. We have verified that the differences in attributes for the true matches cluster in a cloud around a center. The correspondence is established on the basis of the minimum distance criterion, computing the Mahalanobis distance between the difference of the attributes for a current pair of features and the cluster center (similarity constraint). We introduce a learning strategy based on the Hebbian Learning to get the best cluster center. A comparative analysis among methods without learning and with other learning strategies is illustrated.
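The matching criterion above can be sketched as follows: a candidate match is the one whose attribute-difference vector is closest, in Mahalanobis distance, to a learned cluster center, and the center is refined by a simple Hebbian-style running update. Function names and the two-attribute toy data are hypothetical.

```python
import numpy as np

def mahalanobis(d, center, cov_inv):
    """Squared Mahalanobis distance of attribute difference d from the center."""
    diff = d - center
    return float(diff @ cov_inv @ diff)

def best_match(left_attrs, right_candidates, center, cov_inv):
    """Pick the right-image segment whose attribute difference from the left
    segment lies closest to the learned cluster center (similarity constraint)."""
    dists = [mahalanobis(left_attrs - r, center, cov_inv) for r in right_candidates]
    return int(np.argmin(dists))

def hebbian_center_update(center, diff, eta=0.1):
    # move the center a small step toward each accepted match's attribute difference
    return center + eta * (diff - center)

# toy example: 2 attributes, identity inverse covariance
center = np.zeros(2)
cov_inv = np.eye(2)
left = np.array([1.0, 1.0])
candidates = [np.array([0.9, 1.1]), np.array([2.0, 0.0])]
match = best_match(left, candidates, center, cov_inv)
```

In practice `cov_inv` and the initial center would be estimated from a set of known true matches.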

3.
In this article we revisit the classical neuroscience paradigm of Hebbian learning. We find that it is difficult to achieve effective associative memory storage by Hebbian synaptic learning, since it requires network-level information at the synaptic level or sparse coding. Effective learning can nonetheless be achieved, even with nonsparse patterns, by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This weight correction improves the memory capacity of associative networks from an essentially bounded one to a memory capacity that scales linearly with network size. It also enables the effective storage of patterns with multiple levels of activity within a single network. Such neuronal weight correction can be successfully carried out by activity-dependent homeostasis of the neuron's synaptic efficacies, which has recently been observed in cortical tissue. Thus, our findings suggest that associative learning by Hebbian synaptic learning should be accompanied by continuous remodeling of neuronally driven regulatory processes in the brain.
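The zero-sum correction described above can be sketched as a per-neuron normalization applied after each Hebbian update: every neuron subtracts the mean of its incoming weight changes so that the sum of its incoming efficacies is conserved. This minimal form is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def hebbian_step_with_correction(W, pattern, eta=0.1):
    """Hebbian outer-product update followed by neuronal weight correction:
    each neuron (row) subtracts the mean change of its incoming efficacies,
    keeping the incoming-weight sum at zero."""
    n = len(pattern)
    dW = eta * np.outer(pattern, pattern)
    np.fill_diagonal(dW, 0.0)                       # no self-connections
    # subtract the per-row mean over the n-1 off-diagonal incoming synapses
    dW -= dW.sum(axis=1, keepdims=True) / (n - 1)
    np.fill_diagonal(dW, 0.0)
    return W + dW

W = np.zeros((4, 4))
W = hebbian_step_with_correction(W, np.array([1.0, 1.0, -1.0, 1.0]))
```

After each step every row of `W` sums to zero, which is what lifts the capacity from bounded to linear in network size.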

4.
Experiments have shown that the intrinsic excitability of neurons is not constant, but varies with physiological stimulation and during various learning paradigms. We study a model of Hebbian synaptic plasticity which is supplemented with intrinsic excitability changes. The excitability changes transcend time delays and provide a memory trace. Periods of selective enhanced excitability can thus assist in forming associations between temporally separated events, such as occur in trace conditioning. We demonstrate that simple bidirectional networks with excitability changes can learn trace conditioning paradigms.

5.
We propose a novel adaptive optimal control paradigm inspired by Hebbian covariance synaptic adaptation, a preeminent model of learning and memory as well as other malleable functions in the brain. The adaptation is driven by the spontaneous fluctuations in the system input and output, the covariance of which provides useful information about the changes in the system behavior. The control structure represents a novel form of associative reinforcement learning in which the reinforcement signal is implicitly given by the covariance of the input-output (I/O) signals. Theoretical foundations for the paradigm are derived using Lyapunov theory and are verified by means of computer simulations. The learning algorithm is applicable to a general class of nonlinear adaptive control problems. This on-line direct adaptive control method benefits from a computationally straightforward design, proof of convergence, no need for complete system identification, robustness to noise and uncertainties, and the ability to optimize a general performance criterion in terms of system states and control signals. These attractive properties of Hebbian feedback covariance learning control lend themselves to future investigations into the computational functions of synaptic plasticity in biological neurons.
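The core of the covariance-driven adaptation can be sketched in one line: the adjustable gain changes in proportion to the covariance of recent input and output fluctuations. This is a minimal illustrative rule, not the paper's full Lyapunov-derived control law.

```python
import numpy as np

def covariance_update(w, u_hist, y_hist, eta=0.5):
    """Hebbian covariance rule: adapt a gain w in proportion to the covariance
    of input fluctuations (u) and output fluctuations (y) over a window.
    Positive I/O covariance implicitly acts as the reinforcement signal."""
    du = u_hist - u_hist.mean()
    dy = y_hist - y_hist.mean()
    return w + eta * np.mean(du * dy)

# positively covarying fluctuations drive the gain upward
w_new = covariance_update(0.0, np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0]))
```

With the sample histories above the fluctuation covariance is 4/3, so `w_new` = 0.5 * 4/3 = 2/3.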

6.
This paper studies Hebbian learning in linear neural networks with emphasis on the self-association information principle. This criterion, in one-layer networks, leads to the space of the principal components and can be generalized to arbitrary architectures. The self-association paradigm appears to be very promising because it accounts for the fundamental features of Hebbian synaptic learning and generalizes the various techniques proposed for adaptive principal component networks. The authors also include a set of simulations that compare various neural architectures and algorithms.

7.
Spike-timing-dependent Hebbian plasticity as temporal difference learning (total citations: 1; self-citations: 0; citations by others: 1)
Rao RP, Sejnowski TJ. Neural Computation, 2001, 13(10): 2221-2237
A spike-timing-dependent Hebbian mechanism governs the plasticity of recurrent excitatory synapses in the neocortex: synapses that are activated a few milliseconds before a postsynaptic spike are potentiated, while those that are activated a few milliseconds after are depressed. We show that such a mechanism can implement a form of temporal difference learning for prediction of input sequences. Using a biophysical model of a cortical neuron, we show that a temporal difference rule used in conjunction with dendritic backpropagating action potentials reproduces the temporally asymmetric window of Hebbian plasticity observed physiologically. Furthermore, the size and shape of the window vary with the distance of the synapse from the soma. Using a simple example, we show how a spike-timing-based temporal difference learning rule can allow a network of neocortical neurons to predict an input a few milliseconds before the input's expected arrival.
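The temporally asymmetric plasticity window described above is commonly modeled as a pair of exponentials: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and time constants below are illustrative, not the paper's fitted values.

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau_plus=10.0, tau_minus=10.0):
    """Temporally asymmetric Hebbian window (times in ms).
    dt = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses. dt = 0 is assigned to the
    potentiation branch by convention here."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

Evaluating the window over a range of `dt` values reproduces the familiar antisymmetric double-exponential shape.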

8.
Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. Here we explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition, the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
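The contrast with generative feedback can be sketched directly: instead of subtracting a reconstruction, feedback multiplicatively boosts the feedforward drive. The names, the additive-gain form, and the parameter `alpha` are illustrative assumptions, not the paper's dynamic model.

```python
import numpy as np

def modulated_response(x, W_ff, W_fb, a_higher, alpha=0.5):
    """Feedback as gain increase (not reconstruction): the feedforward drive
    W_ff @ x is multiplicatively enhanced by feedback W_fb @ a_higher from a
    higher area, tuning early responses to the current task."""
    ff = W_ff @ x                          # feedforward drive
    gain = 1.0 + alpha * (W_fb @ a_higher)  # feedback raises the gain
    return ff * gain

# toy example: feedback to the second unit boosts only that unit's response
resp = modulated_response(np.array([1.0, 2.0]), np.eye(2), np.eye(2),
                          np.array([0.0, 1.0]))
```

With identity weights, feedback activity on unit 2 scales its response by 1.5 while leaving unit 1 unchanged.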

9.
In this letter, we introduce a nonlinear hierarchical PCA-type neural network with a simple architecture. The learning algorithm is a nonlinear extension of Sanger's well-known Generalized Hebbian Algorithm (GHA), derived from a nonlinear optimization criterion. Experiments with sinusoidal data show that the neurons become sensitive to different sinusoids. Standard linear PCA algorithms do not have such a separation property.
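For reference, the linear GHA that the letter extends can be written in a few lines: each row of the weight matrix converges to one principal component, in order, thanks to the lower-triangular deflation term. The data model and step size below are illustrative.

```python
import numpy as np

def gha_step(W, x, eta=0.005):
    """One step of Sanger's Generalized Hebbian Algorithm.
    Rows of W converge to the leading principal components, in order;
    tril(y y^T) W is the deflation term that orthogonalizes later rows."""
    y = W @ x
    return W + eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((2, 2))
for _ in range(4000):
    # axis-aligned Gaussian data: variance 9 along axis 0, variance 1 along axis 1
    x = np.array([3.0 * rng.standard_normal(), rng.standard_normal()])
    W = gha_step(W, x)
align = abs(W[0, 0]) / np.linalg.norm(W[0])
```

After training, the first row of `W` is approximately the unit vector along the high-variance axis (`align` close to 1).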

10.
黄鹤, 张永亮. 控制与决策 (Control and Decision), 2023, 38(10): 2815-2822
The complex-valued limited-memory BFGS (CL-BFGS) algorithm is effective for unconstrained optimization problems in the complex domain, but its performance is sensitive to the choice of memory size. To address the memory-size selection problem, a CL-BFGS algorithm based on mixed search directions is proposed. For a given candidate set of memory sizes, a sliding-window method partitions the set into a finite number of subsets; the elements of each subset are used as memory sizes to compute a group of mixed directions, and the mixed direction that minimizes the objective function value is selected as the search direction for the current iteration. During the iterations, the mixed-search-direction strategy strengthens the use of the most recent curvature information, simplifies the selection of the memory size, and improves the convergence speed of the algorithm. The proposed CL-BFGS algorithm is well suited to the efficient training of multilayer feedforward complex-valued neural networks. Finally, experiments on pattern recognition, nonlinear channel equalization, and complex function approximation verify that the mixed-search-direction CL-BFGS algorithm achieves better performance than several existing algorithms.

11.
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow and fast analysis to derive an averaged system whose dynamics derives from an energy function and therefore always converges to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a Euclidean distance matrix in a high-dimensional space, with the neurons labeled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the nonconvolutional part of the connectivity, which is minimized in the process. We then show how restricting the dimension of the space where the neurons live gives rise to patterns similar to cortical maps, motivated by an energy efficiency argument based on wire length minimization. Finally, we show how this approach leads to the emergence of ocular dominance or orientation columns in primary visual cortex via the self-organization of recurrent rather than feedforward connections. In addition, we establish that the nonconvolutional (or long-range) connectivity is patchy and is co-aligned in the case of orientation learning.

12.
Recent models of the oculomotor delayed response task have been based on the assumption that working memory is stored as a persistent activity state (a 'bump' state). The delay activity is maintained by a finely tuned synaptic weight matrix producing a line attractor. Here we present an alternative hypothesis, that fast Hebbian synaptic plasticity is the mechanism underlying working memory. A computational model demonstrates a working memory function that is more resistant to distractors and network inhomogeneity compared to previous models, and that is also capable of storing multiple memories.

13.
Xie X, Seung HS. Neural Computation, 2003, 15(2): 441-454
Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
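The two-phase structure of contrastive Hebbian learning can be sketched as a single weight update: Hebbian on the correlations measured with the outputs clamped, anti-Hebbian on the correlations measured in the free-running phase. The settling dynamics that produce the two network states are omitted here; this is an illustrative fragment, not the paper's full derivation.

```python
import numpy as np

def chl_update(W, x_free, x_clamped, eta=0.1):
    """Contrastive Hebbian update on a weight matrix W:
    Hebbian term from the clamped-phase network state, anti-Hebbian term
    from the free-phase state. x_free and x_clamped are the settled
    activity vectors of the whole network in each phase."""
    return W + eta * (np.outer(x_clamped, x_clamped) - np.outer(x_free, x_free))

# toy example: clamping the second unit changes the learned correlations
W = chl_update(np.zeros((2, 2)),
               x_free=np.array([1.0, 0.0]),
               x_clamped=np.array([1.0, 1.0]))
```

When the free and clamped states coincide (zero output error), the two outer products cancel and the weights stop changing, mirroring backpropagation at a minimum.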

14.
R. S. N. P. Neurocomputing, 2009, 72(16-18): 3771
In a fully complex-valued feedforward network, the convergence of the Complex-valued Back Propagation (CBP) learning algorithm depends on the choice of the activation function, the learning-sample distribution, the minimization criterion, the initial weights, and the learning rate. The minimization criteria used in the existing versions of the CBP learning algorithm in the literature do not approximate the phase of a complex-valued output well in function approximation problems. The phase of a complex-valued output is critical in telecommunications and in reconstruction and source-localization problems in medical imaging. In this paper, the issues related to the convergence of complex-valued neural networks are clearly enumerated through a systematic sensitivity study of existing complex-valued neural networks. In addition, we compare the performance of different types of split complex-valued neural networks. From the observations in the sensitivity analysis, we propose a new CBP learning algorithm with a logarithmic performance index for a complex-valued neural network with an exponential activation function. The proposed CBP learning algorithm directly minimizes both the magnitude and phase errors and also provides better convergence characteristics. Performance of the proposed scheme is evaluated using two synthetic complex-valued function approximation problems, the complex XOR problem, and a non-minimum-phase equalization problem. A comparative analysis of the convergence of existing fully complex and split complex networks is also presented.
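One way a logarithmic performance index can separate magnitude and phase errors is via the complex logarithm of the output-to-target ratio, whose real part is the log-magnitude error and whose imaginary part is the phase error. This is a hedged sketch of the idea, not necessarily the exact index used in the paper.

```python
import numpy as np

def log_performance_index(y, t):
    """Hypothetical logarithmic error for a complex output y vs. target t:
    r = log(y / t) = log(|y|/|t|) + i*(arg y - arg t), so |r|^2 penalizes
    the magnitude error and the phase error explicitly and separately."""
    r = np.log(y / t)            # principal branch; phase error in (-pi, pi]
    return 0.5 * (r.real ** 2 + r.imag ** 2)

# zero when output matches target; sensitive to pure magnitude error
e_exact = log_performance_index(1.0 + 1.0j, 1.0 + 1.0j)
e_mag = log_performance_index(2.0 + 0.0j, 1.0 + 0.0j)
```

A squared-error criterion on `y - t`, by contrast, mixes magnitude and phase mismatch into a single term.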

15.
Fuzzy Grey Cognitive Maps (FGCMs) are an innovative FCM extension based on Grey System theory. Grey systems have become a very effective theory for solving problems in environments with high uncertainty and with discrete, small, and incomplete data sets. In this study, the FGCM method and a proposed Hebbian-based learning algorithm for FGCMs were applied to a known reference chemical process control problem from the chemical industry, involving two tanks, three valves, one heating element, and two thermometers per tank. The proposed mathematical formulation of FGCMs and the implementation of the Nonlinear Hebbian Learning (NHL) algorithm were analyzed and then successfully applied while respecting the main constraints of the problem. A number of numerical experiments were conducted to validate the approach and verify its effectiveness. The results were also analyzed and compared with those previously reported in the literature for FCMs with the Nonlinear Hebbian learning algorithm. The advantages of FGCMs over conventional FCMs are their capabilities (i) to produce a length and greyness estimation at the outputs, where the output greyness can be considered an additional indicator of the quality of a decision, and (ii) to achieve the desired behavior of the process system for every set of initial states, with and without Hebbian learning.

16.
Neurocomputing, 1999, 24(1-3): 163-171
Relative-minimization learning using additional random teacher signal points is proposed for stabilizing the behavior of complex-valued recurrent neural networks. Although recurrent neural networks can deal with time-sequential data, they tend to show unstable behavior after a large number of iterations. The proposed method superimposes a type of basin upon a dynamics-determining hypersurface in an information vector field. It is shown that this process is equivalent to the relative minimization of the error function in the input-signal partial space. Experiments demonstrate that relative-minimization learning suppresses positive Lyapunov exponents down to zero or negative values, successfully stabilizing the network behavior.

17.
It has recently been shown that orientation and retinotopic position, both of which are mapped in primary visual cortex, can show correlated jumps (Das & Gilbert, 1997). This is not consistent with maps generated by Kohonen's algorithm (Kohonen, 1982), where changes in mapped variables tend to be anticorrelated. We show that it is possible to obtain correlated jumps by introducing a Hebbian component (Hebb, 1949) into Kohonen's algorithm. This corresponds to a volume learning mechanism in which synaptic facilitation depends not only on the spread of a signal from a maximally active neuron but also requires postsynaptic activity at a synapse. The maps generated by this algorithm show discontinuities across which both orientation and retinotopic position change rapidly, but these regions, which include the orientation singularities, are also aligned with the edges of ocular dominance columns, which is not a realistic feature of cortical maps. We conclude that cortical maps are better modeled by standard, non-Hebbian volume learning, perhaps coupled with some other mechanism (e.g., that of Ernst, Pawelzik, Tsodyks, & Sejnowski, 1999) to produce receptive field shifts.

18.
The design of a product involves a process in which several different aspects are combined to obtain a final, suitable, and optimal product. Designers must interact with different stakeholder groups, make decisions, and complete the design process. To achieve this, different evaluation techniques are used. Depending on the chosen technique and on the field and environment in which each member of the design team was trained, each member will consider one or several aspects of the design project, but from a point of view shaped by his or her particular professional background. As a result, the decisions that affect the design process of the product are focused on these aspects and individual viewpoints. In this paper, an evaluation technique is proposed that supports sound decisions by taking into account all the factors and perspectives affecting the design process, searching for a balance among them in relation to the aims and interests of a specific design project. The development of this evaluation technique was inspired by the way neurons interact with one another in the brain, and it is based on the Hebbian learning rule for neural networks. Lastly, a real application of the proposed technique is presented to demonstrate its applicability to evaluating industrial designs.

19.
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
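The sign-of-the-integral criterion above is easy to evaluate in closed form for the standard double-exponential learning window, since each exponential branch integrates to amplitude times time constant. The window shape and parameters are illustrative assumptions.

```python
def window_integral(a_plus, a_minus, tau_plus, tau_minus):
    """Integral of an exponential learning window
    W(s) = a_plus * exp(-s/tau_plus) for s > 0 and
    W(s) = -a_minus * exp(s/tau_minus) for s < 0.
    A negative integral permits intrinsic firing-rate stabilization;
    a positive one requires an extra non-Hebbian component."""
    return a_plus * tau_plus - a_minus * tau_minus

# depression-dominated window: negative integral, intrinsically stabilizing
stabilizing = window_integral(1.0, 1.0, 10.0, 20.0) < 0
```

For `a_plus = a_minus`, the integral's sign is decided entirely by the two time constants.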

20.
For nonlinear systems that are difficult to describe accurately with mechanistic models, this paper studies modeling and parameter identification based on fuzzy cognitive networks (FCN). First, an FCN model of the nonlinear system with numerical reasoning and fuzzy information representation is built, representing the system as a directed graph containing nodes, weights, and feedback. Second, since the accuracy of the model depends on its weight parameters, a nonlinear Hebbian learning (NHL) algorithm with terminal constraints is proposed. The algorithm introduces the actual system values of the FCN nodes into the weight-learning process and, on top of the original update mechanism, adds a correction term containing the difference between the feedback value and the predicted value; the result is then normalized to obtain the final weight-iteration formula. The algorithm converges quickly and learns accurately, overcoming the traditional nonlinear Hebbian algorithm's strong dependence on initial values. Finally, the proposed method is applied to a water-tank control system; simulation results demonstrate the effectiveness of the FCN-based nonlinear Hebbian learning algorithm.
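The update described above, a nonlinear Hebbian term plus a feedback-minus-prediction correction followed by normalization, can be sketched as below. The exact functional form, gains, and normalization are hypothetical stand-ins for the paper's formula.

```python
import numpy as np

def nhl_step(W, a_pred, a_true, eta=0.05, gamma=0.1):
    """One hypothetical NHL-style update for an FCN weight matrix.
    hebb: standard Hebbian outer product of predicted node activations.
    corr: correction term driven by the (actual - predicted) node error,
          which injects the measured system values into weight learning.
    The result is scaled so all weights stay within [-1, 1]."""
    hebb = eta * np.outer(a_pred, a_pred)
    corr = gamma * np.outer(a_true - a_pred, a_pred)
    W_new = W + hebb + corr
    np.fill_diagonal(W_new, 0.0)          # no self-loops in the FCN digraph
    m = np.abs(W_new).max()
    return W_new / m if m > 1.0 else W_new

W = nhl_step(np.zeros((2, 2)),
             a_pred=np.array([0.5, 0.5]),
             a_true=np.array([1.0, 0.5]))
```

Here node 0 is under-predicted, so its incoming correction term raises the weights toward it more than the plain Hebbian term alone would.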


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)