Similar Documents
20 similar documents found.
1.
Šíma J, Sgall J. Neural Computation, 2005, 17(12): 2635-2647
We study the computational complexity of training a single spiking neuron N with binary coded inputs and output that, in addition to adaptive weights and a threshold, has adjustable synaptic delays. A synchronization technique is introduced so that the results concerning the nonlearnability of spiking neurons with binary delays are generalized to arbitrary real-valued delays. In particular, the consistency problem for N with programmable weights, a threshold, and delays, and its approximation version are proven to be NP-complete. It follows that the spiking neurons with arbitrary synaptic delays are not properly PAC learnable and do not allow robust learning unless RP = NP. In addition, the representation problem for N, a question whether an n-variable Boolean function given in DNF (or as a disjunction of O(n) threshold gates) can be computed by a spiking neuron, is shown to be coNP-hard.

2.
Compared with first- and second-generation neural networks, the third-generation spiking neural network (SNN) is a model much closer to biological neural networks, and therefore offers greater biological interpretability and lower power consumption. Built on spiking neuron models, an SNN simulates the propagation of biological signals through a neural network in the form of spikes: spike trains are emitted according to the membrane-potential dynamics of spiking neurons, and because spike trains carry a joint spatio-temporal code, they convey temporal as well as spatial information. The performance of current SNN models on pattern-recognition tasks still lags behind deep learning, and an important reason is that SNN learning methods remain immature. In deep learning, artificial neurons produce real-valued outputs, which allows network parameters to be trained with the global backpropagation algorithm; spike trains, by contrast, are binary, discrete outputs, which makes training SNNs inherently difficult. How to train SNNs efficiently is therefore a challenging research problem. This paper first summarizes the learning algorithms in the SNN research field, then analyzes the main approaches: direct supervised learning, unsupervised learning, and ANN-to-SNN (ANN2SNN) conversion algorithms, with comparative analysis of representative work in each category. Finally, based on this summary of current mainstream methods, it offers an outlook on more efficient and more biologically faithful parameter-learning methods for spiking neural networks.

3.
As the shortcomings of deep learning in training cost, generalization, interpretability, and reliability become increasingly apparent, brain-inspired computing has become a research focus for next-generation artificial intelligence. Spiking neural networks better imitate the way biological neurons transmit information, combine strong computational capability with low power consumption, and hold significant potential for modeling complex information processing in the human brain such as learning, memory, reasoning, judgment, and decision making. This paper surveys spiking neural networks as follows. It first describes their basic structure and working principles. For structural optimization, it summarizes five aspects: coding schemes, improved spiking neuron models, topology, training algorithms, and combinations with other algorithms. For training algorithms, it summarizes four families: backpropagation-based methods, methods based on the spike-timing-dependent plasticity (STDP) rule, ANN-to-SNN conversion, and other learning algorithms. It then analyzes the limitations and development of SNNs from the perspectives of supervised and unsupervised learning, and finally discusses applications of SNNs to brain-inspired computing and biomimetic tasks. By systematically summarizing the basic principles, coding schemes, network structures, and training algorithms of spiking neural networks, this survey aims to contribute positively to the development of SNN research.

4.
Exploiting the fact that spiking neurons encode information in precisely timed multi-spike trains, a new supervised learning algorithm for multilayer spiking neural networks based on convolution is proposed. The algorithm converts discrete spike trains into continuous functions by convolving them with a kernel function. For a multilayer feedforward SNN architecture, gradient descent yields a learning rule expressed through this kernel-convolution representation, which is used to adjust the synaptic weights between neurons. In the experiments, the algorithm's ability to learn target spike trains is verified first, and it is then applied to classify the Iris dataset. The results show that the algorithm can learn complex spatio-temporal spike-train patterns and achieves high classification accuracy on nonlinear pattern-classification problems.
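The core trick described above can be sketched in a few lines, assuming a causal exponential kernel (the paper's exact kernel is not given here, so this choice is illustrative): convolving each spike with a smoothly decaying kernel turns a discrete spike train into a continuous trace that gradient descent can differentiate.

```python
import math

def kernel(t, tau=5.0):
    """Causal exponential kernel: zero before the spike, decaying after it."""
    return math.exp(-t / tau) if t >= 0 else 0.0

def spike_train_to_function(spike_times, t, tau=5.0):
    """Value at time t of the spike train convolved with the kernel,
    i.e. a continuous trace suitable for gradient-based learning."""
    return sum(kernel(t - s, tau) for s in spike_times)

# A spike train firing at t = 1, 4 and 10 (times in ms).
spikes = [1.0, 4.0, 10.0]
trace_at_0 = spike_train_to_function(spikes, 0.0)   # before any spike
trace_at_4 = spike_train_to_function(spikes, 4.0)   # right at the second spike
```

Distances between two spike trains (and hence an error for the learning rule) can then be measured on these continuous traces instead of on the discrete firing times.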

5.
We study pulse-coupled neural networks that satisfy only two assumptions: each isolated neuron fires periodically, and the neurons are weakly connected. Each such network can be transformed by a piece-wise continuous change of variables into a phase model, whose synchronization behavior and oscillatory associative properties are easier to analyze and understand. Using the phase model, we can predict whether a given pulse-coupled network has oscillatory associative memory, or what minimal adjustments should be made so that it can acquire memory. In the search for such minimal adjustments we obtain a large class of simple pulse-coupled neural networks that can memorize and reproduce synchronized temporal patterns the same way a Hopfield network does with static patterns. The learning occurs via modification of synaptic weights and/or synaptic transmission delays.
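The phase-model reduction can be illustrated with a minimal Kuramoto-style sketch (hypothetical parameters; the paper's actual change of variables is model-specific): identical, weakly and attractively coupled phase oscillators lock their phases, which is the synchronization behavior the phase model makes easy to analyze.

```python
import math

def phase_step(phases, omega=1.0, coupling=2.0, dt=0.01):
    """One Euler step of the phase model dθ_i/dt = ω + (K/N) Σ_j sin(θ_j − θ_i)."""
    n = len(phases)
    stepped = []
    for th in phases:
        drift = coupling / n * sum(math.sin(tj - th) for tj in phases)
        stepped.append((th + dt * (omega + drift)) % (2.0 * math.pi))
    return stepped

def order_parameter(phases):
    """Kuramoto order parameter |r|: 1 means full synchrony, 0 incoherence."""
    n = len(phases)
    re = sum(math.cos(th) for th in phases) / n
    im = sum(math.sin(th) for th in phases) / n
    return math.hypot(re, im)

# Identical oscillators started out of phase converge to synchrony.
phases = [0.0, 1.0, 2.0, 3.0]
for _ in range(3000):
    phases = phase_step(phases)
sync = order_parameter(phases)
```

The order parameter gives a single number to track when checking whether a given network (or its phase reduction) supports synchronized memory states.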

6.
In this paper, we describe a new Synaptic Plasticity Activity Rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations like image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons. Each neuron is implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test performance of our new plasticity rule we designed a model of a biologically-inspired signal processing system, and used it for object detection in eye images of diabetic retinopathy patients, and lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not the development of an image segmentation tool that would be more efficient than nonbiological algorithms, but developing a physiologically correct neural network model that could be applied to a wide range of neurological experiments. We decided to validate the SAPR by using it in a network of spiking neurons for image segmentation because it is easy to visually assess the results. An important thing is that image segmentation is done in an entirely unsupervised way.

7.
The speed and reliability of mammalian perception indicate that cortical computations can rely on very few action potentials per involved neuron. Together with the stochasticity of single-spike events in cortex, this appears to imply that large populations of redundant neurons are needed for rapid computations with action potentials. Here we demonstrate that very fast and precise computations can be realized also in small networks of stochastically spiking neurons. We present a generative network model for which we derive biologically plausible algorithms that perform spike-by-spike updates of the neuron's internal states and adaptation of its synaptic weights from maximizing the likelihood of the observed spike patterns. Paradigmatic computational tasks demonstrate the online performance and learning efficiency of our framework. The potential relevance of our approach as a model for cortical computation is discussed.

8.
A spiking neural network is a biologically based network model whose inputs and outputs are spike trains with temporal structure, so its operating mechanism is closer to biological neural networks than that of other traditional artificial neural networks. Neurons exchange information through spike trains, and encoding this information in spike firing times lets the network exploit its learning capacity more effectively. The temporal nature of spiking neurons makes their working mechanism comparatively complex. The sensitivity of a spiking neuron, which reflects how its output spikes change when its input is perturbed, can serve as a tool for studying the neuron's internal working mechanism. Unlike in traditional neural networks, the sensitivity of a spiking neuron is defined as the ratio of the number of time points at which the output spikes change to the length of the simulation run, directly reflecting how strongly input perturbations affect the output. Analyzing the sensitivity under different forms of input perturbation shows that it behaves in a fairly complex way: when all synapses are perturbed, the neuron's sensitivity is a constant, whereas when only some synapses are perturbed, perturbing different synapses yields different sensitivities.

9.
Stochastic dynamics of a finite-size spiking neural network
Soula H, Chow CC. Neural Computation, 2007, 19(12): 3262-3292
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
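The model class described above, in which each neuron's firing probability depends only on the previous epoch's firing count, is easy to simulate; a toy sketch with hypothetical parameters (not the paper's):

```python
import random

def simulate_network(n=100, steps=200, bias=0.1, gain=0.5, seed=1):
    """Markovian network activity: in each epoch every neuron fires
    independently with a probability determined only by the number of
    neurons that fired in the previous epoch."""
    rng = random.Random(seed)
    fired = n // 2                              # initial number of active neurons
    counts = []
    for _ in range(steps):
        p = min(1.0, bias + gain * fired / n)   # depends only on `fired`
        fired = sum(1 for _ in range(n) if rng.random() < p)
        counts.append(fired)
    return counts

activity = simulate_network()
mean_rate = sum(activity) / len(activity)
```

With these parameters the self-consistent fixed point is p = bias + gain·p, i.e. p = 0.2, so the mean firing count fluctuates around 20 of the 100 neurons, with finite-size fluctuations of the kind the paper characterizes analytically.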

10.
We study analytically a model of long-term synaptic plasticity where synaptic changes are triggered by presynaptic spikes, postsynaptic spikes, and the time differences between presynaptic and postsynaptic spikes. The changes due to correlated input and output spikes are quantified by means of a learning window. We show that plasticity can lead to an intrinsic stabilization of the mean firing rate of the postsynaptic neuron. Subtractive normalization of the synaptic weights (summed over all presynaptic inputs converging on a postsynaptic neuron) follows if, in addition, the mean input rates and the mean input correlations are identical at all synapses. If the integral over the learning window is positive, firing-rate stabilization requires a non-Hebbian component, whereas such a component is not needed if the integral of the learning window is negative. A negative integral corresponds to anti-Hebbian learning in a model with slowly varying firing rates. For spike-based learning, a strict distinction between Hebbian and anti-Hebbian rules is questionable since learning is driven by correlations on the timescale of the learning window. The correlations between presynaptic and postsynaptic firing are evaluated for a piecewise-linear Poisson model and for a noisy spiking neuron model with refractoriness. While a negative integral over the learning window leads to intrinsic rate stabilization, the positive part of the learning window picks up spatial and temporal correlations in the input.
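The learning-window idea can be made concrete with a standard exponential STDP window (illustrative parameters, not those of the paper); note the sign of the window's integral, which the abstract ties to firing-rate stabilization.

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau_plus=17.0, tau_minus=34.0):
    """Weight change W(dt) for dt = t_post − t_pre: pre-before-post (dt > 0)
    potentiates, post-before-pre (dt < 0) depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

def window_integral(a_plus=1.0, a_minus=1.0, tau_plus=17.0, tau_minus=34.0):
    """Closed-form integral of W over all dt: a_plus*tau_plus − a_minus*tau_minus."""
    return a_plus * tau_plus - a_minus * tau_minus
```

With the defaults above the integral is negative, i.e. the window is net-depressing, which is the regime where the abstract says rate stabilization needs no additional non-Hebbian component.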

11.
This letter introduces a biologically inspired very simple spiking neuron model. The model retains only crucial aspects of biological neurons: a network of time-delayed weighted connections to other neurons, a threshold-based generation of action potentials, action potential frequency proportional to stimulus intensity, and interneuron communication that occurs with time-varying potentials that last longer than the associated action potentials. The key difference between this model and existing spiking neuron models is its great simplicity: it is basically a collection of linear and discontinuous functions with no differential equations to solve. The model's ability to operate in a complex network was tested by using it as a basis of a network implementing a hypothetical echolocation system. The system consists of an emitter and two receivers. The outputs of the receivers are connected to a network of spiking neurons (using the proposed model) to form a detection grid that acts as a map of object locations in space. The network uses differences in the arrival times of the signals to determine the azimuthal angle of the source and time of flight to calculate the distance. The activation patterns observed indicate that for a network of spiking neurons, which uses only time delays to determine source locations, the spatial discrimination varies with the number and relative spacing of objects. These results are similar to those observed in animals that use echolocation.

12.
Real-time computing platform for spiking neurons (RT-spike)
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.

13.
To address the difficulty of training spiking neural networks, weight-learning and structure-learning algorithms for SNNs are proposed from a biomimetic perspective, an SNN model containing convolutional structure is designed, and a software simulation platform suited to spiking neural networks is built. Experimental results show that a network trained with the weight-learning algorithm reaches a recognition accuracy of 84.12% on the MNIST dataset, with fast convergence and low power consumption, and that the structure-learning algorithm can generate network structures automatically with a high degree of biological plausibility.

14.
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
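A toy version of a dynamic stochastic synapse in the spirit described above, assuming multiplicative depression of the release probability after each vesicle release and exponential recovery toward rest (the parameter names and dynamics here are illustrative, not the paper's exact model):

```python
import random

class DynamicSynapse:
    """Stochastic synapse: each presynaptic spike releases a vesicle with
    probability p; a release depresses p, which then recovers toward p0."""

    def __init__(self, p0=0.5, depression=0.5, recovery=0.1, seed=0):
        self.p0 = p0                   # resting release probability
        self.p = p0
        self.depression = depression   # multiplicative drop after a release
        self.recovery = recovery       # fraction of the gap to p0 closed per step
        self.rng = random.Random(seed)

    def step(self, spike):
        """Advance one time step; return True iff a vesicle was released."""
        released = False
        if spike and self.rng.random() < self.p:
            released = True
            self.p *= self.depression
        # relax back toward the resting release probability
        self.p += self.recovery * (self.p0 - self.p)
        return released
```

Because the release probability carries a memory of recent activity, the synapse's response to a spike depends on the preceding spike history, which is the property the paper exploits for computing with individual spikes.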

15.
We present a spiking neuron model that allows for an analytic calculation of the correlations between pre- and postsynaptic spikes. The neuron model is a generalization of the integrate-and-fire model and equipped with a probabilistic spike-triggering mechanism. We show that under certain biologically plausible conditions, pre- and postsynaptic spike trains can be described simultaneously as an inhomogeneous Poisson process. Inspired by experimental findings, we develop a model for synaptic long-term plasticity that relies on the relative timing of pre- and postsynaptic action potentials. Given the input statistics, we compute the stationary synaptic weights that result from the temporal correlations between the pre- and postsynaptic spikes. By means of both analytic calculations and computer simulations, we show that such a mechanism of synaptic plasticity is able to strengthen those input synapses that convey precisely timed spikes at the expense of synapses that deliver spikes with a broad temporal distribution. This may be of vital importance for any kind of information processing based on spiking neurons and temporal coding.

16.
Theta phase precession is a distinctive firing pattern of place cells discovered in the rat hippocampus: as a rat traverses a place field, the phase at which the corresponding place cell fires (relative to the theta rhythm of the local field potential) advances steadily. Some researchers have argued that this phenomenon compresses the sequence of place fields the rat passes through into a temporally compressed spike pattern that recurs many times, and can thereby aid the rat's memory for the order of the spatial locations it visits. This paper builds a model to study the phenomenon. First, a single hippocampal neuron model capable of producing theta phase precession is constructed, based on the electrophysiological experiments of Harris KD et al. and Magee JC and on the real physiological properties of the neuron. Replacing the Hodgkin-Huxley model with an integrate-and-fire spiking neuron model greatly reduces the computational cost, while the simulation results still reproduce the behavior of real neurons well. To investigate the role of theta phase precession in memorizing spatial sequences, an STDP-based learning network is then built on top of the single-neuron model. The study of this network shows that a single presentation of a spatial-sequence input suffices for the network to form a degree of memory for the sequence, and in a certain proportion of runs the recall accuracy is very high. If the theta-phase-precession mechanism is removed from the single-neuron model, however, no memory of the spatial sequence forms in a single learning pass: the neurons representing the individual locations fire almost simultaneously and carry essentially no order information.

17.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. 
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.

18.
Spiking neural networks constitute a modern neural network paradigm that overlaps machine learning and computational neurosciences. Spiking neural networks use neuron models that possess a great degree of biological realism. The most realistic model of the neuron is the one created by Alan Lloyd Hodgkin and Andrew Huxley. However, the Hodgkin–Huxley model, while accurate, is computationally very inefficient. Eugene Izhikevich created a simplified neuron model based on the Hodgkin–Huxley equations. This model has better computational efficiency than the original proposed by Hodgkin and Huxley, and yet it can successfully reproduce all known firing patterns. However, there are not many articles dealing with implementations of this model for a functional neural network. This study presents a spiking neural network architecture that utilizes improved Izhikevich neurons with the purpose of evaluating its speed and efficiency. Since the field of spiking neural networks has reinvigorated the interest in biological plausibility, biological realism was an additional goal. The network is tested on the correct classification of logic gates (including XOR) and on the iris dataset. Results and possible improvements are also discussed.
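The Izhikevich model mentioned above is compact enough to state directly. A minimal forward-Euler sketch with the standard regular-spiking parameters (a = 0.02, b = 0.2, c = −65, d = 8); the integration step and input current are illustrative choices:

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, t_max=200.0, dt=0.25):
    """Simulate one Izhikevich neuron with constant input current I using
    forward Euler; returns the list of spike times in ms.

    dv/dt = 0.04 v^2 + 5v + 140 - u + I
    du/dt = a (b v - u),  with reset v <- c, u <- u + d when v >= 30 mV.
    """
    v, u = c, b * c                      # membrane potential, recovery variable
    spikes = []
    for i in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: record time, then reset
            spikes.append(i * dt)
            v = c
            u += d
    return spikes
```

With a sustained input (e.g. I = 10) the neuron fires tonically; with I = 0 it settles to rest and stays silent, matching the regular-spiking behavior the model is known for.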

19.
Lo JT. Neural Computation, 2011, 23(10): 2626-2682
A biologically plausible low-order model (LOM) of biological neural networks is proposed. LOM is a recurrent hierarchical network of models of dendritic nodes and trees; spiking and nonspiking neurons; unsupervised, supervised covariance and accumulative learning mechanisms; feedback connections; and a scheme for maximal generalization. These component models are motivated and necessitated by making LOM learn and retrieve easily without differentiation, optimization, or iteration, and cluster, detect, and recognize multiple and hierarchical corrupted, distorted, and occluded temporal and spatial patterns. Four models of dendritic nodes are given that are all described as a hyperbolic polynomial that acts like an exclusive-OR logic gate when the model dendritic nodes input two binary digits. A model dendritic encoder that is a network of model dendritic nodes encodes its inputs such that the resultant codes have an orthogonality property. Such codes are stored in synapses by unsupervised covariance learning, supervised covariance learning, or unsupervised accumulative learning, depending on the type of postsynaptic neuron. A masking matrix for a dendritic tree, whose upper part comprises model dendritic encoders, enables maximal generalization on corrupted, distorted, and occluded data. It is a mathematical organization and idealization of dendritic trees with overlapped and nested input vectors. A model nonspiking neuron transmits inhibitory graded signals to modulate its neighboring model spiking neurons. Model spiking neurons evaluate the subjective probability distribution (SPD) of the labels of the inputs to model dendritic encoders and generate spike trains with such SPDs as firing rates. Feedback connections from the same or higher layers with different numbers of unit-delay devices reflect different signal traveling times, enabling LOM to fully utilize temporally and spatially associated information. 
Biological plausibility of the component models is discussed. Numerical examples are given to demonstrate how LOM operates in retrieving, generalizing, and unsupervised and supervised learning.
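The XOR-like behavior of a model dendritic node can be illustrated with the simplest second-order polynomial that realizes exclusive-OR on binary inputs (an illustrative choice; LOM's four node models are not reproduced here). The cross term makes the surface a hyperbolic paraboloid, hence "hyperbolic polynomial":

```python
def dendritic_node(x, y):
    """A hyperbolic (saddle-shaped) polynomial in two inputs; on binary
    inputs it reduces to the exclusive-OR truth table."""
    return x + y - 2 * x * y

truth_table = {(a, b): dendritic_node(a, b) for a in (0, 1) for b in (0, 1)}
```

On real-valued inputs the same polynomial interpolates smoothly between the four binary corner values, which is what lets a network of such nodes act as an encoder rather than a pure logic gate.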

20.
An optoelectronic spiking neuron based on a bispin device is described. The neuron has separate optical inputs for excitatory and inhibitory signals, both represented by pulses of a single polarity. Experimental data are given that demonstrate the similarity in output-pulse shape and in functional repertoire between the proposed neuron and a biological one. An example hardware implementation of an optoelectronic pulsed neural network (PNN) based on the proposed neurons is described. The main elements of the network are a line of pulsed neurons and a connection array, part of which is implemented as a spatial light modulator (SLM) with memory. The SLM allows the connection weights to be modified during the network's learning process. Adaptive optoelectronic PNNs (capable of further learning and relearning) with about 2000 neurons appear feasible.
