Similar Articles
20 similar articles found (search time: 31 ms)
1.
O Araki  K Aihara 《Neural computation》2001,13(12):2799-2822
Although various means of information representation in the cortex have been considered, the fundamental mechanism for such representation is not well understood. The relation between neural network dynamics and properties of information representation needs to be examined. We examined spatial pattern properties of mean firing rates and spatiotemporal spikes in an interconnected spiking neural network model. We found that whereas the spatiotemporal spike patterns are chaotic and unstable, the spatial patterns of mean firing rates (SPMFR) are steady and stable, reflecting the internal structure of synaptic weights. Interestingly, the chaotic instability contributes to fast stabilization of the SPMFR. These findings suggest that there are two types of network dynamics behind neuronal spiking: internally driven dynamics and externally driven dynamics. When the internally driven dynamics dominate, spikes are relatively more chaotic and independent of external inputs, and the SPMFR are steady and stable. When the externally driven dynamics dominate, the spiking patterns are relatively more dependent on the spatiotemporal structure of external inputs. These emergent properties of information representation imply that the brain may adopt a dual coding system. Recent experimental data suggest that internally driven and externally driven dynamics coexist and work together in the cortex.

2.
Polychronization: computation with spikes (total citations: 10; self-citations: 0; citations by others: 10)
We present a minimal spiking network that can polychronize, that is, exhibit reproducible time-locked but not synchronous firing patterns with millisecond precision, as in synfire braids. The network consists of cortical spiking neurons with axonal conduction delays and spike-timing-dependent plasticity (STDP); a ready-to-use MATLAB code is included. It exhibits sleeplike oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system. We speculate on the significance of polychrony to the theory of neuronal group selection (TNGS, neural Darwinism), cognitive neural computations, binding and gamma rhythm, mechanisms of attention, and consciousness as "attention to memories."
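The abstract above does not reproduce the released MATLAB code; as a rough illustration of the STDP ingredient it mentions, here is a minimal pair-based STDP sketch in Python, with learning rates and time constants chosen by us for illustration only.

```python
import numpy as np

# Minimal pair-based STDP sketch (not the paper's released MATLAB code):
# a presynaptic spike followed by a postsynaptic spike potentiates the
# synapse; the reverse order depresses it, with exponential time windows.
A_PLUS, A_MINUS = 0.1, 0.12   # illustrative learning rates
TAU_PLUS = TAU_MINUS = 20.0   # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)

# Example: a 3 ms pre-to-post delay strengthens the synapse slightly.
print(stdp_dw(t_pre=10.0, t_post=13.0))   # ~ +0.086
print(stdp_dw(t_pre=13.0, t_post=10.0))   # ~ -0.103
```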

3.
Dayhoff JE 《Neural computation》2007,19(9):2433-2467
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
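As a toy illustration of the correspondence described above, the sketch below (function and parameter names are ours, not the paper's) computes a unit's activation as the fraction of synchronously firing neurons in a group and an effective weight as the product of interconnection density, presynaptic group size, and postsynaptic potential height.

```python
# Toy mapping from a synchronously firing neuronal group to an ANN unit
# (parameter names are ours, not the paper's).
def unit_activation(n_firing_sync, group_size):
    """Fraction of neurons in the group that fire synchronously."""
    return n_firing_sync / group_size

def effective_weight(conn_density, presyn_group_size, psp_height_mv):
    """Product of interconnection density, presynaptic group size, and PSP height."""
    return conn_density * presyn_group_size * psp_height_mv

# Example: 80 of 100 neurons fire together; 10% connectivity between groups,
# 100 presynaptic neurons, 0.5 mV PSPs -> activation 0.8, weight of about 5 mV.
print(unit_activation(80, 100), effective_weight(0.1, 100, 0.5))
```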

4.
Spiking neural networks are biologically based network models whose inputs and outputs are spike trains with temporal structure; their operating mechanism is closer to that of biological neural networks than other traditional artificial neural networks. Neurons transmit information to one another through spike trains, and encoding this information in spike firing times allows the network's learning capability to be exploited more effectively. The temporal nature of spiking neurons makes their working mechanism relatively complex, and the sensitivity of a spiking neuron, which reflects how its output spikes change when its input is perturbed, can serve as a tool for studying the neuron's internal working mechanism. Unlike in traditional neural networks, the sensitivity of a spiking neuron is defined as the ratio of the number of time instants at which the output spike train changes to the length of the simulation run, which directly reflects the degree to which input perturbations affect the output. Analysis of the sensitivity to different forms of input perturbation shows that the sensitivity of spiking neurons is rather complex: when all synapses are perturbed, the neuron's sensitivity is a constant value, whereas when only some synapses are perturbed, perturbations of different synapses lead to different sensitivity values.
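Under one reading of the definition above, the sensitivity can be estimated by comparing the output spike trains of an unperturbed and a perturbed run; the sketch below is a minimal illustration of that reading, not the paper's procedure.

```python
# Hedged sketch of the sensitivity measure: count the output spike times
# that differ between the unperturbed and perturbed runs and divide by the
# run length (in time steps).
def spike_sensitivity(spikes_ref, spikes_pert, run_length):
    """spikes_* are sets of discrete firing time steps; run_length in steps."""
    changed = set(spikes_ref) ^ set(spikes_pert)   # symmetric difference
    return len(changed) / run_length

# Example: two spike times moved over a 1000-step run -> sensitivity 0.004
# (each moved spike contributes one disappearance and one appearance).
print(spike_sensitivity({10, 50, 300}, {10, 52, 305}, 1000))
```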

5.
We propose a simple neural network model to understand the dynamics of temporal pulse coding. The model is composed of coincidence detector neurons with uniform synaptic efficacies and random pulse propagation delays. We also assume a global negative feedback mechanism which controls the network activity, leading to a fixed number of neurons firing within a certain time window. Due to this constraint, the network state becomes well defined and the dynamics equivalent to a piecewise nonlinear map. Numerical simulations of the model indicate that the latency of neuronal firing is crucial to the global network dynamics; when the timing of postsynaptic firing is less sensitive to perturbations in timing of presynaptic spikes, the network dynamics become stable and periodic, whereas increased sensitivity leads to instability and chaotic dynamics. Furthermore, we introduce a learning rule which decreases the Lyapunov exponent of an attractor and enlarges the basin of attraction.
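A toy sketch of the global negative-feedback constraint described above, under our reading: in each time window only the k neurons with the largest coincidence-detector activations fire. The function name and values are illustrative, not the paper's.

```python
import numpy as np

# Enforce "a fixed number of neurons firing within a certain time window":
# keep only the k most strongly activated neurons in each window.
def k_winners_fire(activations, k):
    """Return a 0/1 firing vector with exactly k active neurons."""
    winners = np.argsort(activations)[-k:]
    fired = np.zeros_like(activations)
    fired[winners] = 1.0
    return fired

acts = np.array([0.2, 0.9, 0.1, 0.7, 0.4])
print(k_winners_fire(acts, k=2))   # neurons 1 and 3 fire
```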

6.
Action Recognition Using a Bio-Inspired Feedforward Spiking Network (total citations: 2; self-citations: 0; citations by others: 2)
We propose a bio-inspired feedforward spiking network modeling two brain areas dedicated to motion (V1 and MT), and we show how the spiking output can be exploited in a computer vision application: action recognition. In order to analyze spike trains, we consider two characteristics of the neural code: mean firing rate of each neuron and synchrony between neurons. Interestingly, we show that they carry some relevant information for the action recognition application. We compare our results to Jhuang et al. (Proceedings of the 11th international conference on computer vision, pp. 1–8, 2007) on the Weizmann database. As a conclusion, we are convinced that spiking networks represent a powerful alternative framework for real vision applications that will benefit from recent advances in computational neuroscience.  相似文献   
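The two spike-train descriptors mentioned above can be estimated in many ways; the sketch below uses our own minimal definitions (a plain spike count per unit time, and the fraction of near-coincident spikes within a small window), not the paper's exact estimators.

```python
import numpy as np

# Minimal descriptors of a spike train: mean firing rate per neuron and a
# simple pairwise synchrony score counting near-coincident spikes.
def mean_rate(spike_times, duration):
    return len(spike_times) / duration            # spikes per second

def synchrony(train_a, train_b, window=0.005):
    """Fraction of spikes in train_a with a spike of train_b within +/-window (s)."""
    if len(train_a) == 0:
        return 0.0
    b = np.asarray(train_b)
    hits = sum(np.any(np.abs(b - t) <= window) for t in train_a)
    return hits / len(train_a)

a = [0.010, 0.052, 0.120, 0.300]
b = [0.011, 0.049, 0.200, 0.302]
print(mean_rate(a, duration=1.0))   # 4.0 Hz
print(synchrony(a, b))              # 0.75: three of four spikes coincide
```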

7.
Spiking neural networks constitute a modern neural network paradigm that overlaps machine learning and computational neurosciences. Spiking neural networks use neuron models that possess a great degree of biological realism. The most realistic model of the neuron is the one created by Alan Lloyd Hodgkin and Andrew Huxley. However, the Hodgkin–Huxley model, while accurate, is computationally very inefficient. Eugene Izhikevich created a simplified neuron model based on the Hodgkin–Huxley equations. This model has better computational efficiency than the original proposed by Hodgkin and Huxley, and yet it can successfully reproduce all known firing patterns. However, there are not many articles dealing with implementations of this model for a functional neural network. This study presents a spiking neural network architecture that utilizes improved Izhikevich neurons with the purpose of evaluating its speed and efficiency. Since the field of spiking neural networks has reinvigorated interest in biological plausibility, biological realism was an additional goal. The network is tested on the correct classification of logic gates (including XOR) and on the iris dataset. Results and possible improvements are also discussed.
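For reference, the sketch below is a minimal Euler simulation of the standard published Izhikevich equations (the paper's "improved" variant may differ), using the well-known "regular spiking" parameter set.

```python
# Minimal Euler simulation of the standard Izhikevich model (a sketch, not
# the paper's improved variant):
#   v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
# with the reset v <- c, u <- u + d whenever v reaches 30 mV.
def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate T ms with time step dt (ms) and constant input I; return spike times."""
    v, u = c, b * c
    spikes = []
    for step in range(int(T / dt)):
        if v >= 30.0:                  # spike: record the time and reset
            spikes.append(step * dt)
            v, u = c, u + d
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# With the "regular spiking" parameters above, a constant input I = 10
# produces tonic firing.
print(len(izhikevich(I=10.0)), "spikes in one second")
```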

8.
The liquid state machine is a recent concept whose aptitude for spatiotemporal pattern recognition tasks has already been demonstrated. It consists in stimulating an untrained spiking neural network with input streams, creating complex dynamics that form the liquid state. An external function, the readout, is trained to map the liquid states into the desired outputs. In this paper, different avenues are explored to improve the classification performance of the readout. First, the membrane potentials and the firing rates of the neurons are compared as two different liquid state representations. We also propose a new liquid state representation based on the frequency components of short-time membrane potential signals. Tests on synthetic and real data reveal that the frequency-based representation achieves higher recognition rates than the membrane potential or firing rate representations. Finally, we show that the combination of the different liquid states can improve the classification performance on spatiotemporal data.
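One possible reading of the frequency-based liquid state is sketched below: take a short window of each neuron's membrane potential at readout time and keep the magnitudes of its low-frequency components as features. The window length and component count are illustrative, not the paper's settings.

```python
import numpy as np

# Frequency-based liquid state sketch: magnitude of the low-frequency FFT
# components of each neuron's short-time membrane potential signal.
def frequency_state(v_window, n_components=8):
    """v_window: (n_neurons, n_samples) membrane potentials over a short window."""
    spectrum = np.abs(np.fft.rfft(v_window, axis=1))   # magnitude spectrum per neuron
    return spectrum[:, :n_components].ravel()          # low-frequency features, flattened

# Example: 50 neurons, 20 ms of membrane potential sampled at 1 kHz.
rng = np.random.default_rng(0)
v = rng.normal(-65.0, 2.0, size=(50, 20))
print(frequency_state(v).shape)    # (400,) -> 50 neurons x 8 components
```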

9.
Compared with the first and second generations of neural networks, the third-generation spiking neural network is a model closer to biological neural networks, and is therefore more biologically interpretable and lower in power consumption. Built on spiking neuron models, spiking neural networks can simulate the propagation of biological signals through the network in the form of spikes; spike trains are emitted according to the membrane potential dynamics of the spiking neurons, and through their joint spatiotemporal representation they convey not only spatial but also temporal information. The performance of current spiking neural network models on pattern recognition tasks still falls short of deep learning, and an important reason is that learning methods for spiking neural networks remain immature. In deep learning, artificial neurons produce real-valued outputs, which allows the parameters of deep neural networks to be trained with the global backpropagation algorithm, whereas spike trains are binary, discrete outputs, which makes training spiking neural networks inherently difficult; how to train spiking neural networks efficiently is therefore a challenging research problem. This paper first surveys the learning algorithms in the field of spiking neural networks, then analyzes and introduces the main approaches: direct supervised learning, unsupervised learning, and ANN2SNN conversion algorithms, with comparative analysis of representative works, and finally, based on a summary of current mainstream methods, offers an outlook on more efficient and more biologically plausible parameter learning methods for spiking neural networks.
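As a toy illustration of the ANN2SNN idea mentioned above (our own sketch, not any specific published conversion scheme): over a long window, the firing rate of a non-leaky integrate-and-fire neuron driven by a constant input approximates the ReLU activation of the corresponding ANN unit.

```python
# Rate approximation underlying ANN2SNN conversion (toy sketch): an
# integrate-and-fire neuron with soft reset fires at a rate close to
# relu(x) for constant input x in [0, 1].
def if_rate(x, threshold=1.0, steps=1000):
    """Firing rate of an integrate-and-fire neuron with constant input x per step."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += x
        if v >= threshold:
            v -= threshold     # soft reset keeps the residual charge
            spikes += 1
    return spikes / steps

for x in (-0.2, 0.0, 0.3, 0.7):
    print(x, if_rate(x), max(0.0, x))   # rate tracks relu(x) for x in [0, 1]
```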

10.
Mikula S  Niebur E 《Neural computation》2008,20(11):2637-2661
We provide analytical solutions for mean firing rates and cross-correlations of coincidence detector neurons in recurrent networks with excitatory or inhibitory connectivity, with rate-modulated steady-state spiking inputs. We use discrete-time finite-state Markov chains to represent network state transition probabilities, which are subsequently used to derive exact analytical solutions for mean firing rates and cross-correlations. As illustrated in several examples, the method can be used for modeling cortical microcircuits and clarifying single-neuron and population coding mechanisms. We also demonstrate that increasing firing rates do not necessarily translate into increasing cross-correlations, though our results do support the contention that firing rates and cross-correlations are likely to be coupled. Our analytical solutions underscore the complexity of the relationship between firing rates and cross-correlations.
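A generic sketch of the kind of computation involved (not the authors' derivation): represent the network states as a discrete-time Markov chain, compute its stationary distribution, and read off a mean firing rate as the stationary probability of the states in which a given neuron spikes. The transition matrix below is a made-up toy example.

```python
import numpy as np

# Stationary distribution of a discrete-time Markov chain over network states.
def stationary_distribution(P):
    """P: row-stochastic transition matrix; returns its stationary distribution."""
    eigvals, eigvecs = np.linalg.eig(P.T)
    idx = np.argmin(np.abs(eigvals - 1.0))   # left eigenvector for eigenvalue 1
    pi = np.real(eigvecs[:, idx])
    return pi / pi.sum()

# Toy two-state chain where state 1 means "the neuron spikes in this bin".
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])
pi = stationary_distribution(P)
print(pi, "-> mean firing probability per bin:", round(pi[1], 3))   # ~0.143
```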

11.
The high-conductance state of cortical networks (total citations: 3; self-citations: 0; citations by others: 3)
We studied the dynamics of large networks of spiking neurons with conductance-based (nonlinear) synapses and compared them to networks with current-based (linear) synapses. For systems with sparse and inhibition-dominated recurrent connectivity, weak external inputs induced asynchronous irregular firing at low rates. Membrane potentials fluctuated a few millivolts below threshold, and membrane conductances were increased by a factor of 2 to 5 with respect to the resting state. This combination of parameters characterizes the ongoing spiking activity typically recorded in the cortex in vivo. Many aspects of the asynchronous irregular state in conductance-based networks could be sufficiently well characterized with a simple numerical mean field approach. In particular, it correctly predicted an intriguing property of conductance-based networks that does not appear to be shared by current-based models: they exhibit states of low-rate asynchronous irregular activity that persist for some period of time even in the absence of external inputs and without cortical pacemakers. Simulations of larger networks (up to 350,000 neurons) demonstrated that the survival time of self-sustained activity increases exponentially with network size.
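The distinction between current-based (linear) and conductance-based (nonlinear) synapses can be seen in a single Euler step of the membrane equation; the sketch below uses illustrative values, not the paper's parameters.

```python
# Single Euler step for a current-based and a conductance-based membrane
# (illustrative values). In the conductance-based case the synaptic drive
# scales with the distance from the reversal potentials, so strong input
# also raises the total membrane conductance and shortens the effective
# membrane time constant.
def current_based_step(v, i_syn, dt=0.1, tau=20.0, v_rest=-70.0):
    return v + dt * (-(v - v_rest) + i_syn) / tau

def conductance_based_step(v, g_e, g_i, dt=0.1, tau=20.0,
                           v_rest=-70.0, e_exc=0.0, e_inh=-80.0):
    # g_e and g_i are expressed relative to the leak conductance
    dv = -(v - v_rest) - g_e * (v - e_exc) - g_i * (v - e_inh)
    return v + dt * dv / tau

v = -65.0
print(current_based_step(v, i_syn=8.0))             # small depolarizing step
print(conductance_based_step(v, g_e=0.3, g_i=0.2))  # depolarizing here, with total
                                                    # conductance 1 + g_e + g_i = 1.5x the leak
```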

12.
Fuzzy spiking neural P systems (in short, FSN P systems) are a novel class of distributed parallel computing models, which can model fuzzy production rules and apply their dynamic firing mechanism to achieve fuzzy reasoning. However, these systems lack adaptive/learning ability. To address this problem, a class of FSN P systems with some new features, called adaptive fuzzy spiking neural P systems (in short, AFSN P systems), is proposed. AFSN P systems can not only model weighted fuzzy production rules in a fuzzy knowledge base but also perform dynamic fuzzy reasoning. It is important to note that AFSN P systems have learning ability like neural networks. Based on the neurons' firing mechanisms, a fuzzy reasoning algorithm and a learning algorithm are developed. Moreover, an example is included to illustrate the learning ability of AFSN P systems.

13.
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.
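For reference, the irregularity measure discussed above, the coefficient of variation (CV) of the interspike-interval (ISI) distribution, can be computed as in the sketch below; the spike trains are synthetic examples.

```python
import numpy as np

# CV of the ISI distribution: close to 1 for Poisson-like spike trains,
# close to 0 for perfectly regular firing.
def isi_cv(spike_times):
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

rng = np.random.default_rng(1)
poisson_train = np.cumsum(rng.exponential(scale=0.05, size=2000))   # ~20 Hz Poisson
regular_train = np.arange(0, 100, 0.05)                             # perfectly regular
print(round(isi_cv(poisson_train), 2))   # close to 1
print(round(isi_cv(regular_train), 2))   # close to 0
```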

14.
Spiking neural networks belong to the third generation of artificial neural networks and are neural network models with greater biological interpretability. As research on spiking neural networks deepens, not only do the spatial structures of neurons become more complex, but the scale of the network structures also grows, and with serial computation it is difficult to simulate such spiking neural networks on a personal computer. To this end, a multi-core parallel spiking neural network simulator was designed: neurons are encoded and mapped, a custom routing table handles network communication between cores, and a time-driven strategy achieves dynamic synchronization between cores, so that spiking neural networks can be computed in parallel on the simulator. Using the Izhikevich spiking neuron as the model, simulation experiments show that multi-core parallel computation improves efficiency by roughly a factor of two compared with traditional serial computation, which can serve as a reference for similar parallel designs of spiking neural network simulation.

15.
Suemitsu Y  Nara S 《Neural computation》2004,16(9):1943-1957
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to its position at t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to the dynamical structure.

16.
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic, rather than a bell-shaped, function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.
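A sketch of the structural ingredient named above: a connectivity matrix with Toeplitz symmetry, in which the weight between two neurons depends only on the difference of their indices. The kernel used below (local excitation on a flat inhibitory background) is illustrative, not the paper's.

```python
import numpy as np

# Connectivity matrix with Toeplitz symmetry: W[i, j] depends only on |i - j|.
n = 100
offsets = np.arange(n)
kernel = 1.5 * np.exp(-offsets**2 / (2.0 * 5.0**2)) - 0.3   # excitation minus inhibition
W = kernel[np.abs(np.subtract.outer(offsets, offsets))]     # W[i, j] = kernel[|i - j|]
print(W.shape, bool(np.isclose(W[10, 30], W[40, 60])))      # (100, 100) True
```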

17.
18.
Analyzing the dependencies between spike trains is an important step in understanding how neurons work in concert to represent biological signals. Usually this is done for pairs of neurons at a time using correlation-based techniques. Chornoboy, Schramm, and Karr (1988) proposed maximum likelihood methods for the simultaneous analysis of multiple pair-wise interactions among an ensemble of neurons. One of these methods is an iterative, continuous-time estimation algorithm for a network likelihood model formulated in terms of multiplicative conditional intensity functions. We devised a discrete-time version of this algorithm that includes a new, efficient computational strategy, a principled method to compute starting values, and a principled stopping criterion. In an analysis of simulated neural spike trains from ensembles of interacting neurons, the algorithm recovered the correct connectivity matrices and interaction parameters. In the analysis of spike trains from an ensemble of rat hippocampal place cells, the algorithm identified a connectivity matrix and interaction parameters consistent with the pattern of conjoined firing predicted by the overlap of the neurons' spatial receptive fields. These results suggest that the network likelihood model can be an efficient tool for the analysis of ensemble spiking activity.
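A generic sketch of the discrete-time point-process log-likelihood that such models maximize (a simplification of the general form, not the authors' multiplicative conditional-intensity model): with small bins, the log-likelihood of a spike train given a conditional intensity is sum_t [ y_t * log(lambda_t * dt) - lambda_t * dt ].

```python
import numpy as np

# Discrete-time point-process log-likelihood of a spike train under a
# candidate conditional intensity (rate) function.
def loglik(spikes, lam, dt=0.001):
    """spikes: 0/1 array per bin; lam: conditional intensity (Hz) per bin."""
    lam = np.clip(lam, 1e-12, None)          # guard against log(0)
    return np.sum(spikes * np.log(lam * dt) - lam * dt)

rng = np.random.default_rng(2)
lam_true = np.full(5000, 20.0)                       # 20 Hz for 5 s in 1 ms bins
spikes = (rng.random(5000) < lam_true * 0.001).astype(float)
print(loglik(spikes, lam_true) > loglik(spikes, np.full(5000, 5.0)))   # True (usually)
```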

19.
In usual spiking neural networks, real-world information is encoded as spike times. A spiking neuron of the network receives an input vector of spike times and activates a state function x(t) by increasing the time t until the value of x(t) reaches a certain threshold value at a firing time t_a, and t_a is the output of the spiking neuron. In this paper we propose, and investigate the performance of, a modified spiking neuron whose output is a linear combination of the firing time t_a and the derivative x'(t_a). The merit of the modified spiking neuron is shown by numerical experiments on some benchmark problems: the computational time of a modified spiking neuron is a little greater than that of a usual spiking neuron, but its accuracy is almost as good as that of a usual spiking neural network with a hidden layer.
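A numerical sketch of the modified output described above, with an illustrative spike-response kernel and made-up parameters: t_a is the first time the state function x(t) crosses the threshold, and the unit's output is alpha * t_a + beta * x'(t_a).

```python
import numpy as np

# Modified spiking neuron output: alpha * t_a + beta * x'(t_a), where x(t)
# is a weighted sum of spike-response kernels (kernel and parameters are
# illustrative, not the paper's).
def alpha_kernel(s, tau=5.0):
    return np.where(s > 0, (s / tau) * np.exp(1 - s / tau), 0.0)

def modified_output(input_times, weights, theta=1.0, alpha=1.0, beta=2.0,
                    t_max=50.0, dt=0.01):
    t = np.arange(0.0, t_max, dt)
    x = sum(w * alpha_kernel(t - ti) for w, ti in zip(weights, input_times))
    above = np.nonzero(x >= theta)[0]
    if len(above) == 0:
        return None                         # the neuron never fires
    i = above[0]
    t_a = t[i]
    dx = (x[i + 1] - x[i - 1]) / (2 * dt)   # central-difference slope at t_a
    return alpha * t_a + beta * dx

print(modified_output(input_times=[1.0, 3.0, 4.0], weights=[0.5, 0.6, 0.4]))
```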

20.
In the area of membrane computing, time-freeness has been defined as the ability of a timed membrane system to always produce the same result, independently of the execution times associated to the rules. In this paper, we use a similar idea in the framework of spiking neural P systems, a model inspired by the structure and the functioning of neural cells. In particular, we introduce stochastic spiking neural P systems, where the time of firing for an enabled spiking rule is chosen probabilistically, and we investigate when, and how, these probabilities can influence the ability of the systems to simulate, in a reliable way, universal machines such as register machines.

