Similar Literature
20 similar records retrieved (search time: 281 ms)
1.
Dayhoff JE 《Neural computation》2007,19(9):2433-2467
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
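A minimal sketch of the ensemble analogy described above: one group of leaky integrate-and-fire neurons is driven past threshold, and the fraction firing per step plays the role of an artificial unit's activation value. The time constant, threshold, and drive below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lif_group_step(v, i_syn, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Advance a group of leaky integrate-and-fire neurons by one time step.

    Returns updated voltages and the fraction of the group that fired,
    which plays the role of the unit's activation value in the analogy."""
    v = v + (dt / tau) * (-v + i_syn)
    fired = v >= v_thresh
    v[fired] = v_reset
    return v, fired.mean()

rng = np.random.default_rng(0)
group_size = 200
v = rng.uniform(0.0, 1.0, group_size)      # random initial membrane voltages
fractions = []
for _ in range(50):
    drive = 1.5 + 0.1 * rng.standard_normal(group_size)  # suprathreshold drive
    v, frac = lif_group_step(v, drive)
    fractions.append(frac)
print(f"mean fraction of the group firing per step: {np.mean(fractions):.3f}")
```

Reading out a per-step firing fraction, rather than individual spikes, is exactly the quantity the abstract maps onto an artificial unit's activation.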

2.
Stochastic dynamics of a finite-size spiking neural network   (cited by 4: 0 self-citations, 4 by others)
Soula H  Chow CC 《Neural computation》2007,19(12):3262-3292
We present a simple Markov model of spiking neural dynamics that can be analytically solved to characterize the stochastic dynamics of a finite-size spiking neural network. We give closed-form estimates for the equilibrium distribution, mean rate, variance, and autocorrelation function of the network activity. The model is applicable to any network where the probability of firing of a neuron in the network depends on only the number of neurons that fired in a previous temporal epoch. Networks with statistically homogeneous connectivity and membrane and synaptic time constants that are not excessively long could satisfy these conditions. Our model completely accounts for the size of the network and correlations in the firing activity. It also allows us to examine how the network dynamics can deviate from mean field theory. We show that the model and solutions are applicable to spiking neural networks in biophysically plausible parameter regimes.
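The Markov structure assumed by the model (the number of neurons firing in an epoch depends only on the count in the previous epoch) can be sketched as a binomial update. The gain function p(n) below is an illustrative sigmoid, not the one analyzed in the paper.

```python
import numpy as np

def simulate_markov_network(n_neurons=100, epochs=5000, seed=1):
    """Finite-size network in the spirit of the abstract: each neuron fires in
    the next epoch with a probability that depends only on how many neurons
    fired in the current epoch."""
    rng = np.random.default_rng(seed)

    def p(n):
        # Illustrative sigmoidal gain: firing probability rises with the
        # previous epoch's activity, plus a small baseline rate.
        return 0.05 + 0.5 / (1.0 + np.exp(-(n - n_neurons / 2) / 10.0))

    n = n_neurons // 2
    counts = np.empty(epochs, dtype=int)
    for t in range(epochs):
        n = rng.binomial(n_neurons, p(n))  # all neurons update in parallel
        counts[t] = n
    return counts

counts = simulate_markov_network()
print(f"mean rate: {counts.mean() / 100:.3f}, variance: {counts.var():.1f}")
```

Empirical moments of `counts` are what the paper's closed-form estimates would predict analytically for its own gain function.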

3.
4.
A network of neurons with dendritic dynamics is analyzed in this paper. Two stable regimes of the complete network can coexist under continuous weak stimulation: the oscillatory synchronized regime and the quiet regime, where all neurons stop firing completely. It is shown that a single control pulse can calm a single neuron as well as the whole network, and the network stays in the quiet regime as long as the weak stimulation is turned on. It is also demonstrated that the same control technique can be effectively used to calm a random Erdős–Rényi network of dendritic neurons. Moreover, it appears that the random network of dendritic neurons can evolve into the quiet regime without applying any external pulse-based control techniques.

5.
This paper presents a novel iterative thresholding segmentation method based on a modified pulse coupled neural network (PCNN) for partitioning pixels carefully into a corresponding cluster. In the modified model, we initially simplify the two inputs of the original PCNN, and then construct a global neural threshold instead of the original threshold under the specified condition that the neuron will keep on firing once it begins. This threshold is shown to be the cluster center of a region in which corresponding neurons fire, and it can be adaptively updated as soon as neighboring neurons are captured. We then propose a method for automatically adjusting the linking coefficient based on the minimum weighted center distance function. Through iteration, the threshold can be made to converge toward the true center of the object region, ensuring that the final result is obtained automatically. Finally, experiments on several infrared images demonstrate the efficiency of our proposed model. Moreover, comparisons with two efficient thresholding methods and a number of PCNN-based models show that our proposed model can segment images with high performance.
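The PCNN machinery aside, the underlying idea of an iteratively updated global threshold that converges between two cluster centers can be illustrated with the classic isodata-style scheme below; the synthetic "image" and the tolerance are assumptions for the sketch, not the paper's model.

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    """Classic iterative (isodata-style) threshold selection: repeatedly set
    the threshold to the midpoint of the two cluster means until it converges.
    A much-simplified analogue of the adaptive global threshold in the
    abstract, not the PCNN model itself."""
    t = img.mean()
    while True:
        lo, hi = img[img < t], img[img >= t]
        new_t = 0.5 * (lo.mean() + hi.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(50, 5, 1000),    # background pixels
                      rng.normal(180, 10, 300)])  # bright object pixels
t = iterative_threshold(img)
print(f"converged threshold: {t:.1f}")
```

On this bimodal intensity distribution the threshold settles roughly midway between the two cluster means, which is the convergence behavior the abstract ascribes to its adaptive neural threshold.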

6.
We present a dynamical theory of integrate-and-fire neurons with strong synaptic coupling. We show how phase-locked states that are stable in the weak coupling regime can destabilize as the coupling is increased, leading to states characterized by spatiotemporal variations in the interspike intervals (ISIs). The dynamics is compared with that of a corresponding network of analog neurons in which the outputs of the neurons are taken to be mean firing rates. A fundamental result is that for slow interactions, there is good agreement between the two models (on an appropriately defined timescale). Various examples of desynchronization in the strong coupling regime are presented. First, a globally coupled network of identical neurons with strong inhibitory coupling is shown to exhibit oscillator death in which some of the neurons suppress the activity of others. However, the stability of the synchronous state persists for very large networks and fast synapses. Second, an asymmetric network with a mixture of excitation and inhibition is shown to exhibit periodic bursting patterns. Finally, a one-dimensional network of neurons with long-range interactions is shown to desynchronize to a state with a spatially periodic pattern of mean firing rates across the network. This is modulated by deterministic fluctuations of the instantaneous firing rate whose size is an increasing function of the speed of synaptic response.

7.
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic -- rather than a bell-shaped -- function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.
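A sketch of the structural assumption named here, a connectivity matrix with Toeplitz symmetry, where each weight depends only on the index offset between neurons; the decaying profile is an illustrative choice, not the paper's tuned solution.

```python
import numpy as np

def toeplitz_connectivity(profile):
    """Build a symmetric Toeplitz connectivity matrix: the weight W[i, j]
    depends only on the index offset |i - j|, the symmetry underlying the
    family of continuous attractor networks described in the abstract."""
    n = len(profile)
    return np.array([[profile[abs(i - j)] for j in range(n)] for i in range(n)])

profile = np.exp(-np.arange(8) / 2.0)   # illustrative decaying weight profile
W = toeplitz_connectivity(profile)
print(W.shape, W[0, 0])
```

Because every diagonal of `W` is constant, shifting a network state along the index axis leaves the recurrent input profile unchanged, which is what makes a continuum of stationary states possible.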

8.
Spiking neural systems are based on biologically inspired neural models of computation: they take into account the precise timing of spike events and are therefore suitable for analyzing dynamical aspects of neuronal signal transmission. These systems have gained increasing interest because they are more sophisticated than the simple neuron models found in artificial neural systems; they are closer to biophysical models of neurons, synapses, and related elements, and their synchronized firing of neuronal assemblies could serve the brain as a code for feature binding and pattern segmentation. The simulations are designed to exemplify certain properties of olfactory bulb (OB) dynamics and are based on an extension of the integrate-and-fire (IF) neuron and the idea of locally coupled excitation and inhibition cells. We introduce the background theory needed to make an appropriate choice of model parameters. The following two forms of connectivity, each offering computational and analytical advantages through symmetry or statistical properties in the study of OB dynamics, have been used:
  • all-to-all coupling,
  • receptive field style coupling.
Our simulations showed that the inter-neuron transmission delay controls the size of spatial variations of the input and also smoothes the network response. Our extended IF model proves to be a useful basis from which to study more sophisticated features such as complex pattern formation and the global stability and chaos of OB dynamics.

9.
The synchronous firing of neurons in a pulse-coupled neural network composed of excitatory and inhibitory neurons is analyzed. The neurons are connected by both chemical synapses and electrical synapses among the inhibitory neurons. When electrical synapses are introduced, periodically synchronized firing as well as chaotically synchronized firing is widely observed. Moreover, we find stochastic synchrony where the ensemble-averaged dynamics shows synchronization in the network but each neuron has a low firing rate and the firing of the neurons seems to be stochastic. Stochastic synchrony of chaos corresponding to a chaotic attractor is also found.

10.
The synchrony of neurons in extrastriate visual cortex is modulated by selective attention even when there are only small changes in firing rate (Fries, Reynolds, Rorie, & Desimone, 2001). We used Hodgkin-Huxley type models of cortical neurons to investigate the mechanism by which the degree of synchrony can be modulated independently of changes in firing rates. The synchrony of local networks of model cortical interneurons interacting through GABA(A) synapses was modulated on a fast timescale by selectively activating a fraction of the interneurons. The activated interneurons became rapidly synchronized and suppressed the activity of the other neurons in the network but only if the network was in a restricted range of balanced synaptic background activity. During stronger background activity, the network did not synchronize, and for weaker background activity, the network synchronized but did not return to an asynchronous state after synchronizing. The inhibitory output of the network blocked the activity of pyramidal neurons during asynchronous network activity, and during synchronous network activity, it enhanced the impact of the stimulus-related activity of pyramidal cells on receiving cortical areas (Salinas & Sejnowski, 2001). Synchrony by competition provides a mechanism for controlling synchrony with minor alterations in rate, which could be useful for information processing. Because traditional methods such as cross-correlation and the spike field coherence require several hundred milliseconds of recordings and cannot measure rapid changes in the degree of synchrony, we introduced a new method to detect rapid changes in the degree of coincidence and precision of spike timing.

11.
Population density methods provide promising time-saving alternatives to direct Monte Carlo simulations of neuronal network activity, in which one tracks the state of thousands of individual neurons and synapses. A population density method has been found to be roughly a hundred times faster than direct simulation for various test networks of integrate-and-fire model neurons with instantaneous excitatory and inhibitory post-synaptic conductances. In this method, neurons are grouped into large populations of similar neurons. For each population, one calculates the evolution of a probability density function (PDF) which describes the distribution of neurons over state space. The population firing rate is then given by the total flux of probability across the threshold voltage for firing an action potential. Extending the method beyond instantaneous synapses is necessary for obtaining accurate results, because synaptic kinetics play an important role in network dynamics. Embellishments incorporating more realistic synaptic kinetics for the underlying neuron model increase the dimension of the PDF, which was one-dimensional in the instantaneous synapse case. This increase in dimension causes a substantial increase in computation time to find the exact PDF, decreasing the computational speed advantage of the population density method over direct Monte Carlo simulation. We report here on a one-dimensional model of the PDF for neurons with arbitrary synaptic kinetics. The method is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions. The method is much faster than direct simulations. Limitations of the method are demonstrated, and possible improvements are discussed.

12.
A previously developed method for efficiently simulating complex networks of integrate-and-fire neurons was specialized to the case in which the neurons have fast unitary postsynaptic conductances. However, inhibitory synaptic conductances are often slower than excitatory ones for cortical neurons, and this difference can have a profound effect on network dynamics that cannot be captured with neurons that have only fast synapses. We thus extend the model to include slow inhibitory synapses. In this model, neurons are grouped into large populations of similar neurons. For each population, we calculate the evolution of a probability density function (PDF), which describes the distribution of neurons over state-space. The population firing rate is given by the flux of probability across the threshold voltage for firing an action potential. In the case of fast synaptic conductances, the PDF was one-dimensional, as the state of a neuron was completely determined by its transmembrane voltage. An exact extension to slow inhibitory synapses increases the dimension of the PDF to two or three, as the state of a neuron now includes the state of its inhibitory synaptic conductance. However, by assuming that the expected value of a neuron's inhibitory conductance is independent of its voltage, we derive a reduction to a one-dimensional PDF and avoid increasing the computational complexity of the problem. We demonstrate that although this assumption is not strictly valid, the results of the reduced model are surprisingly accurate.

13.
Precision constrained stochastic resonance in a feedforward neural network   (cited by 1: 0 self-citations, 1 by others)
Stochastic resonance (SR) is a phenomenon in which the response of a nonlinear system to a subthreshold information-bearing signal is optimized by the presence of noise. By considering a nonlinear system (a network of leaky integrate-and-fire (LIF) neurons) that captures the functional dynamics of neuronal firing, we demonstrate that sensory neurons could, in principle, harness SR to optimize the detection and transmission of weak stimuli. We have previously characterized this effect by use of the signal-to-noise ratio (SNR). Here, in addition to SNR, we apply an entropy-based measure (Fisher information) and compare the two measures of quantifying SR. We also discuss the performance of these two SR measures in a full-precision floating point model simulated in Java and in a precision-limited integer model simulated on a field programmable gate array (FPGA). We report in this study that stochastic resonance, which is mainly associated with floating point implementations, is possible in both a single LIF neuron and a network of LIF neurons implemented on lower-resolution integer-based digital hardware. We also report that such a network can improve the SNR and Fisher information of the output over a single LIF neuron.
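A crude stand-in for the SR effect described here, using a bare threshold detector rather than a LIF network, and simple spike-signal correlation rather than SNR or Fisher information; all parameter values are assumptions. The correlation vanishes without noise, peaks at moderate noise, and degrades again when noise dominates.

```python
import numpy as np

def signal_correlation(noise_sd, threshold=1.0, amp=0.8, n=20000, seed=2):
    """Correlation between a threshold detector's spike train and a
    subthreshold sinusoid (amp < threshold).  Without noise the signal never
    crosses threshold; moderate noise lets crossings track the signal."""
    rng = np.random.default_rng(seed)
    signal = amp * np.sin(2 * np.pi * np.arange(n) / 100.0)
    spikes = (signal + noise_sd * rng.standard_normal(n)) >= threshold
    if spikes.std() == 0:              # no spikes at all: no information
        return 0.0
    return float(np.corrcoef(spikes, signal)[0, 1])

for sd in (0.0, 0.3, 3.0):
    print(f"noise sd {sd}: spike-signal correlation {signal_correlation(sd):.3f}")
```

The non-monotonic dependence on noise level is the signature of stochastic resonance that the paper quantifies with SNR and Fisher information.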

14.
T Tanaka  T Aoyagi  T Kaneko 《Neural computation》2012,24(10):2700-2725
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

15.
We investigate theoretically the conditions for the emergence of synchronous activity in large networks, consisting of two populations of extensively connected neurons, one excitatory and one inhibitory. The neurons are modeled with quadratic integrate-and-fire dynamics, which provide a very good approximation for the subthreshold behavior of a large class of neurons. In addition to their synaptic recurrent inputs, the neurons receive a tonic external input that varies from neuron to neuron. Because of its relative simplicity, this model can be studied analytically. We investigate the stability of the asynchronous state (AS) of the network with given average firing rates of the two populations. First, we show that the AS can remain stable even if the synaptic couplings are strong. Then we investigate the conditions under which this state can be destabilized. We show that this can happen in four generic ways. The first is a saddle-node bifurcation, which leads to another state with different average firing rates. This bifurcation, which occurs for strong enough recurrent excitation, does not correspond to the emergence of synchrony. In contrast, in the three other instability mechanisms, Hopf bifurcations, which correspond to the emergence of oscillatory synchronous activity, occur. We show that these mechanisms can be differentiated by the firing patterns they generate and their dependence on the mutual interactions of the inhibitory neurons and cross talk between the two populations. We also show that besides these codimension 1 bifurcations, the system can display several codimension 2 bifurcations: Takens-Bogdanov, Gavrilov-Guckenheimer, and double Hopf bifurcations.
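The subthreshold dynamics assumed in this abstract, dv/dt = v^2 + I for the quadratic integrate-and-fire neuron, can be sketched for a single cell; the Euler step, reset, and peak values below are illustrative choices rather than the paper's.

```python
def qif_spike_times(i_ext=1.0, dt=1e-4, t_max=10.0, v_peak=50.0, v_reset=-50.0):
    """Single quadratic integrate-and-fire neuron, dv/dt = v**2 + I.
    For positive tonic drive the voltage blows up in finite time; reaching
    v_peak is registered as a spike, followed by a reset to v_reset."""
    v, t, spikes = v_reset, 0.0, []
    while t < t_max:
        v += dt * (v * v + i_ext)        # forward Euler step
        t += dt
        if v >= v_peak:                  # spike: record time and reset
            spikes.append(round(t, 3))
            v = v_reset
    return spikes

print(qif_spike_times())                 # tonic, periodic firing
```

With I = 1 the interspike interval approaches 2·arctan(50) ≈ 3.1 time units, so the neuron fires regularly, which is the baseline around which the paper studies asynchronous versus synchronous population states.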

16.
Shortest path tree (SPT) computation is a critical issue in many real-world problems, such as routing in networks. It is also a constrained optimization problem that has been studied by many authors in recent years. Typically, it is solved by classical algorithms, such as the famous Dijkstra's algorithm, which can quickly provide a good solution in most instances. However, as the scale of the problem increases, these methods become inefficient and may consume a considerable amount of CPU time. Neural networks, which are massively parallel models, can solve this problem naturally. This paper presents an efficient modified continuous pulse coupled neural network (MCPCNN) model for SPT computation in large-scale instances. The proposed model is topologically organized with only local lateral connections among neurons. The start neuron fires first, and the firing event then spreads out through the lateral connections among the neurons, like the propagation of a wave. Each neuron records its parent, that is, the neighbor that caused it to fire. It is proved that the wave generated in the network spreads outward with travel times proportional to the connection weights between neurons. Thus, the generated path is always the globally optimal shortest path from the source to all destinations. The proposed model is also applied step by step to generate SPTs for a real given graph. The effectiveness and efficiency of the proposed approach are demonstrated through simulation and comparison studies.
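For comparison, the classical computation that the firing wave reproduces is Dijkstra's algorithm with parent tracking: a node's "firing time" equals its shortest distance from the source, and its parent is the neighbor that caused it to fire. The toy graph below is an assumption for illustration.

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra's algorithm with parent tracking.  graph maps each node to a
    list of (neighbor, weight) pairs; returns shortest distances and the SPT
    encoded as a parent map, mirroring the neurons' parent records above."""
    dist = {source: 0.0}
    parent = {source: None}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u             # v "fires" because of u
                heapq.heappush(heap, (nd, v))
    return dist, parent

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 5.0)],
    "c": [("d", 1.0)],
}
dist, parent = shortest_path_tree(graph, "a")
print(dist)    # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
print(parent)  # {'a': None, 'b': 'a', 'c': 'b', 'd': 'c'}
```

The wave-propagation model in the abstract performs this same computation in parallel: firing times play the role of the priority queue's pop order.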

17.
Respiratory rhythm generation originates in the pre-Bötzinger complex and involves many types of respiratory neurons, including pre-Bötzinger neurons; these respiratory neurons and the lungs are linked by synapses into a dynamic pontine-medullary respiratory network. Because the network mechanisms underlying the generation and variation of the respiratory rhythm are not yet fully understood, this paper approaches the problem from a nonlinear dynamics perspective: by constructing a respiratory network model close to the actual structure, we examine the diverse firing patterns of a single pre-Bötzinger interneuron in the network as well as changes in the periodicity and synchrony of the population firing. The numerical results help to further reveal the mechanisms by which the respiratory rhythm is generated and regulated.

18.
We report on deterministic and stochastic evolutions of firing states through a feedforward neural network with Mexican-hat-type connectivity. The prevalence of columnar structures in a cortex implies spatially localized connectivity between neural pools. Although feedforward neural network models with homogeneous connectivity have been intensively studied within the context of the synfire chain, the effect of local connectivity has not yet been studied so thoroughly. When neurons fire independently, the dynamics of the macroscopic state variables (a firing rate and the spatial eccentricity of a firing pattern) is deterministic by the law of large numbers. Possible stable firing states, which are derived from deterministic evolution equations, are uniform, localized, and nonfiring. The multistability of these three states is obtained where the excitatory and inhibitory interactions among neurons are balanced. When the presynapse-dependent variance in connection efficacies is incorporated into the network, the variance generates common noise. Then the evolution of the macroscopic state variables becomes stochastic, and neurons begin to fire in a correlated manner due to the common noise. The correlation structure that is generated by common noise exhibits a nontrivial bimodal distribution. The development of a firing state through the neural layers does not converge to a fixed point but keeps fluctuating.
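The Mexican-hat connectivity assumed in this study can be sketched as a difference of Gaussians on a ring, short-range excitation minus broader inhibition; the widths and amplitudes below are illustrative assumptions.

```python
import numpy as np

def mexican_hat_weights(n, sigma_e=2.0, sigma_i=6.0, a_e=1.0, a_i=0.5):
    """Difference-of-Gaussians connectivity on a ring of n neural pools:
    narrow excitation minus broad inhibition yields positive weights for
    nearby pools and negative weights at intermediate distances."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)                       # wrap-around ring distance
    return (a_e * np.exp(-d**2 / (2 * sigma_e**2))
            - a_i * np.exp(-d**2 / (2 * sigma_i**2)))

W = mexican_hat_weights(50)
print(f"self weight {W[0, 0]:.2f}, mid-range weight {W[0, 5]:.2f}")
```

The positive center and negative surround of each row are what allow the localized firing states described in the abstract to coexist with the uniform and nonfiring states.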

19.
Chaotic dynamics in a recurrent neural network model, in which limit cycle memory attractors are stored, is investigated by means of numerical methods. In particular, we focus on the quick and sensitive response characteristics of chaotic memory dynamics to external input, which consists of part of an embedded memory attractor. We have calculated the correlation functions between the firing activities of neurons to understand the dynamical mechanisms of rapid responses. The results of the latter calculation show that quite strong correlations arise very quickly between almost all neurons within 1 to 2 updating steps after applying a partial input. They suggest that the existence of dynamical correlations, or in other words transient correlations in chaos, plays a very important role in quick and/or sensitive responses.

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号