Similar Documents
20 similar documents found (search time: 625 ms)
1.
We report on deterministic and stochastic evolutions of firing states through a feedforward neural network with Mexican-hat-type connectivity. The prevalence of columnar structures in a cortex implies spatially localized connectivity between neural pools. Although feedforward neural network models with homogeneous connectivity have been intensively studied within the context of the synfire chain, the effect of local connectivity has not yet been studied so thoroughly. When each neuron fires independently, the dynamics of the macroscopic state variables (the firing rate and the spatial eccentricity of the firing pattern) is deterministic by the law of large numbers. The possible stable firing states, derived from the deterministic evolution equations, are uniform, localized, and nonfiring. Multistability among these three states is obtained when the excitatory and inhibitory interactions among neurons are balanced. When presynapse-dependent variance in the connection efficacies is incorporated into the network, this variance generates common noise. The evolution of the macroscopic state variables then becomes stochastic, and neurons begin to fire in a correlated manner due to the common noise. The correlation structure generated by the common noise exhibits a nontrivial bimodal distribution. The development of a firing state through the neural layers does not converge to a fixed point but keeps fluctuating.
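The Mexican-hat connectivity described above is commonly modeled as a difference of Gaussians: narrow excitation minus broader inhibition. A minimal sketch (the amplitudes and widths here are illustrative, not the paper's values):

```python
import numpy as np

def mexican_hat(x, a_e=1.0, s_e=1.0, a_i=0.5, s_i=2.0):
    """Difference-of-Gaussians kernel: narrow excitation (amplitude a_e,
    width s_e) minus broader inhibition (amplitude a_i, width s_i)."""
    return (a_e * np.exp(-x**2 / (2 * s_e**2))
            - a_i * np.exp(-x**2 / (2 * s_i**2)))

# Connection weight as a function of distance between neural pools.
distances = np.linspace(-5.0, 5.0, 201)
weights = mexican_hat(distances)
```

With these parameters the kernel is net excitatory at the center and net inhibitory in the surround, the kind of local excitation/lateral inhibition balance under which the uniform, localized, and nonfiring states above can coexist.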

2.
Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have been discussed often using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.

3.
Current improvements in the performance of deep neural networks are partly due to the introduction of the rectified linear unit (ReLU). A ReLU activation function outputs zero for negative inputs, causing some neurons to die and shifting the mean of the outputs, which induces oscillations and impedes learning. Following the theory that “zero mean activations improve learning ability”, a softplus linear unit (SLU) is proposed as an adaptive activation function that can speed up learning and improve performance in deep convolutional neural networks. First, to reduce the bias shift, negative inputs are processed using the softplus function, and a general form of the SLU function is proposed. Second, the parameters of the positive component are fixed to control vanishing gradients. Third, update rules for the parameters of the negative component are established to meet the requirements of backpropagation. Finally, we designed deep auto-encoder networks and conducted several unsupervised-learning experiments with them on the MNIST dataset; for supervised learning, we designed deep convolutional neural networks and conducted several experiments with them on the CIFAR-10 dataset. The experiments show faster convergence and better image-classification performance for SLU-based networks compared with rectified activation functions.
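As a rough illustration of the idea (the paper's exact parameterization of the SLU may differ), an activation that is linear for positive inputs and uses a shifted softplus for negative inputs can be sketched as follows; the shift by log(2) makes the function continuous at zero while allowing negative outputs, which pulls the mean activation toward zero:

```python
import numpy as np

def slu(x, alpha=1.0, beta=1.0, gamma=1.0):
    """Sketch of a softplus linear unit: identity (scaled by alpha) for
    x >= 0; for x < 0, a softplus shifted down by log(2) so the function
    is continuous at 0 and produces negative outputs.  alpha, beta, and
    gamma are illustrative placeholders for the paper's parameters."""
    x = np.asarray(x, dtype=float)
    # Clamp the argument of exp to avoid overflow on the unused branch.
    neg = beta * (np.log1p(np.exp(gamma * np.minimum(x, 0.0))) - np.log(2.0))
    return np.where(x >= 0.0, alpha * x, neg)
```

For large negative inputs the output saturates near -beta * log(2) instead of dying at exactly zero, which is the mechanism claimed to reduce the bias shift relative to ReLU.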

4.
Neurons that sustain elevated firing in the absence of stimuli have been found in many neural systems. In graded persistent activity, neurons can sustain firing at many levels, suggesting a widely found type of network dynamics in which networks can relax to any one of a continuum of stationary states. The reproduction of these findings in model networks of nonlinear neurons has turned out to be nontrivial. A particularly insightful model has been the "bump attractor," in which a continuous attractor emerges through an underlying symmetry in the network connectivity matrix. This model, however, cannot account for data in which the persistent firing of neurons is a monotonic, rather than a bell-shaped, function of a stored variable. Here, we show that the symmetry used in the bump attractor network can be employed to create a whole family of continuous attractor networks, including those with monotonic tuning. Our design is based on tuning the external inputs to networks that have a connectivity matrix with Toeplitz symmetry. In particular, we provide a complete analytical solution of a line attractor network with monotonic tuning and show that for many other networks, the numerical tuning of synaptic weights reduces to the computation of a single parameter.

5.
Brunel N  Hansel D 《Neural computation》2006,18(5):1066-1110
GABAergic interneurons play a major role in the emergence of various types of synchronous oscillatory patterns of activity in the central nervous system. Motivated by these experimental facts, modeling studies have investigated mechanisms for the emergence of coherent activity in networks of inhibitory neurons. However, most of these studies have focused either on the regime in which the noise in the network is absent or weak, or on the opposite regime in which it is strong. Hence, a full picture of how noise affects the dynamics of such systems is still lacking. The aim of this letter is to provide a more comprehensive understanding of the mechanisms by which the asynchronous states in large, fully connected networks of inhibitory neurons are destabilized as a function of the noise level. Three types of single-neuron models are considered: the leaky integrate-and-fire (LIF) model, the exponential integrate-and-fire (EIF) model, and conductance-based models involving sodium and potassium Hodgkin-Huxley (HH) currents. We show that in all models, the instabilities of the asynchronous state fall into two classes. The first consists of clustering instabilities, which exist in a restricted range of noise levels. These instabilities lead to synchronous patterns in which the population of neurons is broken into clusters of synchronously firing neurons. The irregularity of the firing patterns of the neurons is weak. The second class of instabilities, termed oscillatory firing rate instabilities, exists at any noise level. They lead to cluster states at low noise. As the noise is increased, the instability occurs at larger coupling, and the pattern of firing that emerges becomes more irregular. In the regime of high noise and strong coupling, these instabilities lead to stochastic oscillations in which neurons fire in an approximately Poisson way with a common instantaneous firing probability that oscillates in time.
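The LIF model referred to above can be simulated in a few lines. This sketch uses Euler-Maruyama integration with illustrative parameter values (mu is the mean drive, sigma the noise strength, both in units of the threshold):

```python
import numpy as np

def lif_spikes(mu, sigma, t_max=1.0, dt=1e-4, tau=0.02,
               v_th=1.0, v_reset=0.0, seed=0):
    """Leaky integrate-and-fire neuron driven by mean input mu plus
    white noise of strength sigma; returns the spike count in t_max
    seconds.  Parameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    v, n_spikes = 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        if v >= v_th:          # threshold crossing: emit a spike, reset
            v = v_reset
            n_spikes += 1
    return n_spikes
```

With sigma = 0 the neuron is silent for subthreshold drive (mu below threshold) and fires periodically above it; adding noise lets a subthreshold neuron fire irregularly, which is the noise-level axis along which the asynchronous-state instabilities above are classified.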

6.
Synchronization of neural signals has been proposed as a temporal coding scheme representing cooperative computation in distributed cortical networks. Previous theoretical studies in that direction have mainly focused on the synchronization of coupled oscillatory subsystems and neglected more complex dynamical modes that already exist at the single-unit level. In this paper we study the parametrized time-discrete dynamics of two coupled recurrent networks of graded neurons. Conditions for the existence of partially synchronized dynamics of these systems are derived, referring to a situation where only subsets of neurons in each sub-network are synchronous. The coupled networks can have different architectures and even a different number of neurons. Periodic as well as quasiperiodic and chaotic attractors constrained to a manifold M of synchronized components are observed. Examples are discussed for coupled 3-neuron networks having different architectures, and for coupled 2-neuron and 3-neuron networks. Partial synchronization of different degrees is demonstrated by numerical results for selected sets of parameters. In conclusion, the results show that synchronization phenomena far beyond completely synchronized oscillations can occur even in simple coupled networks. The type of synchronization depends in an intricate way on stimuli, history, and connectivity, as well as on other parameters of the network. Specific inputs can further switch between different operational modes in a complex way, suggesting a similarly rich spatio-temporal behaviour in real neural systems.

7.
Types of, mechanisms for, and stability of synchrony are discussed in the context of two-compartment CA3 pyramidal cell and interneuron model networks. We show how the strength and timing of inhibitory and excitatory synaptic inputs work together to produce either perfectly synchronized or nearly synchronized oscillations, across different burst or spiking modes of firing. The analysis shows how excitatory inputs tend to desynchronize cells, and how common, slowly decaying inhibition can be used to synchronize them. We also introduce the concept of 'equivalent networks', in which networks with different architectures and synaptic connections display identical firing patterns.

8.
Lu Y  Sato Y  Amari S 《Neural computation》2011,23(5):1248-1260
A neural field is a continuous version of a neural network model that accounts for dynamical pattern formation arising from population firing activity in neural tissue. These patterns include standing bumps, moving bumps, traveling waves, target waves, breathers, and spiral waves, many of them observed in various brain areas. They can be categorized into two types: wave-like activity spreading over the field and particle-like localized activity. We show through numerical experiments that localized traveling excitation patterns (traveling bumps), which behave like particles, exist in a two-dimensional neural field with excitation and inhibition mechanisms. The traveling bumps do not require any geometric restriction (boundary) to prevent them from propagating away, a fact that might shed light on how neurons in the brain are functionally organized. Collisions of traveling bumps exhibit rich phenomena; they might reveal the manner of information processing in the cortex and be useful in various applications. The trajectories of traveling bumps can be controlled by external inputs.

9.
Some neurons encode information about the orientation or position of an animal, and can maintain their response properties in the absence of visual input. Examples include head direction cells in rats and primates, place cells in rats and spatial view cells in primates. 'Continuous attractor' neural networks model these continuous physical spaces by using recurrent collateral connections between the neurons which reflect the distance between the neurons in the state space (e.g. head direction space) of the animal. These networks maintain a localized packet of neuronal activity representing the current state of the animal. We show how the synaptic connections in a one-dimensional continuous attractor network (of for example head direction cells) could be self-organized by associative learning. We also show how the activity packet could be moved from one location to another by idiothetic (self-motion) inputs, for example vestibular or proprioceptive, and how the synaptic connections could self-organize to implement this. The models described use 'trace' associative synaptic learning rules that utilize a form of temporal average of recent cell activity to associate the firing of rotation cells with the recent change in the representation of the head direction in the continuous attractor. We also show how a nonlinear neuronal activation function that could be implemented by NMDA receptors could contribute to the stability of the activity packet that represents the current state of the animal.
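A minimal numerical sketch of such a one-dimensional continuous attractor (not the paper's self-organizing model; the connectivity here is hand-wired and all parameters are illustrative): cells on a ring interact through distance-dependent weights with local excitation and uniform inhibition, and a transient cue leaves behind a persistent, localized activity packet.

```python
import numpy as np

def ring_attractor(n=100, steps=600, dt=0.1):
    """Rate dynamics on a ring of n head-direction-like cells.
    Connectivity depends only on angular distance (cosine-profile
    excitation, uniform inhibition); a cue applied for the first 100
    steps seeds an activity packet that persists after cue offset."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = (4.0 * np.cos(theta[:, None] - theta[None, :]) - 1.0) * (2.0 / n)
    r = np.zeros(n)
    cue = np.exp(2.0 * (np.cos(theta) - 1.0))  # transient input at angle 0
    for t in range(steps):
        drive = w @ r + (cue if t < 100 else 0.0)
        r = r + dt * (-r + np.clip(drive, 0.0, 1.0))  # rectified, saturating
    return theta, r
```

Because the weights depend only on the angular distance between cells, a packet centered at any angle is equally stable, which is what makes the set of stationary states a continuum.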

10.
Synchronous firing limits the amount of information that can be extracted by averaging the firing rates of similarly tuned neurons. Here, we show that the loss of such rate-coded information due to synchronous oscillations between retinal ganglion cells can be overcome by exploiting the information encoded by the correlations themselves. Two very different models, one based on axon-mediated inhibitory feedback and the other on oscillatory common input, were used to generate artificial spike trains whose synchronous oscillations were similar to those measured experimentally. Pooled spike trains were summed into a threshold detector whose output was classified using Bayesian discrimination. For a threshold detector with short summation times, realistic oscillatory input yielded superior discrimination of stimulus intensity compared to rate-matched Poisson controls. Even for summation times too long to resolve synchronous inputs, gamma band oscillations still contributed to improved discrimination by reducing the total spike count variability, or Fano factor. In separate experiments in which neurons were synchronized in a stimulus-dependent manner without attendant oscillations, the Fano factor increased markedly with stimulus intensity, implying that stimulus-dependent oscillations can offset the increased variability due to synchrony alone.
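The Fano factor used above is simply the variance-to-mean ratio of the spike count: 1 for a Poisson process, below 1 when counts are more regular than Poisson (as with the oscillation-induced variability reduction described in the abstract):

```python
import numpy as np

def fano_factor(counts):
    """Variance-to-mean ratio of spike counts.  Equals 1 for a Poisson
    process; values below 1 indicate reduced count variability."""
    counts = np.asarray(counts, dtype=float)
    return counts.var() / counts.mean()

# Sanity check: Poisson spike counts should give a Fano factor near 1.
rng = np.random.default_rng(1)
poisson_counts = rng.poisson(lam=20.0, size=20000)
```

Applied to pooled spike counts from a population, a drop in this ratio with oscillatory input is the effect the abstract credits with improving intensity discrimination at long summation times.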

11.
We present a general approximation method for the mathematical analysis of spatially localized steady-state solutions in nonlinear neural field models. These models comprise several layers of excitatory and inhibitory cells. Coupling kernels between and inside layers are assumed to be gaussian shaped. In response to spatially localized (i.e., tuned) inputs, such networks typically reveal stationary localized activity profiles in the different layers. Qualitative properties of these solutions, like response amplitudes and tuning widths, are approximated for a whole class of nonlinear rate functions that obey a power law above some threshold and that are zero below. A special case of these functions is the semilinear function, which is commonly used in neural field models. The method is then applied to models for orientation tuning in cortical simple cells: first, to the one-layer model with "difference of gaussians" connectivity kernel developed by Carandini and Ringach (1997) as an abstraction of the biologically detailed simulations of Somers, Nelson, and Sur (1995); second, to a two-field model comprising excitatory and inhibitory cells in two separate layers. Under certain conditions, both models have the same steady states. Comparing simulations of the field models and results derived from the approximation method, we find that the approximation well predicts the tuning behavior of the full model. Moreover, explicit formulas for approximate amplitudes and tuning widths in response to changing input strength are given and checked numerically. Comparing the network behavior for different nonlinearities, we find that the only rate function (from the class of functions under study) that leads to constant tuning widths and a linear increase of firing rates in response to increasing input is the semilinear function. 
For other nonlinearities, the qualitative network response depends on whether the model neurons operate in a convex (e.g., x(2)) or concave (e.g., sqrt(x)) regime of their rate function. In the first case, tuning gradually changes from input driven at low input strength (broad tuning strongly depending on the input and roughly linear amplitudes in response to input strength) to recurrently driven at moderate input strength (sharp tuning, supralinear increase of amplitudes in response to input strength). For concave rate functions, the network reveals stable hysteresis between a state at low firing rates and a tuned state at high rates. This means that the network can "memorize" tuning properties of a previously shown stimulus. Sigmoid rate functions can combine both effects. In contrast to the Carandini-Ringach model, the two-field model further reveals oscillations with typical frequencies in the beta and gamma range, when the excitatory and inhibitory connections are relatively strong. This suggests a rhythmic modulation of tuning properties during cortical oscillations.

12.
The vestibulo-ocular reflex (VOR) is characterized by a short-latency, high-fidelity eye movement response to head rotations at frequencies up to 20 Hz. Electrophysiological studies of medial vestibular nucleus (MVN) neurons, however, show that their response to sinusoidal currents above 10 to 12 Hz is highly nonlinear and distorted by aliasing for all but very small current amplitudes. How can this system function in vivo when single cell response cannot explain its operation? Here we show that the necessary wide VOR frequency response may be achieved not by firing rate encoding of head velocity in single neurons, but in the integrated population response of asynchronously firing, intrinsically active neurons. Diffusive synaptic noise and the pacemaker-driven, intrinsic firing of MVN cells synergistically maintain asynchronous, spontaneous spiking in a population of model MVN neurons over a wide range of input signal amplitudes and frequencies. Response fidelity is further improved by a reciprocal inhibitory link between two MVN populations, mimicking the vestibular commissural system in vivo, but only if asynchrony is maintained by noise and pacemaker inputs. These results provide a previously missing explanation for the full range of VOR function and a novel account of the role of the intrinsic pacemaker conductances in MVN cells. The values of diffusive noise and pacemaker currents that give optimal response fidelity yield firing statistics similar to those in vivo, suggesting that the in vivo network is tuned to optimal performance. While theoretical studies have argued that noise and population heterogeneity can improve coding, to our knowledge this is the first evidence indicating that these parameters are indeed tuned to optimize coding fidelity in a neural control system in vivo.

13.
In this paper, we show that noise injection into inputs in unsupervised learning neural networks does not improve their performance as it does in supervised learning neural networks. Specifically, we show that training noise degrades the classification ability of a sparsely connected version of the Hopfield neural network, whereas the performance of a sparsely connected winner-take-all neural network does not depend on the injected training noise.

14.
Ralf  Ulrich   《Neurocomputing》2007,70(16-18):2758
Neural networks are intended to be used in future nanoelectronic technology, since these architectures seem to be robust to malfunctioning elements and to noise in their inputs and parameters. In this work, the robustness of radial basis function networks is analyzed with respect to operation in noisy and unreliable environments. Furthermore, upper bounds on the mean square error under noise-contaminated parameters and inputs are derived for the case in which the network parameters are constrained. To achieve more robust neural network architectures, fundamental methods are introduced to identify sensitive parameters and neurons.

15.
Fast oscillations, and in particular gamma-band oscillations (20-80 Hz), are commonly observed during brain function and are at the center of several neural processing theories. In many cases, mathematical analysis of fast oscillations in neural networks has been focused on the transition between irregular and oscillatory firing viewed as an instability of the asynchronous activity. But in fact, brain slice experiments as well as detailed simulations of biological neural networks have produced a large corpus of results concerning the properties of fully developed oscillations that are far from this transition point. We propose here a mathematical approach to deal with nonlinear oscillations in a network of heterogeneous or noisy integrate-and-fire neurons connected by strong inhibition. This approach involves limited mathematical complexity and gives a good sense of the oscillation mechanism, making it an interesting tool for understanding fast rhythmic activity in simulated or biological neural networks. A surprising result of our approach is that under some conditions, a change in the strength of inhibition only weakly influences the period of the oscillation. This is in contrast to standard theoretical and experimental models of interneuron network gamma oscillations (ING), where frequency depends tightly on inhibition strength, but it is similar to observations made in some in vitro preparations in the hippocampus and the olfactory bulb and in some detailed network models. This result is explained by the phenomenon of suppression that is known to occur in strongly coupled oscillating inhibitory networks but had not yet been related to the behavior of oscillation frequency.

16.
We describe the aggregation process of the typical artificial neuron. We introduce the concept of a fuzzy linguistic quantifier and describe the process for determining the truth of propositions containing linguistic quantifiers. We show how this truth value can be viewed as the firing level of an artificial neuron. We show the relationship between fuzzy sets and neural inputs. A new class of neurons, called OWA neurons, is described. A learning algorithm for this class of neurons is presented. We provide a methodology for processing information in non-numeric neural networks. © 1992 John Wiley & Sons, Inc.
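The ordered weighted averaging (OWA) aggregation underlying these neurons applies its weights to the inputs after sorting them in descending order, so each weight is attached to a rank rather than to a particular input line. A minimal sketch:

```python
import numpy as np

def owa(weights, inputs):
    """Ordered weighted averaging: sort the inputs in descending order,
    then take the weighted sum.  Weights attach to ranks, not inputs:
    w = (1, 0, ..., 0) gives the max, (0, ..., 0, 1) the min, and
    uniform weights the arithmetic mean."""
    w = np.asarray(weights, dtype=float)
    b = np.sort(np.asarray(inputs, dtype=float))[::-1]  # descending order
    if w.shape != b.shape or not np.isclose(w.sum(), 1.0):
        raise ValueError("weights must match inputs and sum to 1")
    return float(w @ b)
```

By choosing the weight vector, the same neuron can interpolate between "all inputs must fire" (min-like, an AND) and "any input suffices" (max-like, an OR), which is how linguistic quantifiers such as "most" map onto a firing level.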

17.
Coincident firing of neurons projecting to a common target cell is likely to raise the probability of firing of this postsynaptic cell. Therefore, synchronized firing constitutes a significant event for postsynaptic neurons and is likely to play a role in neuronal information processing. Physiological data on synchronized firing in cortical networks are based primarily on paired recordings and cross-correlation analysis. However, pair-wise correlations among all inputs onto a postsynaptic neuron do not uniquely determine the distribution of simultaneous postsynaptic events. We develop a framework to calculate the amount of synchronous firing that, based on maximum entropy, should exist in a homogeneous neural network in which the neurons have known pair-wise correlations and higher-order structure is absent. According to the distribution of maximal entropy, synchronous events in which a large proportion of the neurons participates should exist even in the case of weak pair-wise correlations. Network simulations also exhibit these highly synchronous events in the case of weak pair-wise correlations. If such a group of neurons provides input to a common postsynaptic target, these network bursts may enhance the impact of this input, especially in the case of a high postsynaptic threshold. The proportion of neurons participating in synchronous bursts can be approximated by our method under restricted conditions. When these conditions are not fulfilled, the spike trains have less than maximal entropy, which is indicative of the presence of higher-order structure. In this situation, the degree of synchronicity cannot be derived from the pair-wise correlations.

18.
We study the emergence of synchronized burst activity in networks of neurons with spike adaptation. We show that networks of tonically firing adapting excitatory neurons can evolve to a state where the neurons burst in a synchronized manner. The mechanism leading to this burst activity is analyzed in a network of integrate-and-fire neurons with spike adaptation. The dependence of this state on the different network parameters is investigated, and it is shown that this mechanism is robust against inhomogeneities, sparseness of the connectivity, and noise. In networks of two populations, one excitatory and one inhibitory, we show that decreasing the inhibitory feedback can cause the network to switch from a tonically active, asynchronous state to the synchronized bursting state. Finally, we show that the same mechanism also causes synchronized burst activity in networks of more realistic conductance-based model neurons.

19.
Based on the principle of energy coding, an energy function of the various electric potentials of a neural population in the cerebral cortex is formulated. The energy function is used to describe the energy evolution of the neuronal population over time and the coupled relationship between neurons at the subthreshold and suprathreshold states. The Hamiltonian motion equation for the membrane potential is obtained from neuroelectrophysiological data contaminated by Gaussian white noise. The results of this research show that the mean membrane potential is the exact solution of the motion equation of the membrane potential developed in a previously published paper, and that the Hamiltonian energy function derived in this brief is both correct and effective. In particular, based on the principle of energy coding, an interesting finding is that within neural ensembles, some subsets of neurons fire action potentials at the suprathreshold level while others are simultaneously active at the subthreshold level. Notably, this kind of coupling has not been found in other models of biological neural networks.

20.
It has been a matter of debate how firing rates or spatiotemporal spike patterns carry information in the brain. Recent experimental and theoretical work in part showed that these codes, especially a population rate code and a synchronous code, can be dually used in a single architecture. However, we are not yet able to relate the role of firing rates and synchrony to the spatiotemporal structure of inputs and the architecture of neural networks. In this article, we examine how feedforward neural networks encode multiple input sources in the firing patterns. We apply spike-time-dependent plasticity as a fundamental mechanism to yield synaptic competition and the associated input filtering. We use the Fokker-Planck formalism to analyze the mechanism for synaptic competition in the case of multiple inputs, which underlies the formation of functional clusters in downstream layers in a self-organizing manner. Depending on the types of feedback coupling and shared connectivity, clusters are independently engaged in population rate coding or synchronous coding, or they interact to serve as input filters. Classes of dual codings and functional roles of spike-time-dependent plasticity are also discussed.
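The pair-based STDP rule invoked above is usually written as an exponential window in the pre-post spike-time difference. This sketch uses common illustrative parameters (the paper's values may differ), with a slightly larger depression amplitude so the rule is net-depressing, a standard choice for inducing the synaptic competition the abstract describes:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pair-based exponential STDP window.  dt = t_post - t_pre (s):
    pre-before-post (dt >= 0) potentiates by a_plus * exp(-dt/tau_plus);
    post-before-pre (dt < 0) depresses by a_minus * exp(dt/tau_minus).
    All parameter values here are illustrative placeholders."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```

Because the window integrates to a negative value, uncorrelated inputs are depressed on average while inputs that reliably precede postsynaptic spikes are potentiated, which is the filtering mechanism that lets clusters form around distinct input sources.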
