Similar Documents
20 similar documents found (search time: 31 ms)
1.
Few algorithms for supervised training of spiking neural networks exist that can deal with patterns of multiple spikes, and their computational properties are largely unexplored. We demonstrate in a set of simulations that the ReSuMe learning algorithm can successfully be applied to layered neural networks. Input and output patterns are encoded as spike trains of multiple precisely timed spikes, and the network learns to transform the input trains into target output trains. This is done by combining the ReSuMe learning algorithm with multiplicative scaling of the connections of downstream neurons. We show in particular that layered networks with one hidden layer can learn the basic logical operations, including Exclusive-Or, while networks without a hidden layer cannot, mirroring an analogous result for layered networks of rate neurons. While supervised learning in spiking neural networks is not yet fit for technical purposes, exploring the computational properties of spiking neural networks advances our understanding of how computations can be done with spike trains.
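The ReSuMe-style update can be illustrated with a discrete-time sketch: the target-minus-output spike error is gated by an exponential trace of presynaptic activity, which is the general form of the rule. The constants `a`, `A`, and `tau` below are illustrative values, not the paper's.

```python
import numpy as np

def resume_dw(inputs, target, output, dt=1e-3, a=5e-4, A=1e-4, tau=0.005):
    """Simplified discrete-time ReSuMe-style weight change for one synapse.

    `inputs`, `target`, `output` are binary spike sequences (one bin per
    time step). Weights grow where the target train fires, shrink where
    the actual output fires, scaled by a presynaptic eligibility trace.
    """
    trace, dw = 0.0, 0.0
    for s_in, s_tgt, s_out in zip(inputs, target, output):
        trace = trace * np.exp(-dt / tau) + s_in   # presynaptic trace decay + spike
        dw += (s_tgt - s_out) * (a + A * trace)    # error-gated update
    return dw
```

If the target fires shortly after a presynaptic spike while the actual output stays silent, the weight increases; in the reverse situation it decreases.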

2.
The objective of this work is to use a multi-core embedded platform as a computing architecture for neural applications relevant to neuromorphic engineering: e.g., robotics, and artificial and spiking neural networks. Recently, it has been shown how spike-timing-dependent plasticity (STDP) can play a key role in pattern recognition. In particular, multiple repeating arbitrary spatio-temporal spike patterns hidden in spike trains can be robustly detected and learned by multiple neurons equipped with spike-timing-dependent plasticity listening to the incoming spike trains. This paper presents an implementation, on a biological time scale, of an STDP algorithm that localizes repeating spatio-temporal spike patterns on a multi-core embedded platform.
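A minimal sketch of the pair-based STDP window such pattern detectors rely on: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise, both decaying exponentially with the spike-time difference. The amplitudes and time constant are illustrative values, not the ones used in the paper.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).

    delta_t > 0 (pre before post) -> potentiation;
    delta_t <= 0 (post before pre) -> depression.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)
```

With a slight depression bias (`a_minus > a_plus`), uncorrelated inputs are weakened on average, while synapses that consistently fire just before the postsynaptic neuron are strengthened, which is what lets a neuron lock onto a repeating pattern.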

3.
When periodic current is injected into an integrate-and-fire model neuron, the voltage as a function of time converges from different initial conditions to an attractor that produces reproducible sequences of spikes. The attractor reliability is a measure of the stability of spike trains against intrinsic noise and is quantified here as the inverse of the number of distinct spike trains obtained in response to repeated presentations of the same stimulus. High reliability characterizes neurons that can support a spike-time code, unlike neurons with discharges forming a renewal process (such as a Poisson process). These two classes of responses cannot be distinguished using measures based on the spike-time histogram, but they can be identified by the attractor dynamics of spike trains, as shown here using a new method for calculating the attractor reliability. We applied these methods to spike trains obtained from current injection into cortical neurons recorded in vitro. These spike trains did not form a renewal process and had a higher reliability compared to renewal-like processes with the same spike-time histogram.
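The reliability measure defined above (the inverse of the number of distinct spike trains across repeated trials) is simple to compute; a minimal sketch, assuming the trials have already been discretized into comparable sequences of spike times:

```python
def attractor_reliability(spike_trains):
    """Inverse of the number of distinct spike trains observed across
    repeated presentations of the same stimulus. Equals 1.0 when every
    trial produces the identical train (perfectly reliable attractor)."""
    distinct = {tuple(train) for train in spike_trains}
    return 1.0 / len(distinct)
```

For a perfectly reliable neuron this returns 1.0; for a Poisson-like neuron, where essentially every trial differs, it approaches 1/(number of trials).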

4.
Spiking neurons are very flexible computational modules, which can implement with different values of their adjustable synaptic parameters an enormous variety of different transformations F from input spike trains to output spike trains. We examine in this letter the question to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm where during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogously as the well-known condition of learnability by perceptrons. However, the linear separability criterion has to be applied here to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons, dynamic synapses, and more general input distributions. 
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.

5.
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However, any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based on the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models that neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem and provide a practical step-by-step procedure for applying it to testing the sufficiency of neural population models. Using several simple analytically tractable models and more complex simulated and real data sets, we demonstrate that important features of the population activity can be detected only using the multivariate extension of the test.
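The univariate time-rescaling procedure that the multivariate test extends can be sketched as follows: rescale each interspike interval by the integrated conditional intensity, then compare the rescaled intervals against the Exp(1) distribution with a KS statistic. The function names and the simple left-Riemann integration below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def rescaled_intervals(spike_times, rate_fn, dt=1e-3, t_end=None):
    """Time-rescaling: map spike times through the integrated conditional
    intensity Lambda(t). Under a correct model the rescaled intervals
    tau_k = Lambda(t_k) - Lambda(t_{k-1}) are i.i.d. Exp(1)."""
    t_end = t_end if t_end is not None else spike_times[-1]
    grid = np.arange(0.0, t_end + dt, dt)
    Lambda = np.cumsum(rate_fn(grid)) * dt           # left-Riemann integral
    lam_at_spikes = np.interp(spike_times, grid, Lambda)
    return np.diff(lam_at_spikes)

def ks_statistic(taus):
    """KS distance between the rescaled intervals (mapped to [0,1] via the
    Exp(1) CDF) and the uniform distribution."""
    u = np.sort(1.0 - np.exp(-np.asarray(taus)))
    n = len(u)
    ecdf_hi = np.arange(1, n + 1) / n
    ecdf_lo = np.arange(0, n) / n
    return max(np.max(ecdf_hi - u), np.max(u - ecdf_lo))
```

For a homogeneous Poisson train tested against its true constant rate, the KS statistic stays near zero; a badly mis-specified rate model inflates it.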

6.
Masuda N, Aihara K. Neural Computation, 2003, 15(6): 1341-1372
Neuronal information processing is often studied on the basis of spiking patterns. The relevant statistics, such as firing rates calculated with the peri-stimulus time histogram, are obtained by averaging spiking patterns over many experimental runs. However, in real situations animals must respond to a single stimulus presentation, and what is available to the brain is not the trial statistics but the population statistics. Consequently, physiological ergodicity, namely, the consistency between trial averaging and population averaging, is implicitly assumed in the data analyses, although it does not trivially hold true. In this letter, we investigate how characteristics of noisy neural network models, such as single neuron properties, external stimuli, and synaptic inputs, affect the statistics of firing patterns. In particular, we show how high membrane-potential sensitivity to input fluctuations, the inability of neurons to remember past inputs, external stimuli with large variability and temporally separated peaks, and relatively small contributions of synaptic inputs result in spike trains that are reproducible over many trials. The reproducibility of spike trains and synchronous firing are contrasted and related to the ergodicity issue. Several numerical calculations with neural network examples are carried out to support the theoretical results.

7.
It has been proposed that cortical neurons organize dynamically into functional groups (cell assemblies) by the temporal structure of their joint spiking activity. Here, we describe a novel method to detect conspicuous patterns of coincident joint spike activity among simultaneously recorded single neurons. The statistical significance of these unitary events of coincident joint spike activity is evaluated by the joint-surprise. The method is tested and calibrated on the basis of simulated, stationary spike trains of independently firing neurons, into which coincident joint spike events were inserted under controlled conditions. The sensitivity and specificity of the method are investigated for their dependence on physiological parameters (firing rate, coincidence precision, coincidence pattern complexity) and temporal resolution of the analysis. In the companion article in this issue, we describe an extension of the method, designed to deal with nonstationary firing rates.

8.
Aoki T, Aoyagi T. Neural Computation, 2007, 19(10): 2720-2738
Although context-dependent spike synchronization among populations of neurons has been experimentally observed, its functional role remains controversial. In this modeling study, we demonstrate that in a network of spiking neurons organized according to spike-timing-dependent plasticity, an increase in the degree of synchrony of a uniform input can cause transitions between memorized activity patterns in the order presented during learning. Furthermore, context-dependent transitions from a single pattern to multiple patterns can be induced under appropriate learning conditions. These findings suggest one possible functional role of neuronal synchrony in controlling the flow of information by altering the dynamics of the network.

9.
Analyzing the dependencies between spike trains is an important step in understanding how neurons work in concert to represent biological signals. Usually this is done for pairs of neurons at a time using correlation-based techniques. Chornoboy, Schramm, and Karr (1988) proposed maximum likelihood methods for the simultaneous analysis of multiple pair-wise interactions among an ensemble of neurons. One of these methods is an iterative, continuous-time estimation algorithm for a network likelihood model formulated in terms of multiplicative conditional intensity functions. We devised a discrete-time version of this algorithm that includes a new, efficient computational strategy, a principled method to compute starting values, and a principled stopping criterion. In an analysis of simulated neural spike trains from ensembles of interacting neurons, the algorithm recovered the correct connectivity matrices and interaction parameters. In the analysis of spike trains from an ensemble of rat hippocampal place cells, the algorithm identified a connectivity matrix and interaction parameters consistent with the pattern of conjoined firing predicted by the overlap of the neurons' spatial receptive fields. These results suggest that the network likelihood model can be an efficient tool for the analysis of ensemble spiking activity.

10.
This paper introduces a mathematical description of the integrate-and-fire spiking neuron, discusses how a spiking neuron transforms a stimulus signal into a spike train, and discusses how a spiking neuron transforms an input spike train into an output spike train. Experimental results show that spiking neurons have strong capabilities for information representation, signal discrimination, and image-signal reconstruction. A method for image signal processing with spiking neural networks is presented.

11.
In order to detect members of a functional group (cell assembly) in simultaneously recorded neuronal spiking activity, we adopted the widely used operational definition that membership in a common assembly is expressed in near-simultaneous spike activity. Unitary event analysis, a statistical method to detect the significant occurrence of coincident spiking activity in stationary data, was recently developed (see the companion article in this issue). The technique for the detection of unitary events is based on the assumption that the underlying processes are stationary in time. This requirement, however, is usually not fulfilled in neuronal data. Here we describe a method that properly normalizes for changes of rate: the unitary events by moving window analysis (UEMWA). Analysis for unitary events is performed separately in overlapping time segments by sliding a window of constant width along the data. In each window, stationarity is assumed. Performance and sensitivity are demonstrated by use of simulated spike trains of independently firing neurons, into which coincident events are inserted. If cortical neurons organize dynamically into functional groups, the occurrence of near-simultaneous spike activity should be time varying and related to behavior and stimuli. UEMWA also accounts for these potentially interesting nonstationarities and allows locating them in time. The potential of the new method is illustrated by results from multiple single-unit recordings from frontal and motor cortical areas in awake, behaving monkeys.

12.
Spike trains from cortical neurons show a high degree of irregularity, with coefficients of variation (CV) of their interspike interval (ISI) distribution close to or higher than one. It has been suggested that this irregularity might be a reflection of a particular dynamical state of the local cortical circuit in which excitation and inhibition balance each other. In this "balanced" state, the mean current to the neurons is below threshold, and firing is driven by current fluctuations, resulting in irregular Poisson-like spike trains. Recent data show that the degree of irregularity in neuronal spike trains recorded during the delay period of working memory experiments is the same for both low-activity states of a few Hz and for elevated, persistent activity states of a few tens of Hz. Since the difference between these persistent activity states cannot be due to external factors coming from sensory inputs, this suggests that the underlying network dynamics might support coexisting balanced states at different firing rates. We use mean field techniques to study the possible existence of multiple balanced steady states in recurrent networks of current-based leaky integrate-and-fire (LIF) neurons. To assess the degree of balance of a steady state, we extend existing mean-field theories so that not only the firing rate, but also the coefficient of variation of the interspike interval distribution of the neurons, are determined self-consistently. Depending on the connectivity parameters of the network, we find bistable solutions of different types. If the local recurrent connectivity is mainly excitatory, the two stable steady states differ mainly in the mean current to the neurons. In this case, the mean drive in the elevated persistent activity state is suprathreshold and typically characterized by low spiking irregularity. 
If the local recurrent excitatory and inhibitory drives are both large and nearly balanced, or even dominated by inhibition, two stable states coexist, both with subthreshold current drive. In this case, the spiking variability in both the resting state and the mnemonic persistent state is large, but the balance condition implies parameter fine-tuning. Since the degree of required fine-tuning increases with network size and, on the other hand, the size of the fluctuations in the afferent current to the cells increases for small networks, overall we find that fluctuation-driven persistent activity in the very simplified type of models we analyze is not a robust phenomenon. Possible implications of considering more realistic models are discussed.
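The coefficient of variation of the ISI distribution used throughout this abstract is straightforward to estimate from a spike train; a minimal sketch:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of the interspike-interval distribution.
    CV ~ 1 for a Poisson-like irregular train; CV -> 0 for a perfectly
    regular (clock-like) train."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()
```

A regular train gives CV near 0, a Poisson train gives CV near 1, and CV > 1 indicates bursty, super-Poisson irregularity.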

13.
Koyama S, Kass RE. Neural Computation, 2008, 20(7): 1776-1795
Mathematical models of neurons are widely used to improve understanding of neuronal spiking behavior. These models can produce artificial spike trains that resemble actual spike train data in important ways, but they are not very easy to apply to the analysis of spike train data. Instead, statistical methods based on point process models of spike trains provide a wide range of data-analytical techniques. Two simplified point process models have been introduced in the literature: the time-rescaled renewal process (TRRP) and the multiplicative inhomogeneous Markov interval (m-IMI) model. In this letter we investigate the extent to which the TRRP and m-IMI models are able to fit spike trains produced by stimulus-driven leaky integrate-and-fire (LIF) neurons. With a constant stimulus, the LIF spike train is a renewal process, and the m-IMI and TRRP models will describe accurately the LIF spike train variability. With a time-varying stimulus, the probability of spiking under all three of these models depends on both the experimental clock time relative to the stimulus and the time since the previous spike, but it does so differently for the LIF, m-IMI, and TRRP models. We assessed the distance between the LIF model and each of the two empirical models in the presence of a time-varying stimulus. We found that while lack of fit of a Poisson model to LIF spike train data can be evident even in small samples, the m-IMI and TRRP models tend to fit well, and much larger samples are required before there is statistical evidence of lack of fit of the m-IMI or TRRP models. We also found that when the mean of the stimulus varies across time, the m-IMI model provides a better fit to the LIF data than the TRRP, and when the variance of the stimulus varies across time, the TRRP provides the better fit.
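A minimal Euler-scheme sketch of the leaky integrate-and-fire neuron discussed above; with a constant stimulus the resulting spike train is periodic, hence a renewal process. The parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def lif_spikes(i_ext, t_end=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron:
    tau * dv/dt = -v + i_ext(t); spike and reset when v crosses v_th.
    Returns the spike times."""
    v, spikes = 0.0, []
    for step in range(int(round(t_end / dt))):
        t = step * dt
        v += dt / tau * (-v + i_ext(t))   # leaky integration of the input
        if v >= v_th:                     # threshold crossing: emit spike
            spikes.append(t)
            v = v_reset
    return np.array(spikes)
```

Driving this with a suprathreshold constant current produces identical interspike intervals, the degenerate renewal case; adding a time-varying stimulus breaks the renewal property in exactly the way the abstract's model comparison exploits.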

14.
Information encoding and computation with spikes and bursts
Neurons compute and communicate by transforming synaptic input patterns into output spike trains. The nature of this transformation depends crucially on the properties of voltage-gated conductances in neuronal membranes. These intrinsic membrane conductances can enable neurons to generate different spike patterns including brief, high-frequency bursts that are commonly observed in a variety of brain regions. Here we examine how the membrane conductances that generate bursts affect neural computation and encoding. We simulated a bursting neuron model driven by a random current input signal with superposed noise. We consider two issues: the timing reliability of different spike patterns and the computation performed by the neuron. Statistical analysis of the simulated spike trains shows that the timing of bursts is much more precise than the timing of single spikes. Furthermore, the number of spikes per burst is highly robust to noise. Next we considered the computation performed by the neuron: how different features of the input current are mapped into specific output spike patterns. Dimensional reduction and statistical classification techniques were used to determine the stimulus features triggering different firing patterns. Our main result is that spikes, and bursts of different durations, code for different stimulus features, which can be quantified without a priori assumptions about those features. These findings lead us to propose that the biophysical mechanisms of spike generation enable individual neurons to encode different stimulus features into distinct spike patterns.

15.
Information Fusion, 2007, 8(3): 227-251
This paper presents a new approach to higher-level information fusion in which knowledge and data are represented using semantic networks composed of coupled spiking neuron nodes. Networks of simulated spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in binding of low-level features in the perception of objects. The approach presented in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated by simulation of proof-of-concept scenarios involving the tracking of suspected criminal vehicles between meeting places in an urban environment. Our results indicate that synchronized sub-assemblies of spiking nodes can be used to represent multiple simultaneous events occurring in the environment and to effectively learn new relationships between semantic items in response to these events. In contrast to models of synchronized spiking networks that use physiologically realistic parameters in order to explain limits in human short-term memory (STM) capacity, our networks are not subject to the same limitations in representational capacity for multiple simultaneous events. Simulations demonstrate that the representational capacity of our networks can be very large, but as more simultaneous events are represented by synchronized sub-assemblies, the effective learning rate for establishing new relationships decreases. 
We propose that this effect could be countered by speeding up the spiking dynamics of the networks (a tactic of limited availability to biological systems). Such a speedup would allow the number of simultaneous events to increase without compromising the learning rate.

16.
Lo JT. Neural Computation, 2011, 23(10): 2626-2682
A biologically plausible low-order model (LOM) of biological neural networks is proposed. LOM is a recurrent hierarchical network of models of dendritic nodes and trees; spiking and nonspiking neurons; unsupervised, supervised covariance and accumulative learning mechanisms; feedback connections; and a scheme for maximal generalization. These component models are motivated and necessitated by making LOM learn and retrieve easily without differentiation, optimization, or iteration, and cluster, detect, and recognize multiple and hierarchical corrupted, distorted, and occluded temporal and spatial patterns. Four models of dendritic nodes are given that are all described as a hyperbolic polynomial that acts like an exclusive-OR logic gate when the model dendritic nodes input two binary digits. A model dendritic encoder that is a network of model dendritic nodes encodes its inputs such that the resultant codes have an orthogonality property. Such codes are stored in synapses by unsupervised covariance learning, supervised covariance learning, or unsupervised accumulative learning, depending on the type of postsynaptic neuron. A masking matrix for a dendritic tree, whose upper part comprises model dendritic encoders, enables maximal generalization on corrupted, distorted, and occluded data. It is a mathematical organization and idealization of dendritic trees with overlapped and nested input vectors. A model nonspiking neuron transmits inhibitory graded signals to modulate its neighboring model spiking neurons. Model spiking neurons evaluate the subjective probability distribution (SPD) of the labels of the inputs to model dendritic encoders and generate spike trains with such SPDs as firing rates. Feedback connections from the same or higher layers with different numbers of unit-delay devices reflect different signal traveling times, enabling LOM to fully utilize temporally and spatially associated information. 
Biological plausibility of the component models is discussed. Numerical examples are given to demonstrate how LOM operates in retrieving, generalizing, and unsupervised and supervised learning.

17.
Miller P. Neural Computation, 2006, 18(6): 1268-1317
Attractor networks are likely to underlie working memory and integrator circuits in the brain. It is unknown whether continuous quantities are stored in an analog manner or discretized and stored in a set of discrete attractors. In order to investigate the important issue of how to differentiate the two systems, here we compare the neuronal spiking activity that arises from a continuous (line) attractor with that from a series of discrete attractors. Stochastic fluctuations cause the position of the system along its continuous attractor to vary as a random walk, whereas in a discrete attractor, noise causes spontaneous transitions to occur between discrete states at random intervals. We calculate the statistics of spike trains of neurons firing as a Poisson process with rates that vary according to the underlying attractor network. Since individual neurons fire spikes probabilistically and since the state of the network as a whole drifts randomly, the spike trains of individual neurons follow a doubly stochastic (Poisson) point process. We compare the series of spike trains from the two systems using the autocorrelation function, Fano factor, and interspike interval (ISI) distribution. Although the variation in rate can be dramatically different, especially for short time intervals, surprisingly both the autocorrelation functions and Fano factors are identical, given appropriate scaling of the noise terms. Since the range of firing rates is limited in neurons, we also investigate systems for which the variation in rate is bounded by either rigid limits or because of leak to a single attractor state, such as the Ornstein-Uhlenbeck process. In these cases, the time dependence of the variance in rate can be different between discrete and continuous systems, so that in principle, these processes can be distinguished using second-order spike statistics.
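The Fano factor used above (variance over mean of spike counts in fixed windows, equal to 1 for a homogeneous Poisson process) can be estimated with a minimal sketch:

```python
import numpy as np

def fano_factor(spike_times, window, t_end):
    """Fano factor: variance/mean of spike counts in non-overlapping
    windows of fixed width. Equals 1 for a homogeneous Poisson process;
    doubly stochastic (rate-fluctuating) processes push it above 1."""
    edges = np.arange(0.0, t_end + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()
```

For the doubly stochastic trains discussed in the abstract, the Fano factor exceeds 1 because rate drift adds count variance on top of the Poisson spiking variability.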

18.
We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and the silicon synapses to receive spikes from the outside is based on the "address-event representation" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neurons' response properties and the synapses' characteristics, in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.

19.
A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation.

20.
We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from a trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in an RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.
