Similar Literature
20 similar documents found (search time: 15 ms)
1.
A simple expression for a lower bound of Fisher information is derived for a network of recurrently connected spiking neurons that have been driven to a noise-perturbed steady state. We call this lower bound linear Fisher information, as it corresponds to the Fisher information that can be recovered by a locally optimal linear estimator. Unlike recent similar calculations, the approach used here includes the effects of nonlinear gain functions and correlated input noise and yields a surprisingly simple and intuitive expression that offers substantial insight into the sources of information degradation across successive layers of a neural network. Here, this expression is used to (1) compute the optimal (i.e., information-maximizing) firing rate of a neuron, (2) demonstrate why sharpening tuning curves by either thresholding or the action of recurrent connectivity is generally a bad idea, (3) show how a single cortical expansion is sufficient to instantiate a redundant population code that can propagate across multiple cortical layers with minimal information loss, and (4) show that optimal recurrent connectivity strongly depends on the covariance structure of the inputs to the network.
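In symbols, the bound discussed above is the linear Fisher information J = f′(s)ᵀ Σ⁻¹ f′(s), where f′ collects the tuning-curve derivatives at the stimulus value and Σ is the noise covariance. A minimal numerical sketch (the three-neuron toy values are illustrative, not from the paper):

```python
import numpy as np

def linear_fisher_information(f_prime, cov):
    """Linear Fisher information J = f'^T C^{-1} f': the information a
    locally optimal linear estimator can recover at this stimulus."""
    return float(f_prime @ np.linalg.solve(cov, f_prime))

# Toy population of 3 neurons with independent noise (illustrative values)
f_prime = np.array([1.0, 2.0, 0.5])   # tuning-curve slopes df_i/ds
cov = np.diag([0.5, 1.0, 2.0])        # noise covariance
J = linear_fisher_information(f_prime, cov)
```

With independent noise the expression reduces to a sum of per-neuron terms f′ᵢ²/σᵢ²; off-diagonal covariance can raise or lower J, which is the covariance dependence that point (4) of the abstract refers to.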

2.
The precision of the neural code is commonly investigated using two families of statistical measures: Shannon mutual information and derived quantities when investigating very small populations of neurons and Fisher information when studying large populations. These statistical tools are no longer the preserve of theorists and are being applied by experimental research groups in the analysis of empirical data. Although the relationship between information-theoretic and Fisher-based measures in the limit of infinite populations is relatively well understood, how these measures compare in finite-size populations has not yet been systematically explored. We aim to close this gap. We are particularly interested in understanding which stimuli are best encoded by a given neuron within a population and how this depends on the chosen measure. We use a novel Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the SSI) for populations of up to 256 neurons and show that Fisher information can be used to accurately estimate both mutual information and SSI for populations of the order of 100 neurons, even in the presence of biologically realistic variability, noise correlations, and experimentally relevant integration times. According to both measures, the stimuli that are best encoded are those falling at the flanks of the neuron's tuning curve. In populations of fewer than around 50 neurons, however, Fisher information can be misleading.

3.
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrate numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.

4.
Stochastic resonance (SR) is a phenomenon in which the presence of noise helps a nonlinear system amplify a weak (sub-barrier) signal. In this paper, we investigate how SR behavior can be observed in practical autoassociative neural networks with Hopfield-type memory under stochastic dynamics. We focus on SR responses in two systems consisting of 3 and 156 neurons, treated as effective double-well and multi-well models, respectively. It is demonstrated that the neural network can enhance weak subthreshold signals composed of stored pattern trains and achieve higher coherence between stimulus and response.
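The effective double-well picture can be illustrated with the standard overdamped bistable model driven by a weak periodic signal plus noise (a generic SR sketch; the parameter values are illustrative and not taken from the paper):

```python
import numpy as np

def double_well(sigma, A=0.2, omega=0.5, T=200.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of dx = (x - x^3 + A*sin(omega*t)) dt
    + sigma dW: a bistable unit driven by a weak (sub-barrier) signal."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = -1.0                      # start in the left well
    for k in range(1, n):
        t = k * dt
        drift = x[k-1] - x[k-1]**3 + A * np.sin(omega * t)
        x[k] = x[k-1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

quiet = double_well(sigma=0.0)   # no noise: the weak signal alone cannot cross
noisy = double_well(sigma=0.8)   # noise enables well-to-well transitions
```

Without noise the forcing amplitude A = 0.2 is below the barrier-tilting threshold and the state stays in the left well; with moderate noise the trajectory hops between wells in partial synchrony with the signal, which is the SR effect the abstract describes.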

5.
Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
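For linear subthreshold dynamics, the exact grid-to-grid propagator is what makes this scheme possible: with dV/dt = (−V + RI)/τ and constant input over a step h, the voltage advances by an exponential map with no discretization error. A small sketch (the parameter values are illustrative):

```python
import numpy as np

tau, R, I = 10.0, 1.0, 2.0    # membrane time constant, resistance, drive
h = 1.0                       # grid step

def exact_step(v):
    # Exact propagator of dV/dt = (-V + R*I)/tau over one grid step h
    p = np.exp(-h / tau)
    return v * p + R * I * (1.0 - p)

v = 0.0
for n in range(10):
    v = exact_step(v)

# The stepped solution matches the closed-form solution at the grid point
analytic = R * I * (1.0 - np.exp(-10 * h / tau))
```

Because the stepped and closed-form solutions agree to rounding error, any remaining inaccuracy in such simulations comes solely from constraining spikes to the grid, which is what the interpolation techniques above address.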

6.
Neuronal adaptation as well as interdischarge interval correlations have been shown to be functionally important properties of physiological neurons. We explore the dynamics of a modified leaky integrate-and-fire (LIF) neuron, referred to as the LIF with threshold fatigue, and show that it reproduces these properties. In this model, the postdischarge threshold reset depends on the preceding sequence of discharge times. We show that in response to various classes of stimuli, namely, constant currents, step currents, white gaussian noise, and sinusoidal currents, the model exhibits new behavior compared with the standard LIF neuron. More precisely, (1) step currents lead to adaptation, that is, a progressive decrease of the discharge rate following the stimulus onset, while in the standard LIF, no such patterns are possible; (2) a saturation in the firing rate occurs in certain regimes, a behavior not seen in the LIF neuron; (3) interspike intervals of the noise-driven modified LIF under constant current are correlated in a way reminiscent of experimental observations, while those of the standard LIF are independent of one another; (4) the magnitude of the correlation coefficients decreases as a function of noise intensity; and (5) the dynamics of the sinusoidally forced modified LIF are described by iterates of an annulus map, an extension to the circle map dynamics displayed by the LIF model. Under certain conditions, this map can give rise to sensitivity to initial conditions and thus chaotic behavior.
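The threshold-fatigue mechanism can be sketched in a few lines: the firing threshold jumps at each spike and relaxes back between spikes, which is what produces adaptation to a step current (a minimal sketch; the parameter values below are illustrative, not the paper's):

```python
import numpy as np

def lif_threshold_fatigue(I, T=500.0, dt=0.1, tau=10.0,
                          theta0=1.0, d_theta=0.5, tau_theta=80.0):
    """LIF whose threshold jumps by d_theta at each spike and relaxes
    back to theta0 with time constant tau_theta between spikes."""
    v, theta = 0.0, theta0
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (-v + I) / tau               # membrane dynamics
        theta += dt * (theta0 - theta) / tau_theta  # threshold relaxation
        if v >= theta:
            spikes.append(k * dt)
            v = 0.0            # voltage reset
            theta += d_theta   # threshold fatigue
    return np.array(spikes)

spikes = lif_threshold_fatigue(I=2.0)
isis = np.diff(spikes)
```

For a step of constant current the interspike intervals lengthen progressively toward a steady value, i.e., the discharge rate adapts after stimulus onset, a pattern the standard LIF (fixed threshold) cannot produce.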

7.
王友国, 董洪程, 刘健. Journal of Computer Applications, 2016, 36(8): 2192-2196
To address the problem of noise corrupting symbol transmission in digital communication systems, and to improve system reliability and reduce the bit error rate (BER) of the received signal, a stochastic resonance (SR) system based on the optimal matching method and parallel array theory is proposed. First, parallel array theory is used to enhance the SR effect of a single bistable system; second, the optimal-matching method for detecting weak SR signals is applied to the array system; finally, an expression for the signal-to-noise ratio (SNR) gain of the optimally matched array SR system is derived, and the influence of the number of array elements on the BER is analyzed. Theoretical analysis and simulations show that, compared with a single SR system, the optimally matched array SR system improves the detection of weak digital signals under strong noise: the output SNR gain is significantly greater than 1 and the BER is markedly reduced; moreover, the SR effect of the array improves as the number of array elements increases. The experimental results indicate that the optimally matched array SR system can effectively improve the reliability of digital communication systems in practical engineering.

8.
Based on an analysis of the shortcomings of existing single-precision reciprocal algorithms in graphics processors, a single-precision reciprocal algorithm using a first-order Taylor series is designed. Compared with traditional algorithms, it improves resource consumption, cycle count, and efficiency. The main logic of this floating-point reciprocal unit consists of a 24-bit integer adder, a ROM, and a 24-bit multiplier. The mantissa range [1, 2) is divided evenly into 4096 intervals; the squared reciprocal of each interval's start point is stored in a lookup table, and within each interval the reciprocal is computed by a first-order Taylor expansion. Simulation results agree with theory and meet the accuracy requirement of single-precision floating point. The algorithm has been successfully taped out and is used in the domestically developed third-generation graphics processor JM7200.
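The reason storing only the squared reciprocal suffices is that the first-order Taylor expansion rearranges to a single multiply: 1/x ≈ 1/x₀ − (x − x₀)/x₀² = (2x₀ − x)/x₀². A floating-point model of the scheme (a sketch of the numerics, not the shipped RTL):

```python
import numpy as np

N = 4096  # number of intervals over the mantissa range [1, 2)

# LUT holds 1/x0**2 at each interval start; then
# 1/x ≈ 1/x0 - (x - x0)/x0**2 = (2*x0 - x) / x0**2
starts = 1.0 + np.arange(N) / N
lut = 1.0 / starts**2

def recip_taylor(x):
    """First-order-Taylor reciprocal for x in [1, 2), using only a
    table lookup, a subtraction, and a multiplication."""
    i = min(int((x - 1.0) * N), N - 1)
    x0 = 1.0 + i / N
    return (2.0 * x0 - x) * lut[i]

err = max(abs(recip_taylor(x) - 1.0 / x)
          for x in np.linspace(1.0, 2.0, 10001)[:-1])
```

The Taylor remainder over an interval of width 2⁻¹² is bounded by (2⁻¹²)² ≈ 6 × 10⁻⁸, below the single-precision ulp on [1, 2), which is consistent with the accuracy claim above.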

9.
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.

10.
It is well known that information processing in the brain depends on systems of neurons. The simplest such systems are neural networks, whose learning methods have been studied extensively; however, research on large-scale neural network systems remains incomplete. Here, we propose a learning method for millions of neurons serving as resources for a neuron computer. The method is a type of recurrent path selection, so the target neural network must have a nested structure. The method executes at high speed. When information processing is carried out with analog signals, the accumulation of errors is a serious problem. We therefore equipped the neural network with a digitizer and AD/DA (analog-digital) converters constructed from neurons; these preserve all information signals and guarantee precision in complex operations. Using these techniques, we generated an image shifter constructed of 8.6 million neurons. We believe a neuron computer could be designed using this scheme. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.

11.
Discrimination with Spike Times and ISI Distributions
Kang K, Amari S. Neural Computation, 2008, 20(6): 1411-1426
We study the discrimination capability of spike time sequences using the Chernoff distance as a metric. We assume that spike sequences are generated by renewal processes and study how the Chernoff distance depends on the shape of the interspike interval (ISI) distribution. First, we consider a lower bound to the Chernoff distance because it has a simple closed form. Then we consider specific models of ISI distributions, such as the gamma, inverse gaussian (IG), and exponential-with-refractory-period (ER) distributions, and that of the leaky integrate-and-fire (LIF) neuron. We find that the discrimination capability of spike times strongly depends on the high-order moments of the ISI and is higher when the spike time sequence has larger skewness and smaller kurtosis. High variability in terms of the coefficient of variation (CV) does not necessarily mean that the spike times have less discrimination capability. Spike sequences generated by the gamma distribution have the minimum discrimination capability for a given mean and variance of ISI. We use series expansions to calculate the mean and variance of ISIs for LIF neurons as a function of the mean input level and the input noise variance. Spike sequences from an LIF neuron are more capable of discrimination than those of the IG and gamma distributions when the stationary voltage level is close to the neuron's threshold.

12.
蒙西, 乔俊飞, 李文静. CAAI Transactions on Intelligent Systems, 2018, 13(3): 331-338
To address the difficulty of determining the hidden-layer structure of radial basis function (RBF) neural networks, a structure-design algorithm based on fast density clustering is proposed. The algorithm applies the good clustering properties of fast density clustering to RBF network structure design: the points of highest density are found and taken as hidden-layer neurons, thereby determining the number of hidden neurons and their initial parameters. In addition, the properties of the Gaussian function are exploited to keep every hidden-layer neuron active. Finally, an improved second-order algorithm is used to train the network, improving its convergence speed and generalization ability. Simulations on benchmark nonlinear function approximation and nonlinear dynamic system identification tasks show that the RBF network designed by fast density clustering has a compact structure, fast learning, and good generalization.

13.
This paper proposes a fast method for sorting floating-point numbers of arbitrary distribution. It is based on the machine encoding of floating-point numbers and is fast, simple to implement, and practical, with O(n) time complexity. In sorting experiments on random floating-point numbers of various distributions, it ran several times faster than quicksort. The same idea can also be applied to sorting doubles, integers, strings, and other data types.
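The machine-encoding idea is the standard order-preserving key transform: reinterpret the float's IEEE 754 bits as an unsigned integer, flip all bits for negatives and set the sign bit for non-negatives; the keys then sort in numeric order and admit an O(n) LSD radix sort. A Python sketch for 64-bit doubles (the paper's implementation details may differ):

```python
import struct
import random

def float_key(x):
    """Map a float's IEEE 754 bit pattern to an unsigned integer whose
    natural order matches the numeric order of the floats."""
    (b,) = struct.unpack('<Q', struct.pack('<d', x))
    return b ^ 0xFFFFFFFFFFFFFFFF if b >> 63 else b | (1 << 63)

def radix_sort_floats(xs, bits=16):
    """LSD radix sort on the transformed keys: a fixed number of
    counting-sort passes, so O(n) overall for fixed-width keys."""
    keyed = [(float_key(x), x) for x in xs]
    mask = (1 << bits) - 1
    for shift in range(0, 64, bits):
        buckets = [[] for _ in range(1 << bits)]
        for k, x in keyed:
            buckets[(k >> shift) & mask].append((k, x))
        keyed = [pair for b in buckets for pair in b]
    return [x for _, x in keyed]

random.seed(0)
data = [random.uniform(-1e6, 1e6) for _ in range(1000)]
out = radix_sort_floats(data)
```

Because the transform is a pure bit manipulation and each counting-sort pass is stable, the whole sort is distribution-independent, which is why the method works for arbitrarily distributed inputs.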

14.
Graphics Processing Units (GPUs), originally developed for computer games, now provide computational power for scientific applications. In this paper, we develop a general-purpose Lattice Boltzmann code that runs entirely on a single GPU. The results show that: (1) single-precision floating-point arithmetic is sufficient for LBM computation in comparison to double precision; (2) the implementation of LBM on GPUs achieves up to about one billion lattice updates per second using single precision; (3) GPUs provide an inexpensive alternative to large clusters for fluid dynamics prediction.

15.
For classification applications, the role of the hidden-layer neurons of a radial basis function (RBF) neural network can be interpreted as a function that maps input patterns from a nonlinearly separable space to a linearly separable space. In the new space, the responses of the hidden-layer neurons form new feature vectors, and the discriminative power is then determined by the RBF centers. In the present study, we propose to choose RBF centers based on the Fisher ratio class separability measure, with the objective of achieving maximum discriminative power. We implement this idea using a multistep procedure that combines the Fisher ratio, an orthogonal transform, and a forward selection search method. Our motivation for employing the orthogonal transform is to decouple the correlations among the responses of the hidden-layer neurons so that the class separability provided by individual RBF neurons can be evaluated independently. The strengths of our method are twofold. First, it selects a parsimonious network architecture. Second, it selects centers that provide large class separation.
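The selection criterion itself is simple: for one candidate feature (e.g., the response of one RBF neuron), the Fisher ratio is the squared between-class mean difference over the summed within-class variances. A two-class sketch with synthetic data (illustrative, not the paper's procedure):

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Fisher ratio of a single feature for two classes:
    between-class separation over within-class spread."""
    a, b = feature[labels == 0], feature[labels == 1]
    between = (a.mean() - b.mean()) ** 2
    within = a.var() + b.var()
    return between / within

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
# A feature that separates the classes well vs. one that barely does
good = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
poor = np.concatenate([rng.normal(0, 1, 100), rng.normal(0.2, 1, 100)])

ratio_good = fisher_ratio(good, labels)
ratio_poor = fisher_ratio(poor, labels)
```

Ranking candidate centers by this ratio (after the orthogonal transform decorrelates their responses) is what lets the forward selection pick a small set of highly discriminative centers.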

16.
A neuromorphic depth-from-motion vision model with STDP adaptation
We propose a simplified depth-from-motion vision model based on leaky integrate-and-fire (LIF) neurons for edge detection and two-dimensional depth recovery. In the model, every LIF neuron is able to detect the irradiance edges passing through its receptive field in an optical flow field, and respond to the detection by firing a spike when the neuron's firing criterion is satisfied. If a neuron fires a spike, the time-of-travel of the spike-associated edge is transferred as the prediction information to the next synapse-linked neuron to determine its state. Correlations between input spikes and their timing thus encode depth in the visual field. The adaptation of synapses mediated by spike-timing-dependent plasticity is used to improve the algorithm's robustness against inaccuracy caused by spurious edge propagation. The algorithm is characterized on both artificial and real image sequences. The implementation of the algorithm in analog very large scale integrated (aVLSI) circuitry is also discussed.

17.
In this paper, analog circuit designs for implementations of Gibbs samplers are presented, which offer fully parallel computation. The Gibbs sampler for a discrete solution space (or Boltzmann machine) can be used to solve both deterministic and probabilistic assignment (association) problems. The primary drawback to the use of a Boltzmann machine for optimization is its computational complexity, since updating of the neurons is typically performed sequentially. We first consider the diffusion equation emulation of a Boltzmann machine introduced by Roysam and Miller (1991), which employs a parallel network of nonlinear amplifiers. It is shown that an analog circuit implementation of the diffusion equation requires a complex neural structure incorporating matched nonlinear feedback amplifiers and current multipliers. We introduce a simpler implementation of the Boltzmann machine, using a "constant gradient" diffusion equation, which eliminates the need for a matched feedback amplifier. The performance of the Roysam and Miller network and the new constant gradient (CG) network is compared using simulations for the multiple-neuron case, and integration of the Chapman-Kolmogorov equation for a single neuron. Based on the simulation results, heuristic criteria for establishing the diffusion equation boundaries, and neuron sigmoidal gain are obtained. The final CG analog circuit is suitable for VLSI implementation, and hence may offer rapid convergence.

18.
张伟. Journal of Graphics, 2014, 35(2): 188
A triangular mesh model built with a self-organizing feature map neural network can produce a Delaunay-like triangulation approximating a compressed measurement point cloud, but the model suffers from approximation error and edge error. To reduce both errors, an accurately approximating triangular mesh model is constructed. First, the whole measurement point cloud is used to train all neurons of the mesh model; then the position weights of the mesh neurons are corrected along the normal directions of the mesh vertices; finally, the boundary point set of the point cloud is used to train the boundary neurons of the mesh. Examples show that the model effectively reduces the edge error of the triangular mesh, greatly improves the accuracy with which the mesh approximates the scattered point cloud, and covers the cloud's full extent.

19.
To allow multiplications in different number formats to share hardware resources, a design method is proposed for a multifunction array multiplier supporting IEEE 754 64-bit double-precision and 32-bit single-precision floating point, 32-bit integers, and 16-bit fixed point. Carry-lookahead addition and pipelining are used to improve multiplier performance. A multiply unit compatible with the TMS320C6701 multiply instructions is designed, and simulation results verify the correctness of the design.

20.
This article studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNN augmented with a few simple discontinuous (e.g., threshold or zero test) neurons. We argue that even with weights restricted to polynomial time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous, but they boost other arithmetic operations in the net function (e.g., neurons that can use divisions and polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model, when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN that are known to demand precision that is linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.
