Similar Documents
20 similar documents found.
1.
This paper studies the behavior of recurrent neural networks with lateral inhibition. Such network architecture is important in biological neural systems. General conditions determining the existence, number, and stability of network equilibria are derived. The manner in which these features depend upon steepness of neuronal activation functions and the strength of lateral inhibition is demonstrated for a broad range of nondecreasing activation functions including the discontinuous threshold function which represents the infinite gain limit. For uniform lateral inhibitory networks, the lateral inhibition is shown to sharpen neuron output patterns by increasing separation of suprathreshold activity levels of competing neurons. This results in the tendency of one neuron's output to dominate those of the others, which can afford a "winner-take-all" (WTA) mechanism. Importantly, multiple stable equilibria may exist and shifts in input levels may yield network state transitions that exhibit hysteresis. A limitation of using lateral inhibition to implement WTA is further demonstrated. The possible significance of these identified network dynamics to physiology and pathophysiology of the striatum (particularly in Parkinsonian rest tremor) is discussed.
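The winner-take-all tendency described above can be illustrated with a minimal sketch (not taken from the paper): a few neurons with a steep sigmoidal activation and uniform lateral inhibition, iterated to a steady state. All parameter values here are illustrative assumptions.

```python
import numpy as np

def simulate_wta(inputs, inhibition=2.0, gain=8.0, steps=2000, dt=0.05):
    """Iterate x_i' = -x_i + I_i - c * sum_{j != i} f(x_j), with f a steep sigmoid."""
    f = lambda u: 1.0 / (1.0 + np.exp(-gain * u))
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        y = f(x)
        lateral = inhibition * (y.sum() - y)   # uniform inhibition from all other neurons
        x += dt * (-x + inputs - lateral)
    return f(x)

# The neuron with the slightly larger input tends to suppress the others.
print(simulate_wta(np.array([1.0, 1.1, 0.9])))
```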

2.
3.
We define a stochastic neuron as an element that increases its internal state with probability p until a threshold value is reached; after that its internal state is set back to the initial value. We study the local information of a stochastic neuron between the message arriving from the input neurons and the response of the neuron. We study the dependence of the local information on the firing probability alpha of the synaptic inputs in a network of such stochastic neurons. The values of alpha obtained in the simulations are the same as those obtained theoretically by maximization of local mutual information. We conclude that the global dynamics maximizes the local mutual information of single units, which means that the self-selected parameter value of the population dynamics is such that each neuron behaves as an optimal encoder.
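A minimal sketch (illustrative, not the paper's code) of the stochastic neuron defined above: the internal state grows by one with probability p at each time step, and the neuron fires and resets when a threshold is reached. The parameter values are assumptions.

```python
import random

def run_stochastic_neuron(p=0.3, threshold=5, steps=1000, seed=0):
    rng = random.Random(seed)
    state, spikes = 0, []
    for t in range(steps):
        if rng.random() < p:        # the internal state grows with probability p
            state += 1
        if state >= threshold:      # threshold reached: emit a spike and reset
            spikes.append(t)
            state = 0
    return spikes

# Expected firing rate is roughly p / threshold.
print(len(run_stochastic_neuron()))
```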

4.
We investigate possibilities of inducing temporal structures without fading memory in recurrent networks of spiking neurons strictly operating in the pulse-coding regime. We extend the existing gradient-based algorithm for training feedforward spiking neuron networks, SpikeProp (Bohte, Kok, & La Poutré, 2002), to recurrent network topologies, so that temporal dependencies in the input stream are taken into account. It is shown that temporal structures with unbounded input memory specified by simple Moore machines (MM) can be induced by recurrent spiking neuron networks (RSNN). The networks are able to discover pulse-coded representations of abstract information processing states coding potentially unbounded histories of processed inputs. We show that it is often possible to extract from trained RSNN the target MM by grouping together similar spike trains appearing in the recurrent layer. Even when the target MM was not perfectly induced in a RSNN, the extraction procedure was able to reveal weaknesses of the induced mechanism and the extent to which the target machine had been learned.

5.
A new type of model neuron is introduced as a building block of an associative memory. The neuron, which has a number of receptor zones, processes both the amplitude and the frequency of input signals, associating a small number of features encoded by those signals. Using this two-parameter input in our model compared to the one-dimensional inputs of conventional model neurons (e.g., the McCulloch-Pitts neuron) offers an increased memory capacity. In our model, there is a competition among inputs in each zone with a subsequent cooperation of the winners to specify the output. The associative memory consists of a network of such neurons. A state-space model is used to define the neurodynamics. We explore properties of the neuron and the network and demonstrate its favorable capacity and recall capabilities. Finally, the network is used in an application designed to find trademarks that sound alike.

6.
Deep neural networks such as GoogLeNet, ResNet, and BERT have achieved impressive performance in tasks such as image and text classification. To understand how such performance is achieved, we probe a trained deep neural network by studying neuron activations, i.e., combinations of neuron firings, at various layers of the network in response to a particular input. With a large number of inputs, we aim to obtain a global view of what neurons detect by studying their activations. In particular, we develop visualizations that show the shape of the activation space, the organizational principle behind neuron activations, and the relationships of these activations within a layer. Applying tools from topological data analysis, we present TopoAct, a visual exploration system to study topological summaries of activation vectors. We present exploration scenarios using TopoAct that provide valuable insights into learned representations of neural networks. We expect TopoAct to give a topological perspective that enriches the current toolbox of neural network analysis, and to provide a basis for network architecture diagnosis and data anomaly detection.

7.
Rough Neural Computing in Signal Analysis (total citations: 4; self-citations: 0; citations by others: 4)
This paper introduces an application of a particular form of rough neural computing in signal analysis. The form of rough neural network used in this study is based on rough sets, rough membership functions, and decision rules. Two forms of neurons are found in such a network: rough membership function neurons and decider neurons. Each rough membership function neuron constructs upper and lower approximation equivalence classes in response to input signals as an aid to classifying inputs. In this paper, the output of a rough membership function neuron results from the computation performed by a rough membership function in determining the degree of overlap between an upper approximation set representing approximate knowledge about inputs and a set of measurements representing certain knowledge about a particular class of objects. Decider neurons implement granules derived from decision rules extracted from data sets using rough set theory. A decider neuron instantiates approximate reasoning in assessing rough membership function values gleaned from input data. An introduction to the basic concepts underlying rough membership neural networks is briefly given. An application of rough neural computing to classifying power system faults is considered.

8.
We present in this paper a general model of recurrent networks of spiking neurons, composed of several populations, and whose interaction pattern is set with a random draw. We use for simplicity discrete-time neuron updating, and the emitted spikes are transmitted through randomly delayed lines. In excitatory-inhibitory networks, we show that inhomogeneous delays may favour synchronization provided that the inhibitory delays distribution is significantly stronger than the excitatory one. In that case, slow waves of synchronous activity appear (this synchronous activity is stronger in the inhibitory population). This synchrony allows for a fast adaptivity of the network to various input stimuli. In networks observing the constraint of short-range excitation and long-range inhibition, we show that under some parameter settings, this model displays properties of (1) dynamic retention, (2) input normalization, and (3) target tracking. These properties are of interest for modelling biological topologically organized structures, and for robotic applications taking place in noisy environments where targets vary in size, speed and duration.

9.
Permitted and forbidden sets in symmetric threshold-linear networks (total citations: 1; self-citations: 0; citations by others: 1)
The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
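As a rough illustration of multistability in such networks, the sketch below (not from the paper) simulates a small symmetric threshold-linear network x' = -x + [Wx + b]_+ with mutual inhibition and inspects the eigenvalues of I - W; the exact matrix convention and the copositivity criterion should be taken from the paper itself.

```python
import numpy as np

def simulate_tln(W, b, steps=2000, dt=0.01, x0=None):
    """Euler-integrate the threshold-linear (rectified) dynamics x' = -x + [Wx + b]_+."""
    x = np.zeros(len(b)) if x0 is None else np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-x + np.maximum(0.0, W @ x + b))
    return x

W = np.array([[0.0, -1.5], [-1.5, 0.0]])   # symmetric mutual inhibition
b = np.ones(2)
print(np.linalg.eigvalsh(np.eye(2) - W))    # I - W has a negative eigenvalue: not positive semidefinite
print(simulate_tln(W, b, x0=[1.0, 0.0]))    # different initial conditions...
print(simulate_tln(W, b, x0=[0.0, 1.0]))    # ...settle into different permitted sets
```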

10.
A simple expression for a lower bound of Fisher information is derived for a network of recurrently connected spiking neurons that have been driven to a noise-perturbed steady state. We call this lower bound linear Fisher information, as it corresponds to the Fisher information that can be recovered by a locally optimal linear estimator. Unlike recent similar calculations, the approach used here includes the effects of nonlinear gain functions and correlated input noise and yields a surprisingly simple and intuitive expression that offers substantial insight into the sources of information degradation across successive layers of a neural network. Here, this expression is used to (1) compute the optimal (i.e., information-maximizing) firing rate of a neuron, (2) demonstrate why sharpening tuning curves by either thresholding or the action of recurrent connectivity is generally a bad idea, (3) show how a single cortical expansion is sufficient to instantiate a redundant population code that can propagate across multiple cortical layers with minimal information loss, and (4) show that optimal recurrent connectivity strongly depends on the covariance structure of the inputs to the network.
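For context, linear Fisher information is commonly written as I_lin(s) = f'(s)^T C^{-1} f'(s), with f'(s) the vector of tuning-curve derivatives and C the response covariance. The sketch below evaluates this standard form numerically; it is an assumption for illustration, not the paper's network-specific derivation.

```python
import numpy as np

def linear_fisher_information(f_prime, cov):
    """I_lin = f'(s)^T C^{-1} f'(s): information recoverable by a locally optimal linear estimator."""
    return float(f_prime @ np.linalg.solve(cov, f_prime))

f_prime = np.array([2.0, -1.0, 0.5])          # tuning-curve derivatives at the stimulus of interest
cov = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.0, 0.2],
                [0.1, 0.2, 1.0]])             # covariance of the noise-perturbed responses
print(linear_fisher_information(f_prime, cov))
```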

11.
Artificial neural networks (ANNs) are one of the hottest topics in computer science and artificial intelligence due to their potential and advantages in analyzing real-world problems in various disciplines, including but not limited to physics, biology, chemistry, and engineering. However, ANNs lack several key characteristics of biological neural networks, such as sparsity, scale-freeness, and small-worldness. The concept of sparse and scale-free neural networks has been introduced to fill this gap. Network sparsity is implemented by removing weak weights between neurons during the learning process and replacing them with random weights. When the network is initialized, the neural network is fully connected, which means the number of weights is four times the number of neurons. In this study, considering that a biological neural network has some degree of initial sparsity, we design an ANN with a prescribed level of initial sparsity. The neural network is tested on handwritten digits, Arabic characters, CIFAR-10, and Reuters newswire topics. Simulations show that it is possible to reduce the number of weights by up to 50% without losing prediction accuracy. Moreover, in both cases, the testing time is dramatically reduced compared with fully connected ANNs.
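A minimal sketch (not the paper's code) of initializing a weight matrix with a prescribed level of sparsity, as described above; the layer sizes and the random masking scheme are illustrative assumptions.

```python
import numpy as np

def sparse_init(n_in, n_out, sparsity=0.5, scale=0.1, seed=0):
    """Return a weight matrix in which roughly `sparsity` of the entries are zero."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, scale, size=(n_in, n_out))
    mask = rng.random((n_in, n_out)) >= sparsity   # keep ~ (1 - sparsity) of the connections
    return w * mask

w = sparse_init(784, 128, sparsity=0.5)
print(1.0 - np.count_nonzero(w) / w.size)          # ~0.5: about half the weights removed at initialization
```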

12.
Salinas E. Neural Computation, 2003, 15(7): 1439-1475
A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.

13.
We present a new technique, based on a proposed event-based strategy (Mattia & Del Giudice, 2000), for efficiently simulating large networks of simple model neurons. The strategy was based on the fact that interactions among neurons occur by means of events that are well localized in time (the action potentials) and relatively rare. In the interval between two of these events, the state variables associated with a model neuron or a synapse evolved deterministically and in a predictable way. Here, we extend the event-driven simulation strategy to the case in which the dynamics of the state variables in the inter-event intervals are stochastic. This extension captures both the situation in which the simulated neurons are inherently noisy and the case in which they are embedded in a very large network and receive a huge number of random synaptic inputs. We show how to effectively include the impact of large background populations into neuronal dynamics by means of the numerical evaluation of the statistical properties of single-model neurons under random current injection. The new simulation strategy allows the study of networks of interacting neurons with an arbitrary number of external afferents and inherent stochastic dynamics.
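A minimal sketch (illustrative only, deterministic case) of the event-driven idea: between input spike events, the membrane state of a leaky integrate-and-fire neuron is advanced in closed form over the whole inter-event interval instead of step by step. The weights and time constants are assumptions; the paper's contribution extends this scheme to stochastic inter-event dynamics.

```python
import heapq, math

def event_driven_lif(spike_times, weight=0.6, tau=20.0, threshold=1.0):
    events = list(spike_times)
    heapq.heapify(events)                      # pending input events, earliest first
    v, t_last, out_spikes = 0.0, 0.0, []
    while events:
        t = heapq.heappop(events)
        v *= math.exp(-(t - t_last) / tau)     # closed-form decay over the whole inter-event interval
        v += weight                            # apply the incoming spike
        if v >= threshold:                     # output spike: reset the membrane
            out_spikes.append(t)
            v = 0.0
        t_last = t
    return out_spikes

print(event_driven_lif([1.0, 2.0, 3.0, 50.0, 51.0]))
```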

14.
Understanding deep neural networks through visualization can directly reveal how they work, i.e., it provides an explanation of how a black-box model makes its decisions, which is especially important in fields such as medical diagnosis and autonomous driving. Most existing work is based on the activation-maximization framework: a neuron of interest is selected, the input (e.g., a hidden-layer feature map or the original image) is optimized, and the change in the input that maximizes the neuron's activation is taken qualitatively as an explanation. However, this approach lacks an in-depth quantitative analysis of deep neural networks. This paper proposes two visualization meta-methods: structural visualization and rule-based visualization. Structural visualization proceeds layer by layer from shallow to deep and finds that shallow neurons capture general, global features, whereas deep neurons focus on fine-grained details. Rule-based visualization includes intersection and difference rules, which help reveal the existence of shared neurons and inhibitory neurons; these respectively learn features common to different classes and suppress irrelevant features. Experiments analyze the representative convolutional network VGG and the residual network ResNet on the ImageNet and Microsoft COCO datasets. Quantitative analysis shows that both ResNet and VGG are highly sparse: masking some low-activation "noise" neurons has no effect on classification accuracy and can even improve it to some degree. By visualizing and quantitatively analyzing the hidden-layer features of deep neural networks, the paper reveals their internal feature representations and thereby offers guidance for the design of high-performance deep neural networks.
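A minimal sketch (assuming a simple quantile-based mask, not the paper's exact procedure) of the masking experiment described above: low-activation "noise" units in a layer's activation array are zeroed before the network is re-evaluated.

```python
import numpy as np

def mask_low_activations(acts, quantile=0.2):
    """Zero every activation whose magnitude falls below the given quantile."""
    threshold = np.quantile(np.abs(acts), quantile)
    return np.where(np.abs(acts) < threshold, 0.0, acts)

acts = np.random.randn(4, 16)                  # stand-in for one layer's feature activations
masked = mask_low_activations(acts)
print(np.count_nonzero(masked), "of", acts.size, "activations kept")
```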

15.
Weight shifting techniques for self-recovery neural networks (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, a self-recovery technique for feedforward neural networks, called weight shifting, and its analytical models are proposed. The technique is applied to recover a network when some faulty links and/or neurons occur during operation. If some input links of a specific neuron are detected faulty, their weights will be shifted to healthy links of the same neuron. On the other hand, if a faulty neuron is encountered, then we can treat it as a special case of faulty links by considering all the output links of that neuron to be faulty. The aim of this technique is to recover the network in a short time without any retraining and hardware repair. We also propose the hardware architecture for implementing this technique.
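A minimal sketch (not the paper's implementation) of the weight-shifting idea for a single neuron: the weight of a faulty input link is added to a healthy link of the same neuron. How the target link and the shifted amount are chosen here (largest-magnitude healthy weight, full shift) is an illustrative assumption; the paper derives its own analytical rule.

```python
import numpy as np

def shift_faulty_weights(weights, faulty):
    """weights: one neuron's input weights; faulty: indices of faulty input links."""
    w = weights.copy()
    healthy = [i for i in range(len(w)) if i not in set(faulty)]
    for i in faulty:
        target = healthy[np.argmax(np.abs(w[healthy]))]   # shift onto a healthy link (largest magnitude here)
        w[target] += w[i]
        w[i] = 0.0                                        # the faulty link carries no weight afterwards
    return w

print(shift_faulty_weights(np.array([0.5, -0.3, 0.8, 0.1]), faulty=[1]))
```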

16.
We investigate theoretically the conditions for the emergence of synchronous activity in large networks, consisting of two populations of extensively connected neurons, one excitatory and one inhibitory. The neurons are modeled with quadratic integrate-and-fire dynamics, which provide a very good approximation for the subthreshold behavior of a large class of neurons. In addition to their synaptic recurrent inputs, the neurons receive a tonic external input that varies from neuron to neuron. Because of its relative simplicity, this model can be studied analytically. We investigate the stability of the asynchronous state (AS) of the network with given average firing rates of the two populations. First, we show that the AS can remain stable even if the synaptic couplings are strong. Then we investigate the conditions under which this state can be destabilized. We show that this can happen in four generic ways. The first is a saddle-node bifurcation, which leads to another state with different average firing rates. This bifurcation, which occurs for strong enough recurrent excitation, does not correspond to the emergence of synchrony. In contrast, in the three other instability mechanisms, Hopf bifurcations, which correspond to the emergence of oscillatory synchronous activity, occur. We show that these mechanisms can be differentiated by the firing patterns they generate and their dependence on the mutual interactions of the inhibitory neurons and cross talk between the two populations. We also show that besides these codimension 1 bifurcations, the system can display several codimension 2 bifurcations: Takens-Bogdanov, Gavrielov-Guckenheimer, and double Hopf bifurcations.
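The quadratic integrate-and-fire dynamics referred to above are commonly written as dv/dt = v^2 + I, with a reset once v crosses a peak value. The sketch below (Euler integration with assumed parameters, not the paper's analysis) shows the resulting periodic firing of a single neuron for a suprathreshold input.

```python
def qif_spike_times(current, v_reset=-10.0, v_peak=10.0, dt=1e-3, t_max=5.0):
    v, t, spikes = v_reset, 0.0, []
    while t < t_max:
        v += dt * (v * v + current)   # quadratic subthreshold dynamics dv/dt = v^2 + I
        if v >= v_peak:               # "spike": reset instead of letting v diverge
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

print(len(qif_spike_times(current=2.0)), "spikes in 5 s of simulated time")
```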

17.
We investigate the firing characteristics of conductance-based integrate-and-fire neurons and the correlation of firing for uncoupled pairs of neurons as a result of common input and synchronous firing of multiple synaptic inputs. Analytical approximations are derived for the moments of the steady state potential and the effective time constant. We show that postsynaptic firing barely depends on the correlation between inhibitory inputs; only the inhibitory firing rate matters. In contrast, both the degree of synchrony and the firing rate of excitatory inputs are relevant. A coefficient of variation CV > 1 can be attained with low inhibitory firing rates and (Poisson-modulated) synchronized excitatory synaptic input, where both the number of presynaptic neurons in synchronous firing assemblies and the synchronous firing rate should be sufficiently large. The correlation in firing of a pair of uncoupled neurons due to common excitatory input is initially increased for increasing firing rates of independent inhibitory inputs but decreases for large inhibitory firing rates. Common inhibitory input to a pair of uncoupled neurons barely induces correlated firing, but amplifies the effect of common excitation. Synchronous firing assemblies in the common input further enhance the correlation and are essential to attain experimentally observed correlation values. Since uncorrelated common input (i.e., common input by neurons, which do not fire in synchrony) cannot induce sufficient postsynaptic correlation, we conclude that lateral couplings are essential to establish clusters of synchronously firing neurons.

18.
Multi-aggregation process neuron networks and their learning algorithms (total citations: 2; self-citations: 0; citations by others: 2)
To address information-processing problems in which the system inputs are multivariate process functions and multidimensional process signals, multi-aggregation process neurons and a multi-aggregation process neuron network model are proposed. Both the inputs and the connection weights of a multi-aggregation process neuron can be multivariate process functions; its aggregation operations include spatially weighted aggregation of multiple input functions and accumulation of multidimensional process effects, so the neuron can simultaneously reflect the joint influence of several multivariate process input signals over a multidimensional space as well as the accumulated result of the process effects. A multi-aggregation process neuron network is a network model composed of multi-aggregation process neurons and other types of neurons according to a certain structural relationship. Depending on whether the output is a multivariate process function, a general model of the feedforward multi-aggregation process neuron network and a model whose inputs and outputs are both process functions are established; these can directly map and model the input-output relationships of multivariate process signals. A learning algorithm that combines gradient descent based on multivariate basis-function expansion with numerical computation is given. Simulation results show that the model and algorithm are well suited to classifying multivariate process signals and simulating multidimensional dynamic processes.

19.
Dynamics analysis and analog associative memory of networks with LT neurons (total citations: 1; self-citations: 0; citations by others: 1)
The additive recurrent network structure of linear threshold neurons represents a class of biologically-motivated models, where nonsaturating transfer functions are necessary for representing neuronal activities, such as that of cortical neurons. This paper extends the existing results of dynamics analysis of such linear threshold networks by establishing new and milder conditions for boundedness and asymptotical stability, while allowing for multistability. As a condition for asymptotical stability, it is found that boundedness does not require a deterministic matrix to be symmetric or possess positive off-diagonal entries. The conditions put forward an explicit way to design and analyze such networks. Based on the established theory, an alternate approach to study such networks is through permitted and forbidden sets. An application of the linear threshold (LT) network is analog associative memory, for which a simple design method describing the associative memory is suggested in this paper. The proposed design method is similar to a generalized Hebbian approach, but with distinctions of additional network parameters for normalization, excitation and inhibition, both on a global and local scale. The computational abilities of the network are dependent on its nonlinear dynamics, which in turn is reliant upon the sparsity of the memory vectors.

20.
In this paper, we observe some important aspects of Hebbian and error-correction learning rules for complex-valued neurons. These learning rules, which were previously considered for the multi-valued neuron (MVN) whose inputs and output are located on the unit circle, are generalized for a complex-valued neuron whose inputs and output are arbitrary complex numbers. The Hebbian learning rule is also considered for the MVN with a periodic activation function. It is experimentally shown that Hebbian weights, even if they still cannot implement an input/output mapping to be learned, are better starting weights for the error-correction learning, which converges faster starting from the Hebbian weights rather than from the random ones.
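A minimal sketch of using Hebbian weights as the starting point for error-correction learning on a complex-valued neuron. It assumes the common update forms (an averaged d·conj(x) Hebbian rule and w_i <- w_i + (alpha/(n+1))·delta·conj(x_i) for error correction, with an identity activation for simplicity); the paper's exact MVN conventions may differ.

```python
import numpy as np

def hebbian_weights(X, d):
    """X: (samples, n) complex inputs; d: (samples,) complex desired outputs."""
    n = X.shape[1]
    return (d[:, None] * np.conj(X)).mean(axis=0) / (n + 1)

def error_correction_step(w, x, d, alpha=1.0):
    delta = d - np.dot(w, x)                           # error of the weighted sum (identity activation assumed)
    return w + (alpha / (len(x) + 1)) * delta * np.conj(x)

X = np.array([[1.0 + 0j, 0 + 1j], [0 - 1j, 1.0 + 0j]])
d = np.array([1.0 + 0j, -1.0 + 0j])
w = hebbian_weights(X, d)                              # Hebbian weights as a starting point...
w = error_correction_step(w, X[0], d[0])               # ...refined by an error-correction update
print(w)
```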
