Similar Documents (20 results)
1.
Random neural networks with multiple classes of signals   (Total citations: 3; self-citations: 0; by others: 3)
By extending the pulsed recurrent random neural network (RNN) discussed in Gelenbe (1989, 1990, 1991), we propose a recurrent random neural network model in which each neuron processes several distinctly characterized streams of "signals" or data. The idea that neurons may be able to distinguish between the pulses they receive and use them in a distinct manner is biologically plausible. In engineering applications, the need to process different streams of information simultaneously is commonplace (e.g., in image processing, sensor fusion, or parallel processing systems). In the model we propose, each distinct stream is a class of signals in the form of spikes. Signals may arrive at a neuron from either the outside world (exogenous signals) or other neurons (endogenous signals). As a function of the signals it has received, a neuron can fire and then send signals of some class to another neuron or to the outside world. We show that the multiple signal class random model with exponential interfiring times, Poisson external signal arrivals, and Markovian signal movements between neurons has product form; this implies that the distribution of its state (i.e., the probability that each neuron of the network is excited) can be computed simply from the solution of a system of 2Cn simultaneous nonlinear equations, where C is the number of signal classes and n is the number of neurons. Here we derive the stationary solution for the multiple class model and establish necessary and sufficient conditions for its existence. The recurrent random neural network model with multiple classes has already been successfully applied to image texture generation (Atalay & Gelenbe, 1992), where multiple signal classes are used to model different colors in the image.
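For intuition, the product-form result reduces the network state to a fixed point of signal-flow equations. Below is a minimal sketch (with illustrative rates invented for the example) of the fixed-point iteration for the single-class RNN of Gelenbe (1989); the multiple-class model solves analogous equations with every quantity indexed by class as well as by neuron.

    import numpy as np

    # Minimal sketch: fixed point of the single-class random neural network.
    # q[i] is the stationary probability that neuron i is excited.
    n = 3
    Lambda = np.array([0.5, 0.2, 0.1])   # exogenous excitatory arrival rates (assumed)
    lam    = np.array([0.1, 0.1, 0.1])   # exogenous inhibitory arrival rates (assumed)
    r      = np.array([1.0, 1.0, 1.0])   # firing rates (assumed)
    P_plus  = np.array([[0, .3, .2], [.1, 0, .3], [.2, .1, 0]])  # excitatory routing
    P_minus = np.array([[0, .1, .1], [.1, 0, .1], [.1, .1, 0]])  # inhibitory routing

    q = np.zeros(n)
    for _ in range(200):                       # iterate to the fixed point
        lam_plus  = Lambda + (q * r) @ P_plus   # total excitatory arrivals per neuron
        lam_minus = lam    + (q * r) @ P_minus  # total inhibitory arrivals per neuron
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < 1e-12:
            break
        q = q_new
    print(q)   # stationary excitation probabilities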

2.
Information encoding and computation with spikes and bursts   (Total citations: 3; self-citations: 0; by others: 3)
Neurons compute and communicate by transforming synaptic input patterns into output spike trains. The nature of this transformation depends crucially on the properties of voltage-gated conductances in neuronal membranes. These intrinsic membrane conductances can enable neurons to generate different spike patterns including brief, high-frequency bursts that are commonly observed in a variety of brain regions. Here we examine how the membrane conductances that generate bursts affect neural computation and encoding. We simulated a bursting neuron model driven by random current input signal and superposed noise. We consider two issues: the timing reliability of different spike patterns and the computation performed by the neuron. Statistical analysis of the simulated spike trains shows that the timing of bursts is much more precise than the timing of single spikes. Furthermore, the number of spikes per burst is highly robust to noise. Next we considered the computation performed by the neuron: how different features of the input current are mapped into specific output spike patterns. Dimensional reduction and statistical classification techniques were used to determine the stimulus features triggering different firing patterns. Our main result is that spikes, and bursts of different durations, code for different stimulus features, which can be quantified without a priori assumptions about those features. These findings lead us to propose that the biophysical mechanisms of spike generation enables individual neurons to encode different stimulus features into distinct spike patterns.  相似文献   
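The timing-reliability comparison amounts to measuring across-trial jitter of event times. The toy sketch below assumes event times have already been extracted from repeated noisy trials; the numbers are synthetic and merely illustrate the kind of statistic involved.

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 100
    # Synthetic event times (ms) across repeated trials of the same stimulus;
    # the jitter values are invented for illustration only.
    burst_onsets    = 50.0 + rng.normal(0.0, 0.4, trials)    # tight: ~0.4 ms sd
    isolated_spikes = 120.0 + rng.normal(0.0, 2.5, trials)   # loose: ~2.5 ms sd

    def jitter(times):
        """Across-trial timing jitter: standard deviation of event times."""
        return times.std(ddof=1)

    print(f"burst-onset jitter:    {jitter(burst_onsets):.2f} ms")
    print(f"isolated-spike jitter: {jitter(isolated_spikes):.2f} ms")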

3.
A temporal pattern is a pattern whose feature space is distributed along the time axis, such as speech signals and radar signals. This paper proposes an improved recurrent neural network method, the time-tagged recurrent neural network, for classifying temporal patterns. It overcomes the drawbacks of traditional methods and achieves good classification results. Preliminary experimental results demonstrate not only the strong ability of the time-tagged recurrent neural network to classify temporal patterns, but also the importance of time tags for temporal pattern classification.
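The abstract does not spell out the construction, but one plausible reading of a "time tag" is an explicit, normalized time index appended to each input frame, so the classifier can exploit when a feature occurs as well as what it is. The following sketch is a hypothetical illustration of that idea, not the paper's architecture:

    import numpy as np

    def add_time_tags(sequence):
        """Append a normalized time index to every frame of a temporal pattern.

        sequence: array of shape (T, d), i.e. T frames of d features
        returns:  array of shape (T, d + 1) with the tag in the last column
        """
        T = len(sequence)
        tags = np.linspace(0.0, 1.0, T).reshape(-1, 1)
        return np.hstack([sequence, tags])

    x = np.random.randn(8, 4)        # a toy 8-frame pattern with 4 features
    print(add_time_tags(x).shape)    # (8, 5): each frame now carries its time tag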

4.
Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. Typical ANN models are characterized by fixed connections with evolved weights, globalized representations, and globalized optimization, all based on a mathematical approach. This leaves such models deficient in robustness, learning efficiency, capacity, interference resistance between training sets, and sample correlation. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations on neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. Finally, important aspects of error correction, capacity, learning efficiency, and the soundness of the structural representation are analyzed theoretically. The paper demonstrates the feasibility and advantages of structure learning and representation; the model can serve as a fundamental element of cognitive systems such as perception and associative memory.

5.
The high-conductance state of cortical networks   (Total citations: 3; self-citations: 0; by others: 3)
We studied the dynamics of large networks of spiking neurons with conductance-based (nonlinear) synapses and compared them to networks with current-based (linear) synapses. For systems with sparse and inhibition-dominated recurrent connectivity, weak external inputs induced asynchronous irregular firing at low rates. Membrane potentials fluctuated a few millivolts below threshold, and membrane conductances were increased by a factor of 2 to 5 with respect to the resting state. This combination of parameters characterizes the ongoing spiking activity typically recorded in the cortex in vivo. Many aspects of the asynchronous irregular state in conductance-based networks could be characterized sufficiently well with a simple numerical mean-field approach. In particular, it correctly predicted an intriguing property of conductance-based networks that does not appear to be shared by current-based models: they exhibit states of low-rate asynchronous irregular activity that persist for some period of time even in the absence of external inputs and without cortical pacemakers. Simulations of larger networks (up to 350,000 neurons) demonstrated that the survival time of self-sustained activity increases exponentially with network size.
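The linear/nonlinear distinction is visible directly in the membrane equation: a conductance-based synapse contributes a term g_syn(V - E_syn) that scales with the driving force and raises the total membrane conductance, whereas a current-based synapse injects a current independent of V. A minimal Euler-step sketch with assumed (not the paper's) parameter values:

    # One derivative evaluation of a leaky membrane, contrasting synapse models.
    # Units: pF, nS, mV, pA; pA/pF = mV/ms. All values are illustrative.
    C, g_L, E_L = 250.0, 16.7, -70.0     # capacitance, leak conductance, leak reversal
    E_e, E_i    = 0.0, -80.0             # excitatory / inhibitory reversal potentials

    def dV_conductance(V, g_e, g_i):
        # Conductance-based: drive depends on V and adds to the total conductance.
        return (-g_L * (V - E_L) - g_e * (V - E_e) - g_i * (V - E_i)) / C

    def dV_current(V, I_e, I_i):
        # Current-based: synaptic drive is injected independently of V.
        return (-g_L * (V - E_L) + I_e + I_i) / C

    V = -60.0
    print(dV_conductance(V, g_e=50.0, g_i=100.0))   # mV/ms, assumed inputs
    print(dV_current(V, I_e=300.0, I_i=-400.0))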

6.
Controlling activity in recurrent neural network models of brain regions is essential both to enable effective learning and to reproduce the low activities that exist in some cortical regions, such as hippocampal region CA3. Previous studies of sparse, random, recurrent networks constructed with McCulloch-Pitts neurons used probabilistic arguments to set the parameters that control activity. Here, we extend this work by adding an additional, biologically appropriate parameter to control the magnitude and stability of activity oscillations. The new constant can be considered to be the rest conductance in a shunting model, or the threshold when subtractive inhibition is used. This new parameter is critical for large networks run at low activity levels. Importantly, extreme activity fluctuations that act to turn large networks totally on or totally off can now be avoided. We also show how the size of external input activity interacts with this parameter to affect network activity. The model based on fixed weights is then extended to estimate activities in networks with distributed weights. Because the theory provides accurate control of activity fluctuations, the approach can be used to design a predictable amount of pseudorandomness into deterministic networks. Such nonminimal fluctuations improve learning in simulations trained on the transitive inference problem.
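To see why such a constant matters, consider a toy shunting update in which each neuron's response to excitatory drive x is x / (x + g_rest). The sketch below (hypothetical equations and parameters, not the paper's) demonstrates the failure modes the parameter must control: untuned values drive a large sparse network toward fully on or fully off.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p_conn = 2000, 0.05                              # toy size and connectivity
    W = (rng.random((n, n)) < p_conn).astype(float)     # sparse 0/1 recurrent weights

    def step(state, g_rest, theta=0.5):
        drive = W @ state                               # excitatory input per neuron
        response = drive / (drive + g_rest)             # shunting: bounded in [0, 1)
        return (response > theta).astype(float)

    start = (rng.random(n) < 0.02).astype(float)        # begin at ~2% activity
    for g_rest in (1.0, 60.0):
        s = start.copy()
        for _ in range(10):
            s = step(s, g_rest)
        # Untuned constants saturate the network (small g_rest) or shut it
        # off (large g_rest): the fluctuation problem the paper's theory controls.
        print(g_rest, s.mean())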

7.
We introduce a recurrent network architecture for modelling a general class of dynamical systems. The network is intended for modelling real-world processes in which empirical measurements of the external and state variables are obtained at discrete time points. The model can learn from multiple temporal patterns, which may evolve on different timescales and be sampled at non-uniform time intervals. We demonstrate the application of the model to a synthetic problem in which target data are only provided at the final time step. Despite the sparseness of the training data, the network is able not only to make good predictions at the final time step for temporal processes unseen in training, but also to reproduce the sequence of the state variables at earlier times. Moreover, we show how the network can infer the existence and role of state variables for which no target information is provided. The ability of the model to cope with sparse data is likely to be useful in a number of applications, including, in particular, the modelling of metal forging.

8.
Salinas E. Neural Computation, 2003, 15(7): 1439-1475.
A bright red light may trigger a sudden motor action in a driver crossing an intersection: stepping at once on the brakes. The same red light, however, may be entirely inconsequential if it appears, say, inside a movie theater. Clearly, context determines whether a particular stimulus will trigger a motor response, but what is the neural correlate of this? How does the nervous system enable or disable whole networks so that they are responsive or not to a given sensory signal? Using theoretical models and computer simulations, I show that networks of neurons have a built-in capacity to switch between two types of dynamic state: one in which activity is low and approximately equal for all units, and another in which different activity distributions are possible and may even change dynamically. This property allows whole circuits to be turned on or off by weak, unstructured inputs. These results are illustrated using networks of integrate-and-fire neurons with diverse architectures. In agreement with the analytic calculations, a uniform background input may determine whether a random network has one or two stable firing levels; it may give rise to randomly alternating firing episodes in a circuit with reciprocal inhibition; and it may regulate the capacity of a center-surround circuit to produce either self-sustained activity or traveling waves. Thus, the functional properties of a network may be drastically modified by a simple, weak signal. This mechanism works as long as the network is able to exhibit stable firing states, or attractors.
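The switching phenomenon can be reproduced in a one-dimensional firing-rate caricature: with recurrent excitation w and sigmoidal gain f, the dynamics r' = -r + f(wr + b) have either one stable firing level or two, depending only on the uniform background input b. A sketch under those assumptions (w and b are illustrative):

    import numpy as np

    def f(x):
        return 1.0 / (1.0 + np.exp(-x))          # sigmoidal population gain

    def fixed_points(w, b):
        """Locate r = f(w*r + b) via sign changes of g(r) = f(w*r + b) - r."""
        grid = np.linspace(0.0, 1.0, 200001)
        g = f(w * grid + b) - grid
        return grid[:-1][np.sign(g[:-1]) != np.sign(g[1:])]

    w = 10.0                                      # recurrent excitation (assumed)
    for b in (-8.0, -5.2):                        # weak uniform background input
        # One fixed point (quiescent only) vs. three (two of them stable):
        print(b, fixed_points(w, b))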

9.
The goal of this work is to learn and retrieve a sequence of highly correlated patterns using a Hopfield-type attractor neural network (ANN) with a small-world connectivity distribution. For this model, we propose a weight-learning heuristic that combines the pseudo-inverse approach with a row-shifting schema. The influence of the ratio of random connectivity on retrieval quality and learning time has been studied. Our approach has been successfully tested on a complex pattern, as is the case with traffic video sequences, for different combinations of the involved parameters. Moreover, it has proved robust to highly variable frame activity.
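The pseudo-inverse rule stores a pattern matrix Xi (one column per pattern) as W = Xi pinv(Xi), which makes each pattern a fixed point even when the patterns are highly correlated; shifting the columns before the product, W = shift(Xi) pinv(Xi), instead maps each pattern to its successor. The sketch below illustrates that sequence-storing variant, which is one natural reading of the row-shifting schema (the paper's exact scheme and the small-world dilution are omitted):

    import numpy as np

    rng = np.random.default_rng(2)
    N, p = 64, 5                                   # neurons, sequence length (toy sizes)
    Xi = np.sign(rng.standard_normal((N, p)))      # +/-1 patterns (random here)

    # Pseudo-inverse learning; rolling the columns stores the transition
    # pattern_mu -> pattern_{mu+1} instead of making each pattern a fixed point.
    W = np.roll(Xi, -1, axis=1) @ np.linalg.pinv(Xi)

    s = Xi[:, 0]
    for mu in range(p):
        s = np.sign(W @ s)                         # one retrieval step
        print(mu, np.mean(s == Xi[:, (mu + 1) % p]))   # overlap with next pattern: 1.0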

10.
We investigate theoretically the conditions for the emergence of synchronous activity in large networks, consisting of two populations of extensively connected neurons, one excitatory and one inhibitory. The neurons are modeled with quadratic integrate-and-fire dynamics, which provide a very good approximation for the subthreshold behavior of a large class of neurons. In addition to their synaptic recurrent inputs, the neurons receive a tonic external input that varies from neuron to neuron. Because of its relative simplicity, this model can be studied analytically. We investigate the stability of the asynchronous state (AS) of the network with given average firing rates of the two populations. First, we show that the AS can remain stable even if the synaptic couplings are strong. Then we investigate the conditions under which this state can be destabilized. We show that this can happen in four generic ways. The first is a saddle-node bifurcation, which leads to another state with different average firing rates. This bifurcation, which occurs for strong enough recurrent excitation, does not correspond to the emergence of synchrony. In contrast, in the three other instability mechanisms, Hopf bifurcations occur, which correspond to the emergence of oscillatory synchronous activity. We show that these mechanisms can be differentiated by the firing patterns they generate and their dependence on the mutual interactions of the inhibitory neurons and cross talk between the two populations. We also show that besides these codimension 1 bifurcations, the system can display several codimension 2 bifurcations: Takens-Bogdanov, Gavrilov-Guckenheimer, and double Hopf bifurcations.
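For reference, the quadratic integrate-and-fire model used here takes, in normalized units, the canonical form

    dV/dt = V^2 + I,   with V reset to V_r once it reaches a cutoff V_peak,

so that for I < 0 there is a stable rest state at V = -sqrt(-I) and an unstable threshold at V = +sqrt(-I), while for I > 0 the neuron fires periodically. (The paper's exact parameterization, including the neuron-to-neuron variation of the tonic input I, may differ in detail.)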

11.
黄海南, 李晓峰, 连培昆, 荣建. 计算机应用 (Journal of Computer Applications), 2018, 38(10): 3025-3029.
To address problems with existing signal-controller logic, such as its inability to respond to the accumulated number of buses and the low sensitivity of its control parameters, a trigger-probability model for transit signal priority strategies was built to examine and analyze ways of improving trigger accuracy. First, based on the Siemens 2070 signal controller, the triggering principle of its transit priority strategies was analyzed, and trigger-probability models were constructed for the green-extension strategy and the early-red-termination strategy. Then, taking a real intersection as an example, the trigger probabilities of different signal-timing plans were computed and compared through hardware-in-the-loop simulation, and methods for optimizing the trigger probability of transit priority strategies were explored. The results show that the trigger probability of the green-extension strategy is far lower than that of the early-red-termination strategy; that the green-extension trigger probability is inversely related to the green-time threshold, while the early-red-termination trigger probability depends mainly on the number of buses requesting priority in non-priority phases; that the green-extension trigger probability can be raised by optimizing the minimum and maximum green times and by increasing the number of buses requesting priority; and that the early-red-termination trigger probability can be raised by measures such as optimizing the fixed signal timing before configuring transit signal priority.
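For readers unfamiliar with the two strategies, generic transit-signal-priority logic can be caricatured as follows (a textbook-style sketch, not the Siemens 2070 implementation analyzed in the paper):

    def green_extension(time_in_green, bus_detected, max_green, extension=5.0):
        """Generic green-extension rule: prolong the bus phase up to max green."""
        if bus_detected and time_in_green + extension <= max_green:
            return time_in_green + extension   # extend the current green for the bus
        return time_in_green                   # threshold reached: no extension granted

    def early_red_termination(remaining_green, min_green_served, priority_requests):
        """Generic red-truncation rule: cut a competing phase to its minimum green."""
        if priority_requests > 0 and min_green_served:
            return 0.0                         # end the phase now; bus phase returns sooner
        return remaining_green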

12.
A central pattern generator (CPG) model is proposed for the gait-pattern generation mechanism of an autonomous decentralized multi-legged robot system. The topological structure of the CPG is represented as a graph on which two time-evolution systems, a Hamiltonian system and a gradient system, are introduced. The CPG model can generate oscillation patterns depending only on the network topology and can bifurcate into different oscillation patterns according to the network energy, which means that the robot can generate gait patterns by connecting legs and transition between gait patterns according to parameters such as the desired speed.

13.
This paper proposes an improved attention-selection model in which peripheral neurons represent neurons in the primary visual cortex and central neurons represent neurons in higher visual cortical areas. Physiological experiments have found that orientation selectivity is one of the key properties of primary visual cortex neurons, so in addition to the intensity of external stimuli, the model also takes the orientation selectivity of primary visual cortex neurons into account. Simulation results show that the improved model can select targets with different orientation selectivity and can shift attention from one target to another. Compared with the original model, the improved model is more consistent with the physiological evidence. The dynamical analysis of the model is helpful for understanding coding in the visual nervous system.

14.
Despite the fact that animals are not optimal, natural selection is an optimizing process that can readily control small bits and pieces of organisms. It is for this reason that we need to explain certain parameters as found in Nature (e.g., the number of neurons and their average activity) to fully understand the biological basis of cognition. In this optimizing sense, the failure of quantal synaptic transmission is problematic because it incurs information loss at each synapse, which seems like a bad thing for information processing. However, recent work based on an information-theoretic analysis of a single neuron suggests that such losses can be tolerated and lead to energy savings. Here we study computational simulations of a hippocampal model as a function of failure rate. We find that the failure process actually enhances some indices of performance when the model is required to solve the hippocampally dependent task of transverse patterning or to learn a simple sequence. Adding the random process of synaptic failures to the recurrent CA3-to-CA3 excitatory connections results in simulations that are more robust to parametric settings. Not only is the model more robust when synaptic failures are included, but sequence-length memory capacity increases notably. The failure process combined with additional neurons also allows lower activity settings while remaining compatible with learning the transverse patterning task. Indeed, as the neuron number tended towards biological numbers (nearly 5 × 10^4 in the simulations), it was not only possible to achieve biological failure rates (55-85%) at the minimally tolerated activity setting, but these appropriately high failure rates were required for successful learning. The results are interpreted in terms of previous research demonstrating that randomization during training can enhance performance by facilitating implicit state-space search for interconnected neurons.
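Mechanically, adding quantal failures to the recurrent connections means each active synapse transmits only with probability 1 - f on every time step, i.e. the delivered weights are masked by fresh Bernoulli draws. A sketch under assumed sizes, with a failure rate inside the paper's 55-85% range:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1024
    W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)   # sparse CA3-like weights (toy)
    pre_spikes = (rng.random(n) < 0.05).astype(float)     # neurons that fired this step

    def recurrent_input(W, pre_spikes, failure_rate=0.7):
        """Each active synapse transmits only with probability 1 - failure_rate."""
        transmit = rng.random(W.shape) < (1.0 - failure_rate)   # fresh mask per step
        return (W * transmit) @ pre_spikes

    x = recurrent_input(W, pre_spikes)   # noisy net input: failures act as
    print(x.mean())                      # per-synapse multiplicative noise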

15.
Learning in the multiple class random neural network   (Total citations: 3; self-citations: 0; by others: 3)
Spiked recurrent neural networks with "multiple classes" of signals have been recently introduced by Gelenbe and Fourneau (1999), as an extension of the recurrent spiked random neural network introduced by Gelenbe (1989). These new networks can represent interconnected neurons, which simultaneously process multiple streams of data such as the color information of images, or networks which simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O((nC)^3) complexity for the recurrent case, and O((nC)^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. This approach is illustrated with various synthetic and natural textures.
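The complexity figures follow from the algorithm's structure: for each input-output pair, one solves the nC nonlinear signal-flow equations for the stationary state, then an nC-by-nC linear system for the derivatives of that state with respect to the weights, and finally takes a gradient step; the linear solve dominates at O((nC)^3) and drops out in the feedforward case. The outline below is schematic only: the helper functions are hypothetical stand-ins for the paper's equations, and the array shapes are left abstract.

    def solve_signal_flow(W, x):
        """Hypothetical helper: fixed point of the nC nonlinear flow equations."""
        raise NotImplementedError

    def solve_derivative_system(W, q):
        """Hypothetical helper: nC-by-nC linear solve for the derivatives of q
        with respect to the weights; this is the O((nC)^3) step."""
        raise NotImplementedError

    def train_pair(W, x, y, lr=0.01):
        """One MCRNN learning step, in outline (shapes schematic)."""
        q = solve_signal_flow(W, x)             # stationary excitation probabilities
        dq_dW = solve_derivative_system(W, q)   # the per-pattern dominant cost
        return W - lr * (q - y) @ dq_dW         # gradient step on the quadratic cost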

16.
Interacting intracellular signalling pathways can perform computations on a scale that is slower, but more fine-grained, than the interactions between neurons upon which we normally build our computational models of the brain (Bray D, 1995, Nature 376: 307-12). What computations might these potentially powerful intraneuronal mechanisms be performing? The answer suggested here is: storage of spatio-temporal sequences of synaptic excitation so that each individual neuron can recognize recurrent patterns that have excited it in the past. The experimental facts about directionally selective neurons in the visual system show that neurons do not integrate separately in space and time, but along straight spatio-temporal trajectories; thus, neurons have some of the capacities required to perform such a task. In the retina, it is suggested that calcium-induced calcium release (CICR) may provide the basis for directional selectivity. In the cortex, if activation mechanisms with different delays could be separately reinforced at individual synapses, then each such Hebbian super-synapse would store a memory trace of the delay between pre- and post-synaptic activity, forming an ideal basis for the memory and response to phase sequences.

17.
This paper proposes a novel framework to detect cyber-attacks using Machine Learning coupled with User Behavior Analytics. The framework models user behavior as sequences of events representing the user's activities in the network. The resulting sequences are then fed into a recurrent neural network model to extract features that characterize distinctive behavior for individual users. Thus, the model learns the regularities of normal behavior to profile how each user acts in the network. The recurrent neural network then detects abnormal behavior by classifying unknown behavior as either regular or irregular. The framework matters because cyber-attacks are increasing, especially attacks triggered from sources inside the network. Detecting insider attacks is typically much more challenging, since security protocols can barely recognize attacks from trusted sources in the network, including users. User behavior can therefore be extracted and learned to recognize meaningful patterns, where regular patterns reflect normal network workflow and irregular patterns can trigger an alert for a potential cyber-attack. The framework is fully described, and the evaluation metrics are introduced. Experimental results show that the approach performed better than other approaches, achieving an AUC of 0.97 with the RNN-LSTM. The paper concludes with potential directions for future improvement.
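The core of such a pipeline is a sequence model mapping a user's event sequence to a regular/irregular score. A minimal sketch in PyTorch (the event vocabulary, layer sizes, and binary head are assumptions; the paper's exact architecture may differ):

    import torch
    import torch.nn as nn

    class BehaviorLSTM(nn.Module):
        """Toy RNN-LSTM user-behavior classifier: event sequence -> anomaly score."""
        def __init__(self, n_events=500, emb=32, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(n_events, emb)         # event IDs -> vectors
            self.lstm = nn.LSTM(emb, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)                 # regular vs. irregular

        def forward(self, event_ids):                        # (batch, seq_len), int64
            h, _ = self.lstm(self.embed(event_ids))
            return torch.sigmoid(self.head(h[:, -1]))        # score from last state

    model = BehaviorLSTM()
    batch = torch.randint(0, 500, (8, 20))                   # 8 users, 20 events each
    print(model(batch).shape)                                # torch.Size([8, 1])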

18.
Romani S, Amit DJ, Amit Y. Neural Computation, 2008, 20(8): 1928-1950.
A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for distinguishing thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and we estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of the signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.
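The fast/slow contrast rests on stochastic binary synapses: on each presentation, a synapse between two active neurons switches on with some probability q_pot, and a synapse from an active to an inactive neuron switches off with probability q_dep; the familiarity signal is then the extra field a previously seen pattern feels. A toy sketch of one presentation (sizes and probabilities are illustrative, not the paper's):

    import numpy as np

    rng = np.random.default_rng(4)
    N, q_pot, q_dep = 400, 0.3, 0.3                   # moderate probs for visibility
    J = (rng.random((N, N)) < 0.5).astype(float)      # binary synapses near steady state

    def present(J, xi):
        """One Hebbian presentation of a binary pattern xi, stochastic transitions."""
        pot = np.outer(xi, xi) > 0                    # post and pre both active
        dep = np.outer(1 - xi, xi) > 0                # pre active, post inactive
        J = np.where(pot & (rng.random(J.shape) < q_pot), 1.0, J)   # potentiate
        J = np.where(dep & (rng.random(J.shape) < q_dep), 0.0, J)   # depress
        return J

    def familiarity(J, xi):
        """Mean field the pattern feels; learned patterns feel a larger field."""
        return xi @ J @ xi / xi.sum()

    xi_seen = (rng.random(N) < 0.2).astype(float)
    J = present(J, xi_seen)
    xi_new = (rng.random(N) < 0.2).astype(float)
    print(familiarity(J, xi_seen), familiarity(J, xi_new))   # seen > unseen, on average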

19.
Brader JM, Senn W, Fusi S. Neural Computation, 2007, 19(11): 2881-2912.
We present a model of spike-driven synaptic plasticity inspired by experimental observations and motivated by the desire to build an electronic hardware device that can learn to classify complex stimuli in a semisupervised fashion. During training, patterns of activity are sequentially imposed on the input neurons, and an additional instructor signal drives the output neurons toward the desired activity. The network is made of integrate-and-fire neurons with constant leak and a floor. The synapses are bistable, and they are modified by the arrival of presynaptic spikes. The sign of the change is determined by both the depolarization and the state of a variable that integrates the postsynaptic action potentials. Following the training phase, the instructor signal is removed, and the output neurons are driven purely by the activity of the input neurons weighted by the plastic synapses. In the absence of stimulation, the synapses preserve their internal state indefinitely. Memories are also very robust to the disruptive action of spontaneous activity. A network of 2000 input neurons is shown to be able to classify correctly a large number (thousands) of highly overlapping patterns (300 classes of preprocessed LaTeX characters, 30 patterns per class, and a subset of the NIST characters data set) and to generalize with performances that are better than or comparable to those of artificial neural networks. Finally, we show that the synaptic dynamics is compatible with many of the experimental observations on the induction of long-term modifications (spike-timing-dependent plasticity and its dependence on both the postsynaptic depolarization and the frequency of pre- and postsynaptic neurons).
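The plasticity rule can be paraphrased in a few lines: on each presynaptic spike, the internal synaptic variable is pushed up if the postsynaptic depolarization is high and a calcium-like variable (which integrates postsynaptic action potentials) lies in an allowed range, and pushed down otherwise; between spikes the variable drifts toward the nearer of its two stable states. The sketch below is a simplified caricature with placeholder thresholds, not the paper's exact gating:

    def on_pre_spike(X, V_post, Ca, theta_V=-55.0, ca_lo=1.0, ca_hi=4.0, a=0.1, b=0.1):
        """Simplified candidate update on a presynaptic spike (placeholder values)."""
        if V_post > theta_V and ca_lo < Ca < ca_hi:
            return min(1.0, X + a)   # high depolarization + calcium in range: push up
        return max(0.0, X - b)       # otherwise push down (the paper gates this too)

    def drift(X, dt=1.0, alpha=0.01, theta_X=0.5):
        """Between spikes, X relaxes toward the nearer of its two stable states."""
        return min(1.0, X + dt * alpha) if X > theta_X else max(0.0, X - dt * alpha)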

20.
A mechanism is proposed by which feedback pathways model spatial patterns of feedforward activity in cortical maps. The mechanism can be viewed equivalently as readout of a content-addressable memory or as decoding of a population code. The model is based on the evidence that cortical receptive fields can often be described as a separable product of functions along several dimensions, each represented in a spatially ordered map. Given this, it is shown that for an N-dimensional map, accurate modeling and decoding of x^N feedforward activity patterns can be done with Nx fibers, N of which must be active at any one time. The proposed mechanism explains several known properties of the cortex and pyramidal neurons: (1) the integration of signals by dendrites with a narrow tangential distribution, that is, apical dendrites; (2) the presence of fast-conducting feedback projections with broad tangential distributions; (3) the multiplicative effects of attention on receptive field profiles; and (4) the existence of multiplicative interactions between subthreshold feedforward inputs to basal dendrites and inputs to apical dendrites.
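To make the scaling concrete with a worked example: for an N = 2 dimensional map with x = 100 resolvable values per dimension, x^N = 10,000 feedforward activity patterns can be modeled and decoded with only Nx = 200 feedback fibers, N = 2 of which are active at any one time.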
