Similar Documents
20 similar documents found (search time: 109 ms)
1.
《Information Fusion》2007,8(3):227-251
This paper presents a new approach to higher-level information fusion in which knowledge and data are represented using semantic networks composed of coupled spiking neuron nodes. Networks of simulated spiking neurons have been shown to exhibit synchronization, in which sub-assemblies of nodes become phase locked to one another. This phase locking reflects the tendency of biological neural systems to produce synchronized neural assemblies, which have been hypothesized to be involved in binding of low-level features in the perception of objects. The approach presented in this paper embeds spiking neurons in a semantic network, in which a synchronized sub-assembly of nodes represents a hypothesis about a situation. Likewise, multiple synchronized assemblies that are out-of-phase with one another represent multiple hypotheses. The initial network is hand-coded, but additional semantic relationships can be established by associative learning mechanisms. This approach is demonstrated by simulation of proof-of-concept scenarios involving the tracking of suspected criminal vehicles between meeting places in an urban environment. Our results indicate that synchronized sub-assemblies of spiking nodes can be used to represent multiple simultaneous events occurring in the environment and to effectively learn new relationships between semantic items in response to these events. In contrast to models of synchronized spiking networks that use physiologically realistic parameters in order to explain limits in human short-term memory (STM) capacity, our networks are not subject to the same limitations in representational capacity for multiple simultaneous events. Simulations demonstrate that the representational capacity of our networks can be very large, but as more simultaneous events are represented by synchronized sub-assemblies, the effective learning rate for establishing new relationships decreases. We propose that this effect could be countered by speeding up the spiking dynamics of the networks (a tactic of limited availability to biological systems). Such a speedup would allow the number of simultaneous events to increase without compromising the learning rate.
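The paper's networks use coupled spiking neuron nodes; purely as a hedged illustration (abstract phase oscillators rather than the authors' spiking model, with all parameter names and values assumed), the sketch below shows how two sub-assemblies can each phase-lock internally while staying out of phase with one another, which is the representational device described above.

```python
import numpy as np

# Minimal sketch (not the authors' model): two assemblies of phase oscillators.
# Strong positive coupling inside each assembly makes its members phase-lock;
# weak negative coupling between assemblies pushes them out of phase,
# mimicking two simultaneous, separately "bound" hypotheses.
rng = np.random.default_rng(0)
n_per, k_in, k_out, dt, steps = 10, 2.0, -0.5, 0.01, 5000   # assumed values
n = 2 * n_per
group = np.repeat([0, 1], n_per)                  # assembly membership
K = np.where(group[:, None] == group[None, :], k_in, k_out) / n
omega = 2 * np.pi * 1.0 + 0.1 * rng.standard_normal(n)      # ~1 Hz with jitter
theta = 2 * np.pi * rng.random(n)

for _ in range(steps):                            # Euler integration of Kuramoto-type dynamics
    theta += dt * (omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1))

def coherence(phases):
    """Phase-coherence order parameter: 1 = perfect locking, ~0 = incoherent."""
    return abs(np.exp(1j * phases).mean())

print("within assembly A:", coherence(theta[group == 0]))   # near 1
print("within assembly B:", coherence(theta[group == 1]))   # near 1
print("across assemblies:", coherence(theta))               # noticeably lower
```

Within-assembly coherence near 1 combined with low across-assembly coherence is the signature of two simultaneously represented, mutually out-of-phase hypotheses.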

2.
How delays affect neural dynamics and learning
We investigate the effects of delays on the dynamics and, in particular, on the oscillatory properties of simple neural network models. We extend previously known results regarding the effects of delays on stability and convergence properties. We treat in detail the case of ring networks, for which we derive simple conditions for oscillating behavior and several formulas to predict the regions of bifurcation, the periods of the limit cycles and the phases of the different neurons. These results in turn can readily be applied to more complex and more biologically motivated architectures, such as layered networks. In general, the main result is that delays tend to increase the period of oscillations and broaden the spectrum of possible frequencies, in a quantifiable way. Simulations show that the theoretically predicted values are in excellent agreement with the numerically observed behavior. Adaptable delays are then proposed as one additional mechanism through which neural systems could tailor their own dynamics. Accordingly, we derive recurrent backpropagation learning formulas for the adjustment of delays and other parameters in networks with delayed interactions and discuss some possible applications.
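As a hedged sketch of the type of model analyzed (the notation $w$, $f$, $\tau$ is mine, not a quotation of the paper's formulas), a ring of $n$ delayed neurons can be written as

$$\dot{x}_i(t) = -x_i(t) + w\, f\bigl(x_{i-1}(t-\tau)\bigr), \qquad i = 1, \dots, n \ (\text{indices mod } n).$$

Linearizing about an equilibrium $\bar{x}$ gives the characteristic equation

$$(\lambda + 1)^n = \bigl(w f'(\bar{x})\bigr)^n e^{-n\lambda\tau},$$

whose roots, taken branch by branch via the $n$-th roots of unity, determine the bifurcation boundaries, the oscillation frequencies at onset, and the relative phases $2\pi k/n$ of the neurons around the ring; this is the generic form such an analysis takes for ring architectures.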

3.
In this paper, a synchronization problem is investigated for an array of coupled complex discrete-time networks with the simultaneous presence of both discrete and distributed time delays. The complex networks addressed, which include neural and social networks as special cases, are quite general. Rather than the commonly used Lipschitz-type function, a more general sector-like nonlinear function is employed to describe the nonlinearities existing in the network. The distributed infinite time delays in the discrete-time domain are first defined. By utilizing a novel Lyapunov–Krasovskii functional and the Kronecker product, it is shown that the addressed discrete-time complex network with distributed delays is synchronized if certain linear matrix inequalities (LMIs) are feasible. The state estimation problem is then studied for the same complex network, where the purpose is to design a state estimator to estimate the network states through available output measurements such that, for all admissible discrete and distributed delays, the dynamics of the estimation error is guaranteed to be globally asymptotically stable. Again, an LMI approach is developed for the state estimation problem. Two simulation examples are provided to show the usefulness of the proposed global synchronization and state estimation conditions. It is worth pointing out that our main results are valid even if the nominal subsystems within the network are unstable.
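As an illustrative sketch only (the notation below is assumed, not quoted from the paper), a coupled discrete-time network of this type is often written as

$$x_i(k+1) = f\bigl(x_i(k)\bigr) + g\bigl(x_i(k-\tau(k))\bigr) + \sum_{d=1}^{\infty} \mu_d\, h\bigl(x_i(k-d)\bigr) + \sum_{j=1}^{N} w_{ij}\, \Gamma\, x_j(k),$$

where the second term carries the discrete delay, the infinite sum with a convergent kernel $\mu_d \ge 0$ models the distributed delay, $W=(w_{ij})$ is the outer coupling matrix and $\Gamma$ the inner coupling matrix. Stacking the $N$ nodes with the Kronecker product $W \otimes \Gamma$ and choosing a Lyapunov–Krasovskii functional reduces synchronization to LMI feasibility, and a sector-like condition on the nonlinearity has the form $[f(u)-f(v)-U(u-v)]^{\top}[f(u)-f(v)-V(u-v)] \le 0$.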

4.
This paper is concerned with the robust synchronization problem for an array of coupled stochastic discrete-time neural networks with time-varying delay. The individual neural network is subject to parameter uncertainty, stochastic disturbance, and time-varying delay, where the norm-bounded parameter uncertainties exist in both the state and weight matrices, the stochastic disturbance is in the form of a scalar Wiener process, and the time delay enters into the activation function. For the array of coupled neural networks, constant coupling and delayed coupling are considered simultaneously. We aim to establish easy-to-verify conditions under which the addressed neural networks are synchronized. By using the Kronecker product as an effective tool, a linear matrix inequality (LMI) approach is developed to derive several sufficient criteria ensuring that the coupled delayed neural networks are globally, robustly, exponentially synchronized in the mean square. The LMI-based conditions obtained depend not only on the lower bound but also on the upper bound of the time-varying delay, and can be solved efficiently via the Matlab LMI Toolbox. Two numerical examples are given to demonstrate the usefulness of the proposed synchronization scheme.
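The paper's criteria are delay-dependent LMIs solved with the Matlab LMI Toolbox; purely as a hedged illustration of what checking an LMI condition looks like in code, the sketch below tests a much simpler discrete-time Lyapunov LMI ($P \succ 0$, $A^{\top}PA - P \prec 0$) with CVXPY. The matrices and tolerances are made up for the example and are not from the paper.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch: feasibility of a simple discrete-time Lyapunov LMI,
# standing in for the (much larger) delay-dependent LMIs of the paper.
A = np.array([[0.5, 0.2],
              [-0.1, 0.3]])          # example system matrix (assumed)
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6                           # small margin to enforce strict inequalities
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P @ A - P << -eps * np.eye(n)]  # Lyapunov decrease condition

problem = cp.Problem(cp.Minimize(0), constraints)    # pure feasibility problem
problem.solve()

if problem.status == cp.OPTIMAL:
    print("LMI feasible; certificate P =\n", P.value)
else:
    print("LMI not verified feasible for this A:", problem.status)
```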

5.
Karsten  Andreas  Bernd  Ana D.  Thomas 《Neurocomputing》2008,71(7-9):1694-1704
Biologically plausible excitatory neural networks develop a persistent synchronized pattern of activity depending on spontaneous activity and synaptic refractoriness (short-term depression). With fixed synaptic weights, synchronous bursts of oscillatory activity are stable and involve the whole network. In our modeling study we investigate the effect of a dynamic Hebbian-like learning mechanism, spike-timing-dependent plasticity (STDP), on the changes of synaptic weights depending on synchronous activity and network connection strategies (small-world topology). We show that STDP modifies the weights of synaptic connections in such a way that synchronization of neuronal activity is considerably weakened. Networks with a higher proportion of long connections can sustain a higher level of synchronization in spite of STDP influence. The resulting distribution of the synaptic weights in single neurons depends both on the global statistics of firing dynamics and on the number of incoming and outgoing connections.
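As a hedged sketch of the plasticity rule the study builds on (the amplitudes and time constants below are generic textbook values, not the paper's), a pair-based STDP update can be written as follows.

```python
import numpy as np

# Minimal pair-based STDP sketch (generic parameters, not the paper's):
# a synapse is potentiated when the presynaptic spike precedes the
# postsynaptic spike, and depressed when it follows it.
A_PLUS, A_MINUS = 0.01, 0.012      # assumed amplitudes (slightly depression-biased)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # assumed STDP time constants (ms)
W_MAX = 1.0

def stdp_dw(dt_ms):
    """Weight change for a pre/post spike pair with dt = t_post - t_pre (ms)."""
    if dt_ms > 0:   # pre before post -> potentiation
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    else:           # post before (or with) pre -> depression
        return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

def update_weight(w, pre_spikes, post_spikes):
    """Apply all-to-all pair-based STDP for one synapse and clip to [0, W_MAX]."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return float(np.clip(w, 0.0, W_MAX))

# Example: a synapse whose presynaptic neuron consistently fires 5 ms early gets stronger.
print(update_weight(0.5, pre_spikes=[10.0, 60.0], post_spikes=[15.0, 65.0]))
```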

6.
In the past decade the importance of synchronized dynamics in the brain has emerged from both empirical and theoretical perspectives. Fast dynamic synchronous interactions of an oscillatory or nonoscillatory nature may constitute a form of temporal coding that underlies feature binding and perceptual synthesis. The relationship between synchronization among neuronal populations and the population firing rates addresses two important issues: the distinction between rate coding and synchronization coding models of neuronal interactions and the degree to which empirical measurements of population activity, such as those employed by neuroimaging, are sensitive to changes in synchronization. We examined the relationship between mean population activity and synchronization using biologically plausible simulations. In this article, we focus on continuous stationary dynamics. (In a companion article, Chawla (forthcoming), we address the same issue using stimulus-evoked transients.) By manipulating parameters such as extrinsic input, intrinsic noise, synaptic efficacy, density of extrinsic connections, the voltage-sensitive nature of postsynaptic mechanisms, the number of neurons, and the laminar structure within the populations, we were able to introduce variations in both mean activity and synchronization under a variety of simulated neuronal architectures. Analyses of the simulated spike trains and local field potentials showed that in nearly every domain of the model's parameter space, mean activity and synchronization were tightly coupled. This coupling appears to be mediated by an increase in synchronous gain when effective membrane time constants are lowered by increased activity. These observations show that under the assumptions implicit in our models, rate coding and synchrony coding in neural systems with reciprocal interconnections are two perspectives on the same underlying dynamic. This suggests that in the absence of specific mechanisms decoupling changes in synchronization from firing levels, indexes of brain activity that are based purely on synaptic activity (e.g., functional magnetic resonance imaging) may also be sensitive to changes in synchronous coupling.
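To make the "mean activity versus synchronization" comparison concrete, here is a hedged sketch (my own choice of measure, not necessarily the one used in the article) that computes a population firing rate and a variance-based synchrony index from binned spike trains.

```python
import numpy as np

def rate_and_synchrony(spike_counts, bin_ms=1.0):
    """spike_counts: (n_neurons, n_bins) array of binned spikes.
    Returns mean population rate (Hz) and a [0, 1] synchrony index:
    variance of the population-averaged signal normalized by the mean
    variance of the individual signals (a standard 'chi' measure)."""
    rate_hz = spike_counts.mean() * 1000.0 / bin_ms
    pop_signal = spike_counts.mean(axis=0)
    chi_sq = pop_signal.var() / spike_counts.var(axis=1).mean()
    return rate_hz, float(np.sqrt(chi_sq))

# Toy check with assumed data: perfectly synchronous vs. independent spiking.
rng = np.random.default_rng(1)
sync = np.tile(rng.random(1000) < 0.02, (50, 1)).astype(float)   # all neurons share spikes
indep = (rng.random((50, 1000)) < 0.02).astype(float)            # independent spikes
print(rate_and_synchrony(sync))   # same rate, synchrony index near 1
print(rate_and_synchrony(indep))  # same rate, synchrony index near 1/sqrt(50)
```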

7.
We study pulse-coupled neural networks that satisfy only two assumptions: each isolated neuron fires periodically, and the neurons are weakly connected. Each such network can be transformed by a piecewise continuous change of variables into a phase model, whose synchronization behavior and oscillatory associative properties are easier to analyze and understand. Using the phase model, we can predict whether a given pulse-coupled network has oscillatory associative memory, or what minimal adjustments should be made so that it can acquire memory. In the search for such minimal adjustments we obtain a large class of simple pulse-coupled neural networks that can memorize and reproduce synchronized temporal patterns the same way a Hopfield network does with static patterns. The learning occurs via modification of synaptic weights and/or synaptic transmission delays.
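A hedged sketch of the reduction described (notation mine, not the paper's): for weakly pulse-coupled neurons that fire periodically in isolation, the network can be put in the canonical phase form

$$\dot{\theta}_i = \omega_i + \varepsilon \sum_{j=1}^{n} H_{ij}(\theta_j - \theta_i), \qquad i = 1, \dots, n,$$

where $\theta_i$ is the phase of neuron $i$, $\varepsilon \ll 1$ is the coupling strength, and the $2\pi$-periodic interaction functions $H_{ij}$ are determined by the neurons' phase response properties and the synaptic pulses. Oscillatory associative memory then corresponds to storing phase patterns in the $H_{ij}$, through the synaptic weights and/or transmission delays (which shift the arguments of $H_{ij}$), much as a Hopfield network stores static patterns in its weight matrix.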

8.
Many challenging problems in the analysis and control of neural brain rhythms have been motivated by the advent of deep brain stimulation as a therapeutic treatment for a wide variety of neurological disorders. In a computational setting, neural rhythms are often modeled using large populations of coupled, conductance-based neurons. Control of such models comes with a long list of challenges: the underlying dynamics are nonnegligibly nonlinear, high dimensional, and subject to noise; hardware and biological limitations place restrictive constraints on allowable inputs; direct measurement of system observables is generally limited; and the resulting systems are typically highly underactuated. In this review article, we highlight a collection of recent analysis techniques and control frameworks that have been developed to contend with these difficulties. Particular emphasis is placed on the problem of desynchronization for a population of pathologically synchronized neural oscillators, a problem that is motivated by applications to Parkinson’s disease where pathological synchronization is thought to contribute to the associated motor control symptoms. We also discuss other recent neural control applications that consider entrainment, phase randomization, synchronization, and clustering.

9.
The synchronous firing of neurons in a pulse-coupled neural network composed of excitatory and inhibitory neurons is analyzed. The neurons are connected by chemical synapses, and the inhibitory neurons are additionally coupled by electrical synapses. When electrical synapses are introduced, periodically synchronized firing as well as chaotically synchronized firing is widely observed. Moreover, we find stochastic synchrony, where the ensemble-averaged dynamics shows synchronization in the network but each neuron has a low firing rate and the firing of the neurons appears stochastic. Stochastic synchrony of chaos, corresponding to a chaotic attractor, is also found.

10.
In this paper, a synchronization problem is investigated for an array of coupled stochastic discrete-time neural networks with both discrete and distributed time-varying delays. By utilizing a novel Lyapunov function and the Kronecker product, it is shown that the addressed stochastic discrete-time neural networks are synchronized if certain linear matrix inequalities (LMIs) are feasible. No model transformation or free-weighting matrices are employed in the derivation of the results, and the LMIs can be solved efficiently via the Matlab LMI Toolbox. The proposed synchronization criteria are less conservative than some recently known ones in the literature, as demonstrated via two numerical examples.

11.
This paper presents an exponential synchronization scheme between two chaotic systems with different structures and parameters. A unified model consisting of a linear dynamic system and a bounded static nonlinear operator is employed to describe these totally different chaotic systems. A novel state feedback control law is established to exponentially synchronize the two unified models with different parameters. Most chaotic systems with different structures and parameters, such as Hopfield neural networks, cellular neural networks, Chua’s circuits, unified chaotic systems, Qi systems, and chaotic recurrent multilayer perceptrons, can be transformed into this unified model with the synchronization controller designed in a unified way. Two numerical examples are exploited to illustrate the effectiveness of the proposed design schemes.
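As a hedged sketch of the unified form described (symbols are my own, and the controller shown is one convenient illustrative choice, not necessarily the paper's construction), write the drive and response systems as a linear part plus a bounded static nonlinearity:

$$\dot{x} = A_1 x + B_1 g_1(x), \qquad \dot{y} = A_2 y + B_2 g_2(y) + u, \qquad \|g_i(\cdot)\| \le M_i.$$

With the illustrative feedback $u = (A_1 - A_2)x + B_1 g_1(x) - B_2 g_2(y) + K e$, where $e = x - y$, the error obeys $\dot{e} = (A_2 - K)e$, so any gain $K$ making $A_2 - K$ Hurwitz yields $\|e(t)\| \le c\, e^{-\lambda t}\|e(0)\|$, i.e., exponential synchronization despite the different structures and parameters.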

12.
In this paper, complex dynamical synchronization in a non-linear model of a neural system is studied, and the computational significance of the behaviours is explored. The local neural dynamics is determined by voltage- and ligand-gated ion channels and feedback between densely interconnected excitatory and inhibitory neurons. A mesoscopic array of local networks is modelled by introducing coupling between the local networks via weak excitatory-to-excitatory connectivity. It is shown that with modulation of this long-range synaptic coupling, the system undergoes a transition from independent oscillations to stable chaotic synchronization. Between these states exists a 'weakly' stable state associated with complex, intermittent behaviour in the temporal domain and clusters of synchronous regions in the spatial domain. The paper concludes with a discussion of the putative relevance of such processes in the brain, including the role of neuromodulatory systems and the mechanisms underlying sensory perception, adaptation, computation and complexity.

13.
Impulses-induced exponential stability in recurrent delayed neural networks
The present paper formulates and studies a model of recurrent neural networks with time-varying delays in the presence of impulsive connectivity among the neurons. This model can well describe practical architectures of more realistic neural networks. Some novel yet generic criteria for global exponential stability of such neural networks are derived by establishing an extended Halanay differential inequality on impulsive delayed dynamical systems. The distinctive feature of this work is to address exponential stability issues without a priori stability assumptions for the corresponding delayed neural networks without impulses. It is shown that the impulses in neuronal connectivity play an important role in inducing global exponential stability of recurrent delayed neural networks even if the network without impulses is itself unstable or chaotic. Furthermore, an example and simulations are given to illustrate the practical nature of the novel results.
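A hedged sketch of the model class (my notation, not the paper's): a recurrent delayed network with impulsive connectivity can be written as

$$\dot{x}(t) = -Cx(t) + A f\bigl(x(t)\bigr) + B f\bigl(x(t-\tau(t))\bigr) + J, \quad t \ne t_k, \qquad x(t_k^{+}) = E_k\, x(t_k^{-}),$$

and a Halanay-type argument for such systems bounds a functional $V$ satisfying $D^{+}V(t) \le -\alpha V(t) + \beta \sup_{t-\tau \le s \le t} V(s)$ between impulses together with jump conditions $V(t_k^{+}) \le d_k V(t_k^{-})$, which yields an exponential estimate of the form $V(t) \le \kappa\, e^{-\lambda (t - t_0)} \sup_{t_0-\tau \le s \le t_0} V(s)$ when the impulse gains $d_k$ and the impulse intervals are suitably constrained. In this way suitably chosen impulses can stabilize a delayed network that is unstable or chaotic on its own.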

14.
Synchronization of neurons coupled by excitatory chemical synapses
Based on the stability theory of dynamical systems, numerically computed bifurcation diagrams, and the largest Lyapunov exponent of the linearized system, the synchronization dynamics of fast-spiking neurons coupled through excitatory chemical synapses is studied. The results show that, as several key parameters are varied, the coupled neurons can exhibit rich synchronization behaviors, such as synchronization with various periods and chaotic synchronization. These results help guide the understanding of synchronous activity in neuronal systems.

15.
In this paper, the problem of adaptive synchronization of uncertain coupled complex networks is investigated. Controllers and adaptive laws are designed to ensure synchronization of a general complex network model. In particular, synchronization of coupled stochastic networks subject to random perturbations is studied, with a reference node introduced as the target node for synchronization. An example is simulated on delayed neural networks coupled in a small-world network topology, which demonstrates the feasibility and effectiveness of the proposed adaptive control method. Copyright © 2010 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society
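A hedged sketch of the kind of adaptive scheme described (notation assumed, not quoted from the paper): for node errors $e_i = x_i - s$ relative to the reference trajectory $s(t)$, a typical controller and adaptive gain law are

$$u_i(t) = -k_i(t)\, e_i(t), \qquad \dot{k}_i(t) = \gamma_i\, \|e_i(t)\|^2,$$

so each gain $k_i$ grows only while its node is out of sync and settles once $e_i \to 0$; a Lyapunov function of the form $V = \sum_i \|e_i\|^2 + \sum_i (k_i - k^{*})^2/\gamma_i$ is then typically used to show convergence of all errors despite the parameter uncertainty and random perturbations.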

16.
The main objective of the present paper is to further investigate finite-time synchronization of a general complex dynamical network from the viewpoint of dynamics and control. By utilizing finite-time stability theory combined with inequality techniques, several sufficient criteria on finite-time synchronization are derived analytically, and some effects of control parameters on synchronization speed and synchronization time are drawn out. It is shown that control gains play an important role in making the dynamical networks finite-time exponentially synchronized. Furthermore, the results are applied to a typical nearest-neighbor coupled network composed of chaotic FitzHugh-Nagumo (FHN) neuron oscillators, and numerical simulations are given to demonstrate the effectiveness of the proposed control methodology.
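A hedged reminder of the standard finite-time stability argument such criteria rest on (a textbook lemma, stated in my notation rather than the paper's): if a Lyapunov function satisfies

$$\dot{V}(t) \le -c\, V(t)^{\eta}, \qquad c > 0,\ 0 < \eta < 1,$$

then $V$ reaches zero in finite time, with settling time bounded by

$$T \le \frac{V(0)^{1-\eta}}{c\,(1-\eta)},$$

which makes explicit how larger control gains (larger $c$) and smaller exponents $\eta$ shorten the synchronization time for the coupled FitzHugh-Nagumo network.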

17.
In this study, we propose an adaptive recurrent neural network scheme for the synchronization of H-mode and edge-localized modes, which is important for obtaining a long-pulse tokamak regime without disruptions. The deterministic part of the plasma behavior should be synchronized with the stochastic part by introducing a stochastic artificial neural network.

18.
The neural solids are novel neural networks devised for solving optimization problems. They are dual to Hopfield networks, but with a quartic energy function. These solids are open architectures, in the sense that different choices of the basic elements and interfacings solve different optimization problems. The basic element is the neural resonator (a triangle in the three-dimensional case), composed of resonant neurons underlying a self-organizing learning. This module is able to solve elementary optimization problems such as the search for the nearest orthonormal matrix to a given one. Then, an example of a more complex solid, the neural decomposer, whose architecture is composed of neural resonators and their mutual connections, is given. This solid can solve more complex optimization problems such as the decomposition of the essential matrix, which is a very important technique in computer vision.
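For the elementary problem mentioned, finding the nearest orthonormal matrix to a given one, a closed-form solution exists via the SVD (polar decomposition). The sketch below uses that classical route as a reference point; it is not the neural resonator's self-organizing dynamics, only the target those dynamics are meant to reach.

```python
import numpy as np

def nearest_orthonormal(M):
    """Nearest orthogonal matrix to M in the Frobenius norm: if M = U S Vt
    is the SVD, the minimizer of ||Q - M||_F over orthogonal Q is U @ Vt."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt

# Example with an assumed 3x3 matrix (the three-dimensional, triangle-resonator case).
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Q = nearest_orthonormal(M)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: Q is orthonormal
print(np.linalg.norm(Q - M, "fro"))      # distance to the original matrix
```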

19.
This paper studies projective lag synchronization of coupled neural networks with time delay and parameter mismatch. An adaptive controller is designed to achieve weak projective lag synchronization of coupled neural networks. This method is employed to realize projective lag synchronization between coupled neural systems to within a bounded error level. Numerical simulation illustrates the effectiveness of the results.
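In a hedged sketch of the notion (symbols mine): for drive state $x(t)$ and response state $y(t)$, projective lag synchronization requires the error

$$e(t) = y(t) - \lambda\, x(t - \sigma)$$

to converge, where $\lambda$ is the projective factor and $\sigma$ the lag; "weak" synchronization with an error level means $\limsup_{t\to\infty}\|e(t)\| \le \epsilon$ for a prescribed bound $\epsilon$ rather than $e(t) \to 0$, which is what the adaptive controller is designed to guarantee under parameter mismatch.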

20.
It is a common practice to adjust the number of hidden neurons in training, and the removal of neurons plays an indispensable role in this architecture manipulation. In this paper, a succinct and unified mathematical form for removing neurons, based on orthogonal projection and crosswise propagation in a feedforward layer, is extended to the generic case and further developed for several neural networks with different architectures. For a trained neural network, the method is divided into three stages. In the first stage, the output vectors of the feedforward observation layer are classified into clusters. In the second stage, orthogonal projection is performed to locate a neuron whose output vector can be approximated by the other output vectors in the same cluster with the least information loss. In the third stage, the previously located neuron is removed and crosswise propagation is implemented in each cluster. On completion of the three stages, the neural network with the pruned architecture is retrained. If the number of clusters is one, the method degenerates into its special case in which only one neuron is removed. Applications to different neural network architectures, with an extension to the support vector machine, are exemplified. The methodology supports, in theory, large-scale applications of neural networks in the real world. In addition, with minor modifications, the unified method is instructive for pruning other networks as long as they have a network structure similar to the ones in this paper. It is concluded that the unified pruning method in this paper equips us with an effective and powerful tool for simplifying the architecture of neural networks.
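A hedged sketch of the projection stage as I read it (the selection criterion and least-squares repair below are my reconstruction, not the paper's exact formulas): within a cluster of hidden-unit output vectors, pick the neuron whose outputs are best approximated by a linear combination of the others, remove it, and fold the fitted coefficients into the outgoing weights so that the layer's output changes as little as possible.

```python
import numpy as np

def prune_one_neuron(H, W_out):
    """H: (n_samples, n_hidden) hidden outputs; W_out: (n_hidden, n_out) outgoing weights.
    Returns (H_pruned, W_out_adjusted, removed_index).
    Hedged reconstruction: choose the neuron with the smallest least-squares residual
    when projected onto the span of the remaining neurons' outputs, then redistribute
    its contribution through the fitted coefficients."""
    n_hidden = H.shape[1]
    best = None
    for i in range(n_hidden):
        others = np.delete(H, i, axis=1)
        coeffs = np.linalg.lstsq(others, H[:, i], rcond=None)[0]
        err = np.linalg.norm(others @ coeffs - H[:, i])   # information lost by removing i
        if best is None or err < best[0]:
            best = (err, i, coeffs)
    _, i, coeffs = best
    # h_i ~ sum_j c_j h_j, so neuron i's outgoing weights are absorbed by the others.
    W_adj = np.delete(W_out, i, axis=0) + np.outer(coeffs, W_out[i])
    return np.delete(H, i, axis=1), W_adj, i

# Toy example with assumed shapes: 100 samples, 6 hidden units, 2 outputs.
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 6))
H[:, 5] = 0.7 * H[:, 0] - 0.2 * H[:, 3]            # make one neuron redundant
W_out = rng.standard_normal((6, 2))
H2, W2, removed = prune_one_neuron(H, W_out)
print(removed, np.linalg.norm(H @ W_out - H2 @ W2))  # removed index, near-zero output change
```

After the removal, the pruned network would be retrained, matching the final step described in the abstract.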

