Similar Documents
20 similar documents found (search time: 15 ms)
1.
Computing with continuous attractors: stability and online aspects
Wu S, Amari S. Neural Computation, 2005, 17(10): 2215-2239
Two issues concerning the application of continuous attractors in neural systems are investigated: the computational robustness of continuous attractors with respect to input noise and the implementation of Bayesian online decoding. In a perfect mathematical model of continuous attractors, decoding results for stimuli are highly sensitive to input noise, and this sensitivity is the inevitable consequence of the system's neutral stability. To overcome this shortcoming, we modify the conventional network model by including extra dynamical interactions between neurons. These interactions vary according to the biologically plausible Hebbian learning rule and have the computational role of memorizing and propagating stimulus information accumulated over time. As a result, the new network model responds to the history of external inputs over a period of time and hence becomes insensitive to short-term fluctuations. Also, since the dynamical interactions provide a mechanism to convey prior knowledge of the stimulus, that is, the information about the stimulus presented previously, the network effectively implements online Bayesian inference. This study also reveals some interesting behaviors in neural population coding, such as the trade-off between decoding stability and the speed of tracking time-varying stimuli, and the relationship between neural tuning width and tracking speed.
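The robustness issue can be illustrated with a toy population-decoding sketch (this is not the authors' model; the Gaussian tuning curves, noise level, and 20-frame window below are all illustrative assumptions): a center-of-mass decoder applied to instantaneous noisy activity is compared with one applied to activity averaged over a short history, which suppresses short-term fluctuations much as accumulating stimulus information over time does.

```python
import math, random

random.seed(0)

N = 64                                   # neurons with preferred stimuli on [-1, 1]
prefs = [-1 + 2 * i / (N - 1) for i in range(N)]
SIGMA = 0.3                              # tuning width (illustrative)
NOISE = 0.2                              # input noise amplitude (illustrative)

def response(stim):
    """Noisy Gaussian population response to a stimulus."""
    return [math.exp(-(p - stim) ** 2 / (2 * SIGMA ** 2))
            + NOISE * random.gauss(0, 1) for p in prefs]

def decode(act):
    """Center-of-mass decoder (rectified to avoid negative weights)."""
    act = [max(a, 0.0) for a in act]
    return sum(p * a for p, a in zip(prefs, act)) / sum(act)

frames = [response(0.0) for _ in range(400)]

# Instantaneous decoding error vs. decoding from 20-frame running averages.
err_inst = sum(abs(decode(f)) for f in frames) / len(frames)
avg = lambda chunk: [sum(col) / len(chunk) for col in zip(*chunk)]
smoothed = [avg(frames[i:i + 20]) for i in range(0, 400, 20)]
err_smooth = sum(abs(decode(f)) for f in smoothed) / len(smoothed)

print(err_inst, err_smooth)   # averaging over history shrinks the decoding error
```

Averaging n frames cuts the input noise roughly by a factor of sqrt(n), which is the intuition behind responding to the history of inputs rather than to each noisy snapshot.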

2.
Computer simulation of strange attractors in neural networks
To simulate chaotic phenomena in neural networks, this paper describes phase-space reconstruction techniques and introduces methods for computing a system's largest Lyapunov exponent and correlation dimension from a one-dimensional observable. Using the Lyapunov exponent as a criterion, strange attractors of a three-layer feedback neural network are constructed, their dynamical characteristics are analyzed, and their correlation dimensions are computed. The study shows that chaotic neural networks have complex dynamical characteristics and exhibit a variety of coexisting attractors: not only fixed points, limit cycles, and tori, but also strange attractors.
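The Lyapunov-exponent criterion can be sketched on a system whose exponent is known in closed form: the logistic map at r = 4, where the largest exponent is ln 2. The map and all parameters below stand in for the paper's three-layer network, which is not reproduced here; a positive estimate signals chaos.

```python
import math

def largest_lyapunov(r=4.0, x0=0.1, n=100_000, transient=1_000):
    """Average log-derivative of the logistic map x -> r*x*(1-x) along an orbit."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)| along the orbit
        x = r * x * (1 - x)
    return total / n

lam = largest_lyapunov()
print(lam)   # close to ln 2 ~ 0.693 for r = 4: positive, hence chaotic
```

The same orbit data could feed a Grassberger-Procaccia correlation-dimension estimate, which is the other diagnostic the paper computes.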

3.
Attractor networks have been one of the most successful paradigms in neural computation and have been used as models of computation in the nervous system. Recently, we proposed a paradigm called 'latent attractors', in which attractors embedded in a recurrent network via Hebbian learning are used to channel the network's response to external input rather than becoming manifest themselves. This allows the network to generate context-sensitive internal codes in complex situations. Latent attractors are particularly helpful in explaining computations within the hippocampus, a brain region of fundamental significance for memory and spatial learning. Latent attractor networks are a special case of associative memory networks. The model studied here consists of a two-layer recurrent network with attractors stored in the recurrent connections using a clipped Hebbian learning rule. The firing in both layers is competitive: K-winners-take-all firing. The number of neurons allowed to fire, K, is smaller than the size of the active set of the stored attractors. The performance of latent attractor networks depends on the number of such attractors that a network can sustain. In this paper, we use signal-to-noise methods developed for standard associative memory networks to carry out a theoretical and computational analysis of the capacity and dynamics of latent attractor networks. This is an important first step in making latent attractors a viable tool in the repertoire of neural computation. The method developed here leads to numerical estimates of the capacity limits and dynamics of latent attractor networks, and represents a general approach to analysing standard associative memory networks with competitive firing. The theoretical analysis is based on estimates of the dendritic sum distributions using a Gaussian approximation. Because of the competitive firing property, the capacity results are estimated only numerically, by iteratively computing the probability of erroneous firings.
The analysis covers two cases: the simple-case analysis, which accounts for the correlations between weights due to shared patterns, and the detailed-case analysis, which also includes the temporal correlations between the network's present and previous state. The latter better predicts the dynamics of the network state for non-zero initial spurious firing. The theoretical analysis also shows the influence of the model's main parameters on the storage capacity.
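The storage and firing scheme analysed here can be sketched minimally as follows (the pattern sizes, K, and the index-based tie-breaking are illustrative assumptions, not the paper's settings): clipped Hebbian weights record co-activity, dendritic sums are computed from a cue, and only the top-K neurons fire.

```python
N = 20                                                # neurons
patterns = [{0, 1, 2, 3, 4}, {10, 11, 12, 13, 14}]    # stored active sets

# Clipped Hebbian rule: w[i][j] = 1 if i and j are co-active in any pattern.
w = [[0] * N for _ in range(N)]
for p in patterns:
    for i in p:
        for j in p:
            if i != j:
                w[i][j] = 1

def k_wta(cue, k):
    """Dendritic sums driven by the cue, then let the top-k neurons fire."""
    sums = [sum(w[i][j] for i in cue) for j in range(N)]
    order = sorted(range(N), key=lambda j: (-sums[j], j))  # ties broken by index
    return set(order[:k])

winners = k_wta({0, 1, 2}, k=3)   # partial cue from the first pattern; k < active-set size
print(winners)                    # a subset of the first stored pattern's active set
```

Erroneous firing, the quantity the signal-to-noise analysis tracks, would correspond to winners falling outside the cued pattern's active set.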

4.
5.
Continuous attractors of a class of recurrent neural networks
Recurrent neural networks (RNNs) may possess continuous attractors, a property that many brain theories have implicated in learning and memory. There is good evidence that continuous stimuli, such as orientation, motion direction, and the spatial location of objects, could be encoded as continuous attractors in neural networks. The dynamical behaviors of continuous attractors are interesting properties of RNNs. This paper studies the continuous attractors of a class of RNNs in which the inhibition among neurons is realized through a kind of subtractive mechanism. It is shown that if the synaptic connections are Gaussian-shaped and the other parameters are appropriately selected, the network can exactly realize continuous attractor dynamics. Conditions are derived to guarantee the validity of the selected parameters, and simulations are provided for illustration.
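The mechanism can be sketched with a one-dimensional Amari-style field: Gaussian recurrent excitation minus a constant (subtractive) inhibition sustains a localized activity bump after the input is removed, and the bump can sit anywhere on the line, which is the continuum of attractors. All numbers below (kernel width, inhibition level, grid size, firing function) are illustrative choices, not the conditions derived in the paper.

```python
import math

N = 100
dx = 2 * math.pi / N
pos = [-math.pi + i * dx for i in range(N)]      # neurons on [-pi, pi)

A, SIG, B, H = 1.0, 0.3, 0.2, -0.05              # excitation, width, inhibition, resting input

# Gaussian excitation minus a constant: the subtractive inhibition mechanism.
W = [[(A * math.exp(-(pos[i] - pos[j]) ** 2 / (2 * SIG ** 2)) - B) * dx
      for j in range(N)] for i in range(N)]

u = [0.0] * N
DT = 0.1
for step in range(300):
    act = [j for j, v in enumerate(u) if v > 0]  # Heaviside firing
    for i in range(N):
        rec = sum(W[i][j] for j in act)
        stim = 0.5 * math.exp(-pos[i] ** 2 / 0.08) if step < 100 else 0.0
        u[i] += DT * (-u[i] + rec + H + stim)    # leaky rate dynamics

active = [i for i, v in enumerate(u) if v > 0]
peak = pos[max(range(N), key=lambda i: u[i])]
print(len(active), peak)   # a bump of activity persists near 0 after the input is removed
```

Because translating the bump costs no energy, its position neutrally encodes the stimulus value, which is exactly what makes such networks sensitive to noise in the sense of entry 1 above.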

6.
A fuzzy neural network is constructed from the rules derived by the rough set method, and the initial values of the network parameters are estimated from the rule parameters and the discretization results, so that training converges quickly to the optimum. The approach is applied to modeling the distillation process of a solvent dehydration column in a PTA plant. The resulting model outperforms an ordinary feedforward neural network, and the rough-fuzzy neural network eliminates redundant information in the decision system and reduces model complexity.

7.
Abstract: This paper presents the architecture of a neural network expert system shell. The system captures every rule as a rudimentary neural network, called a network element (netel). The aim is to preserve the semantic structure of the expert system rules while incorporating the learning capability of neural networks into the inferencing mechanism. These netel rules are dynamically linked up to form the rule tree during the inferencing process, just as in a conventional expert system. The system is also able to adjust its inference strategy according to different users and situations. A rule editor is provided to enable easy maintenance of the netel rules. These components are housed under a user-friendly interface. An application…

8.
In this paper, a fuzzy inference network model for search strategy based on a neural logic network is presented. The model describes the search strategy, and the neural logic network is used to perform the search. Fuzzy logic can produce appropriate inference results by ignoring some information in the reasoning process. Neural networks are powerful tools for the reasoning process but are not well suited to logical reasoning; yet to model human knowledge, the capability for logical reasoning is as important as the capability for the reasoning process. A newer kind of network, the neural logic network, is able to perform logical reasoning. Because fuzzy inference is a form of fuzzy logical reasoning, we construct a fuzzy inference network model based on the neural logic network, extending the existing rule inference network, and modify the traditional propagation rule.

9.
Fractal variation of dynamical attractors is observed in complex-valued neural networks in which a negative-resistance nonlinearity is introduced as the neuron's nonlinear function. When a parameter of the negative-resistance nonlinearity is changed continuously, the network attractors exhibit a kind of fractal variation in a certain parameter range between the deterministic and non-deterministic attractor ranges. The fractal pattern has a convergence point, which is also a critical point where deterministic attractors change into chaotic attractors. This result suggests that complex-valued neural networks with negative-resistance nonlinearity present dynamical complexity at the so-called edge of chaos. (The author is also with the Research Center for Advanced Science and Technology (RCAST), University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153, Japan.)

10.
A neural network inverse dynamics controller with adjustable weights is compared with a computed-torque type adaptive controller. Lyapunov stability techniques, usually applied to adaptive systems, are used to derive a globally asymptotically stable adaptation law for a single-layer neural network controller that bears similarities to the well-known delta rule for neural networks. This alternative learning rule allows the learning rates of each connection weight to be individually adjusted to give faster convergence. The role of persistently exciting inputs in ensuring parameter convergence, often mentioned in the context of adaptive systems, is emphasized in relation to the convergence of neural network weights. A coupled, compound pendulum system is used to develop inverse dynamics controllers based on adaptive and neural network techniques. Adaptation performance is compared for a model-based adaptive controller and a simple neural network utilizing both delta-rule learning and the alternative adaptation law.
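The alternative adaptation law itself is not reproduced here, but the underlying idea, delta-rule learning with an individually adjustable rate per connection weight, can be sketched for a single linear neuron identifying unknown parameters. The target weights, per-weight rates, and data below are illustrative assumptions.

```python
import random

random.seed(1)
true_w = [2.0, -1.0, 0.5]        # unknown parameters to be identified
rates = [0.1, 0.05, 0.2]         # a separate learning rate for each weight
w = [0.0, 0.0, 0.0]

for _ in range(2000):
    # Random inputs act as a persistently exciting signal.
    x = [random.uniform(-1, 1) for _ in range(3)]
    err = (sum(ti * xi for ti, xi in zip(true_w, x))
           - sum(wi * xi for wi, xi in zip(w, x)))
    # Delta rule: each weight moves along err * input, scaled by its own rate.
    w = [wi + ri * err * xi for wi, ri, xi in zip(w, rates, x)]

print(w)   # approaches true_w given persistent excitation
```

If the inputs were not persistently exciting (say, confined to a lower-dimensional subspace), the error could vanish without the weights converging to the true parameters, which is the point the paper emphasizes.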

11.
丁圣  高风 《计算机仿真》2006,23(11):259-262
The stock market is a complex nonlinear dynamical system whose internal regularities are difficult to reveal with traditional time-series forecasting techniques, whereas neural network theory, developed over the past decade or so, has become a powerful tool for forecasting and modeling nonlinear dynamical systems. This paper introduces the trend-extraction technique from wavelet analysis and builds a forecasting model that combines wavelet analysis with neural networks. The model is applied to a moving-average trading rule for stocks and compared with an ordinary neural network forecasting model. The case study shows that the wavelet neural network method improves forecasting accuracy and provides an effective, practical complement to moving-average trading rules for technical analysis of the stock market.
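The trend-extraction idea can be sketched with the simplest wavelet, the Haar: repeated pairwise averaging keeps the low-frequency approximation (the trend), and the residual carries the fluctuations. This stands in for the paper's wavelet method, whose basis and decomposition depth are not specified here, and the toy price series is invented.

```python
def haar_trend(signal, levels=2):
    """Keep only the Haar approximation coefficients, then upsample back."""
    trend = list(signal)
    for _ in range(levels):
        trend = [(a + b) / 2 for a, b in zip(trend[::2], trend[1::2])]  # pairwise mean
    for _ in range(levels):
        trend = [v for v in trend for _ in (0, 1)]                      # piecewise-constant upsample
    return trend

prices = [10, 11, 10, 12, 14, 13, 15, 16]      # toy closing prices (length divisible by 2**levels)
trend = haar_trend(prices)
detail = [p - t for p, t in zip(prices, trend)]
print(trend)    # the smoothed trend fed to the predictor
print(detail)   # the high-frequency fluctuations
```

In the hybrid scheme, a neural network would be trained on the extracted trend (or on trend plus detail channels) rather than on the raw series.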

12.
Suemitsu Y, Nara S. Neural Computation, 2004, 16(9): 1943-1957
Chaotic dynamics introduced into a neural network model are applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 by a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between the embedded attractors in state space, and these dynamics enable the object to move in various directions. Switching a system parameter between a chaotic regime and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Computer simulations show that the success rate of this method over 300 trials is higher than that of a random walk. To investigate why the proposed method performs better, we calculate and discuss statistical data with respect to the dynamical structure.

13.
This paper improves a public-key encryption algorithm for wireless sensor networks based on neural network attractor computation. An improved q-composite key predistribution scheme replaces the original Diffie-Hellman key agreement algorithm and is used to initialize the neural network, which then produces the attractors that encrypt the data. Before the plaintext is encrypted, it is encoded with an attractor Markov model, and finally the basin of attraction is output as the ciphertext. The algorithm is simulated in OMNeT++ and Matlab and compared with similar algorithms in terms of energy consumption. The avalanche effect, ciphertext balance, and ciphertext independence of the algorithm are tested, and its security, scalability, and resistance to cryptanalysis are analyzed.

14.
GenSoFNN: a generic self-organizing fuzzy neural network
Existing neural fuzzy (neuro-fuzzy) networks proposed in the literature can be broadly classified into two groups. The first group consists essentially of fuzzy systems with self-tuning capabilities and requires an initial rule base to be specified prior to training. The second group of neural fuzzy networks, on the other hand, is able to automatically formulate the fuzzy rules from the numerical training data: no initial rule base needs to be specified prior to training; a cluster analysis is first performed on the training data, and the fuzzy rules are subsequently derived through the proper connections of these computed clusters. However, most existing neural fuzzy systems (whether they belong to the first or the second group) encounter one or more of the following major problems: (1) an inconsistent rule base; (2) heuristically defined node operations; (3) susceptibility to noisy training data and the stability-plasticity dilemma; and (4) the need for prior knowledge, such as the number of clusters to be computed. Hence, a novel neural fuzzy system that is immune to the above-mentioned deficiencies is proposed in this paper. This new neural fuzzy system is named the generic self-organizing fuzzy neural network (GenSoFNN). The GenSoFNN network has strong noise-tolerance capability, employing a new clustering technique known as discrete incremental clustering (DIC). The fuzzy rule base of the GenSoFNN network is consistent and compact, as GenSoFNN has built-in mechanisms to identify and prune redundant and/or obsolete rules. Extensive simulations were conducted using the proposed GenSoFNN network, and its performance is encouraging when benchmarked against other neural and neural fuzzy systems.
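The incremental-clustering idea behind DIC (one pass over the data, no preset cluster count) can be illustrated with a heavily simplified sketch. The real DIC in GenSoFNN maintains fuzzy sets rather than point centroids; here it is replaced by a plain distance threshold, so treat this only as the incremental skeleton, with the data and radius invented for illustration.

```python
def incremental_cluster(points, radius):
    """One pass: join the nearest centroid within `radius`, else open a new cluster."""
    centroids, members = [], []
    for x in points:
        if centroids:
            j = min(range(len(centroids)), key=lambda k: abs(x - centroids[k]))
            if abs(x - centroids[j]) <= radius:
                members[j].append(x)
                centroids[j] = sum(members[j]) / len(members[j])  # update running mean
                continue
        centroids.append(x)       # no centroid is close enough: a new cluster is born
        members.append([x])
    return centroids

data = [0.1, 0.15, 0.05, 5.0, 5.2, 0.12, 4.9]
print(incremental_cluster(data, radius=1.0))   # two clusters emerge without a preset count
```

Because the number of clusters is discovered rather than specified, problem (4) above, the need for prior knowledge of the cluster count, does not arise.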

15.
Traditional connectionist theory-refinement systems map the dependencies of a domain-specific rule base into a neural network and then refine this network using neural learning techniques. Most of these systems, however, lack the ability to refine their network's topology and are thus unable to add new rules to the (reformulated) rule base. Therefore, with domain theories that lack rules, generalization is poor, and training can corrupt the original rules, even those that were initially correct. The paper presents TopGen, an extension to the KBANN algorithm, which heuristically searches for possible expansions to the KBANN network. TopGen does this by dynamically adding hidden nodes to the neural representation of the domain theory, in a manner analogous to adding rules and conjuncts to the symbolic rule base. Experiments indicate that the method is able to heuristically find effective places to add nodes to the knowledge bases of four real-world problems, as well as an artificial chess domain. The experiments also verify that new nodes must be added in an intelligent manner. The algorithm showed statistically significant improvements over the KBANN algorithm in all five domains.

16.
In this paper, a hybrid neural network model based on the integration of fuzzy ARTMAP (FAM) and the rectangular basis function network (RecBFN), capable of learning and revealing fuzzy rules, is proposed. The hybrid network is able to classify data samples incrementally and, at the same time, to extract rules directly from the network weights to justify its predictions. With regard to process systems engineering, the proposed network is applied to a fault detection and diagnosis task at a power generation station. Specifically, the efficiency of the network in monitoring the operating conditions of a circulating water (CW) system is evaluated using a set of real sensor measurements collected from the power station. The extracted rules are analyzed, discussed, and compared with those from a rule extraction method for FAM. The comparison shows that the proposed network extracts more meaningful rules, with a lower degree of rule redundancy and higher interpretability, within the neural network framework. The extracted rules also agree with experts' opinions on maintaining the CW system in the power generation plant.

17.
Five-category loan risk classification based on the integration of rough sets and neural networks
A five-category loan risk classification model integrating rough sets and neural networks is established. The model first discretizes financial data with a self-organizing map neural network and reduces the evaluation indicators with a genetic algorithm; based on the minimal reduct of indicators, it extracts discriminant rules for five-category loan risk classification and trains a BP neural network. Finally, rough set theory determines the risk grade of test samples that match the rule base, while the neural network determines the risk grade of test samples that match no rule in the rule base. An empirical study on 698 five-category-classified samples from a database of borrowing enterprises shows that the integrated rough-set and neural-network model achieves a predictive accuracy of 82.07%, making it an effective tool for five-category loan risk evaluation.
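The division of labour described above, rules classify the samples they match and the neural network handles the rest, amounts to a simple dispatch. The toy rule table and the stand-in for the trained BP network below are both invented for illustration; the paper's actual indicators and grades are not reproduced.

```python
# Toy rule base: attribute conditions -> risk grade (invented for illustration).
rules = [
    ({"debt_ratio": "high", "profit": "low"}, "loss"),
    ({"debt_ratio": "low", "profit": "high"}, "normal"),
]

def bp_network_stub(sample):
    """Stand-in for the trained BP network used when no rule matches."""
    return "substandard"

def classify(sample):
    for conds, grade in rules:
        if all(sample.get(k) == v for k, v in conds.items()):
            return grade                  # the rule base decides matched samples
    return bp_network_stub(sample)        # the network decides unmatched samples

print(classify({"debt_ratio": "high", "profit": "low"}))   # matched by a rule
print(classify({"debt_ratio": "high", "profit": "high"}))  # falls through to the network
```

The appeal of the design is that rule-based decisions stay interpretable, while the network covers the cases the reduct-derived rules do not reach.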

18.
Chaotic dynamics and control in neural networks
This paper adopts an associative memory neural network composed of chaotic neurons. Based on this chaotic neural network, its nonlinear dynamical properties, chaotic attractor trajectories, and sensitivity to initial conditions are studied, and dynamic associative memory is realized. When memory is lost because the network input is strongly perturbed, the pinning feedback method for controlling chaos in spatiotemporal systems is applied to restore the memory. These results are confirmed by simulation experiments on dynamic memory and recovery control for induction motor faults. The results indicate that, among current research on chaos control in neural networks, pinning feedback control of spatiotemporal systems is a method worth recommending; chaos control enlarges the fault tolerance of the network and thus improves the practicality of chaotic neural networks, with broad prospective applications in engineering areas such as complex pattern recognition and image processing.
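Pinning feedback control in its simplest form adds a local feedback term that captures the chaotic orbit once it wanders near an unstable target state and then holds it there. This sketch uses a single logistic map rather than the paper's chaotic neural network; the gain and capture window are illustrative choices (the gain is picked so that the controlled multiplier f'(x*) + GAIN is near zero).

```python
r = 3.9
target = 1 - 1 / r        # unstable fixed point of the logistic map x -> r*x*(1-x)
GAIN = 1.9                # f'(target) = 2 - r = -1.9, so f'(target) + GAIN = 0

x, captured = 0.3, False
for _ in range(5000):
    if captured or abs(x - target) < 0.1:
        captured = True                   # keep the feedback on once the orbit is close
        u = GAIN * (x - target)           # local ("pinned") feedback term
    else:
        u = 0.0                           # far from the target: free chaotic motion
    x = r * x * (1 - x) + u

print(x, target)   # the orbit has been pinned onto the formerly unstable state
```

Restoring a lost memory in the paper plays the role of `target` here: the feedback is switched on only around the desired state, so the rest of the dynamics is left untouched.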

19.
This paper first surveys the main neural network models that can realize chaotic dynamics and the basic principles by which they produce complex behavior such as chaos synchronization, chaotic sequences, and chaotic attractors. It then describes how chaos synchronization, chaotic trajectory sequences, and chaotic attractors can be exploited to implement encryption algorithms for secure communication. Finally, it summarizes open problems concerning the chaotic properties of neural networks and their application to encrypted communication.
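The chaotic-sequence route to encryption can be sketched with a logistic-map keystream XORed against the plaintext. This is a toy stream cipher, not any scheme from the surveyed literature and not secure in practice; the map parameter, burn-in length, and key value are illustrative assumptions.

```python
def keystream(key, n, r=3.99):
    """Derive n pseudo-random bytes from a logistic-map orbit seeded by `key` in (0, 1)."""
    x = key
    for _ in range(100):                 # burn-in so nearby keys diverge
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)   # quantize the orbit to a byte
    return out

def crypt(data, key):
    """XOR with the chaotic keystream; the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream(key, len(data))))

msg = b"chaotic neural nets"
ct = crypt(msg, key=0.123456)
print(ct)
print(crypt(ct, key=0.123456))           # round-trips back to the plaintext
```

Sensitivity to initial conditions is what makes the keystream key-dependent: a receiver with even a slightly different key value reconstructs a completely different byte sequence after the burn-in.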

20.
This letter studies the impact of iterative Hebbian learning algorithms on a recurrent neural network's underlying dynamics. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information to be stored as limit-cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides "on its own" the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge enhancements in storage capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more "respectful" of the network's intrinsic dynamics, maintains much more structure in the obtained chaos. It is still possible to observe traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as "frustrated chaos."
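As background, the classical Hopfield scheme that the letter improves on, attractors indexed only by initial conditions, can be sketched as follows (the network size and the two stored patterns are illustrative):

```python
N = 16
p1 = [1] * 8 + [-1] * 8                  # two orthogonal stored patterns
p2 = [1, -1] * 8

# Hebbian outer-product weights with zero diagonal.
W = [[0.0 if i == j else (p1[i] * p1[j] + p2[i] * p2[j]) / N
      for j in range(N)] for i in range(N)]

def recall(state, sweeps=5):
    """Iterate x -> sign(W x) until the state settles into an attractor."""
    x = list(state)
    for _ in range(sweeps):
        x = [1 if sum(W[i][j] * x[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return x

cue = list(p1)
cue[0], cue[1] = -cue[0], -cue[1]        # corrupt two bits of the first pattern
print(recall(cue) == p1)                 # the corrupted cue falls back into its attractor
```

In the letter's scheme, by contrast, a persistent external stimulus selects the attractor instead of the initial condition, which is what reshapes the whole internal dynamics rather than just the starting basin.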


Copyright©北京勤云科技发展有限公司  京ICP备09084417号