Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to achieve nonlinear speech denoising, in order to implement robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training on the clean speech subspace as the continuous attractor. The network is then used to recognize noisy speech containing both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared with that of the same network augmented by a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed so that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, alongside phone recognition. Using these connections, recognition accuracy improves by 21% for the stationary noise and by 14% for the nonstationary noise at 0 dB SNR, relative to the reference feedforward model.

2.
The speed of processing in the visual cortical areas can be fast, with, for example, the latency of neuronal responses increasing by only approximately 10 ms per area along the ventral visual sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple-layer networks using a four-stage feedforward network of integrate-and-fire neurons modelled with continuous dynamics, with associative synaptic connections between stages having a synaptic time constant of 10 ms. With this continuous-dynamics implementation, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feedforward processing, the contribution of local recurrent feedback remained useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms.
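The latency effect is easy to illustrate in simulation. The following sketch (illustrative parameters only, not the authors' model) drives a four-stage feedforward chain of leaky integrate-and-fire neurons through synapses with a 10 ms time constant and reports each stage's first-spike latency; layer size, weight scale and input drive are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_layers, n_cells = 4, 100
dt, t_max = 0.1, 300.0                        # ms
tau_m, tau_syn = 20.0, 10.0                   # membrane / synaptic constants (ms)
v_thresh, v_reset = 1.0, 0.0

# Assumed random feedforward weights between consecutive stages.
W = [rng.uniform(0, 0.08, (n_cells, n_cells)) for _ in range(n_layers - 1)]
v = np.zeros((n_layers, n_cells))             # membrane potentials
syn = np.zeros((n_layers, n_cells))           # filtered synaptic input
drive = np.zeros((n_layers, n_cells))
drive[0] = 1.5                                # steady input to stage 1 from t = 0
first_spike = np.full(n_layers, np.nan)

for step in range(int(t_max / dt)):
    spikes = np.zeros((n_layers, n_cells), dtype=bool)
    for l in range(n_layers):
        v[l] += dt / tau_m * (-v[l] + drive[l] + syn[l])
        spikes[l] = v[l] >= v_thresh
        v[l][spikes[l]] = v_reset
        if spikes[l].any() and np.isnan(first_spike[l]):
            first_spike[l] = step * dt
    for l in range(1, n_layers):              # purely feedforward propagation
        syn[l] += -dt * syn[l] / tau_syn + W[l - 1] @ spikes[l - 1]

print("first-spike latency per stage (ms):", first_spike)
```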

3.
A Part-of-Speech Tagging Method for Chinese Text Based on an SRNN Neural Network
Part-of-speech tagging is a key step in the corpus-processing pipeline and a prerequisite for syntactic and semantic annotation and analysis. This paper proposes a part-of-speech tagging method based on an SRNN neural network. Building on the structure of a three-layer feedforward neural network, the SRNN adds feedback connections between hidden-layer nodes and state nodes in the input layer; this structure gives the network the ability to exploit contextual part-of-speech information. The paper also discusses the network's training algorithm. With manually annotated sentences as the training set, the trained network, after convergence, achieves a part-of-speech tagging accuracy of 94% on new corpora.
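For readers unfamiliar with the architecture, here is a minimal NumPy sketch of an Elman-style SRNN forward pass: a three-layer feedforward network whose hidden activations are fed back into context (state) nodes on the input side, which is what lets the tagger exploit the part-of-speech context of preceding words. The toy vocabulary, tag set and hidden width are assumptions, and training by backpropagation is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_tags, n_hidden = 50, 10, 32        # assumed toy sizes

# One-hot word input is concatenated with the previous hidden state (context).
W_in = rng.normal(0, 0.1, (n_hidden, n_words + n_hidden))
W_out = rng.normal(0, 0.1, (n_tags, n_hidden))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tag_sentence(word_ids):
    """Run the SRNN over a sentence, returning one tag distribution per word."""
    context = np.zeros(n_hidden)              # state nodes start silent
    outputs = []
    for w in word_ids:
        x = np.zeros(n_words)
        x[w] = 1.0
        h = np.tanh(W_in @ np.concatenate([x, context]))
        outputs.append(softmax(W_out @ h))
        context = h                           # feedback: hidden -> state nodes
    return np.array(outputs)

probs = tag_sentence([3, 17, 8, 42])          # a toy four-word sentence
print(probs.shape, probs.sum(axis=1))         # (4, 10); each row sums to 1
```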

4.
Principal cells of the hippocampus and of its only cortical input region, the entorhinal cortex, exhibit place-specific activity in the freely moving rat. While entorhinal cells have widely tuned place fields, hippocampal place fields are more localized and determine not only the rate but also the timing of place cell spikes. Several models have successfully explained this fine tuning by making use of intrahippocampal attractor network dynamics provided by the recurrent collaterals of hippocampal area CA3. Recent experimental evidence shows that CA1 place cells preserve their tuning curves even in the absence of input from CA3. We propose a model in which the entorhinal and hippocampal pyramidal cell populations are connected only via feedforward connections. Synaptic transmission in our system is gated by a class of interneurons that specifically inhibit the entorhino-hippocampal pathway. Theta rhythm modulates the activity of each component. Our results show that rhythmic shunting inhibition endows entorhinal cells with a novel type of temporal code conveyed by the phase jitter of individual spikes. This converts coarsely tuned place-specific activity in the entorhinal cortex into velocity-dependent postsynaptic excitation and thus provides hippocampal place cells with an input that has recently been proposed to account for their rate- and phase-coded firing. Hippocampal place fields are generated through this mechanism and are also shown to be robust against variations in the level of tonic inhibition.

5.
Some neurons encode information about the orientation or position of an animal and can maintain their response properties in the absence of visual input. Examples include head direction cells in rats and primates, place cells in rats and spatial view cells in primates. 'Continuous attractor' neural networks model these continuous physical spaces using recurrent collateral connections between the neurons which reflect the distance between the neurons in the state space (e.g. head direction space) of the animal. These networks maintain a localized packet of neuronal activity representing the current state of the animal. We show how the synaptic connections in a one-dimensional continuous attractor network (of, for example, head direction cells) could be self-organized by associative learning. We also show how the activity packet could be moved from one location to another by idiothetic (self-motion) inputs, for example vestibular or proprioceptive, and how the synaptic connections could self-organize to implement this. The models described use 'trace' associative synaptic learning rules that utilize a form of temporal average of recent cell activity to associate the firing of rotation cells with the recent change in the representation of head direction in the continuous attractor. We also show how a nonlinear neuronal activation function that could be implemented by NMDA receptors could contribute to the stability of the activity packet that represents the current state of the animal.
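The sketch below illustrates the core mechanism under stated assumptions (it is not the paper's self-organized network): a one-dimensional ring of head-direction cells with hand-prescribed distance-dependent recurrent weights, on which a localized activity packet, once seeded, sustains itself after the input is removed. The paper's contribution is to learn such weights associatively and to shift the packet with idiothetic rotation-cell input; here the weights are simply set by hand.

```python
import numpy as np

n = 100                                       # head direction cells on a ring
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Recurrent weights fall off with angular distance (local excitation) and
# include uniform inhibition; the overall gain 20/n is an assumed value.
d = np.angle(np.exp(1j * (theta[:, None] - theta[None, :])))
W = (np.exp(-d**2 / (2 * 0.3**2)) - 0.1) * 20 / n

r = np.exp(-np.angle(np.exp(1j * (theta - np.pi)))**2 / 0.1)  # seed a packet
for _ in range(300):                          # relax onto the attractor
    r += 0.1 * (-r + np.maximum(0.0, np.tanh(W @ r)))

print("packet centre (deg): %.1f" % np.degrees(theta[np.argmax(r)]))
print("cells active above 0.1:", int((r > 0.1).sum()))
```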

6.
The use of a proposed recurrent hybrid neural network to control a four-legged walking robot is investigated in this paper. A neural-network-based control system is applied to the control of the four-legged walking robot. The control system consists of four proposed neural controllers, four standard PD controllers and a four-legged planar walking robot. The proposed neural network (NN) is employed as an inverse controller of the robot. The NN has three layers: input, hybrid hidden and output layers. In addition to feedforward connections from the input layer to the hidden layer and from the hidden layer to the output layer, there are also feedback connections from the output layer to the hidden layer and from the hidden layer to itself. The reason for using a hybrid layer is that the robot's dynamics consist of linear and nonlinear parts. The results show that the proposed neural control system has superior performance in controlling the trajectory of the walking robot with a payload.

7.
We derive a recurrent neural network architecture of single cells in the primary visual cortex that dynamically improves a 2D Gabor wavelet based representation of an image by minimizing the corresponding reconstruction error via feedback connections. Furthermore, we demonstrate that the reconstruction error is a Lyapunov function of the proposed recurrent network. Our model of the primary visual cortex combines a modulatory feedforward strategy with a subtractive feedback correction to obtain an optimal coding. The fed-back error is used in our system to dynamically improve the feedforward Gabor representation of the images, in the sense that the redundancy of the feedforward representation due to the non-orthogonality of the Gabor wavelets is dynamically corrected. The redundancy of the Gabor feature representation is thus eliminated by improving the reconstruction capability of the internal representation. The dynamics therefore introduce a nonlinear correction to the standard linear representation of Gabor filters, generating a more efficient predictive coding.
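The feedback computation the abstract describes amounts to gradient descent on the reconstruction error, which is why that error is a Lyapunov function. Below is a minimal one-dimensional sketch (a random Gabor-like filter bank stands in for the model's 2D wavelets; all parameters are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 96                                 # signal length, number of filters
x = np.arange(N)
centres = rng.uniform(0, N, K)
freqs = rng.uniform(0.05, 0.45, K)
# Overcomplete, non-orthogonal bank of 1-D Gabor-like filters (columns).
G = (np.cos(2 * np.pi * freqs[None, :] * (x[:, None] - centres[None, :]))
     * np.exp(-(x[:, None] - centres[None, :])**2 / (2 * 4.0**2)))
G /= np.linalg.norm(G, axis=0)

image = rng.normal(size=N)                    # stand-in 1-D "image"
a = G.T @ image                               # feedforward Gabor responses
err0 = np.linalg.norm(image - G @ a)
eta = 1.0 / np.linalg.norm(G, 2) ** 2         # step size ensuring convergence
for _ in range(1000):
    error = image - G @ a                     # fed-back reconstruction error
    a += eta * G.T @ error                    # subtractive feedback correction
print("reconstruction error before/after: %.3f / %.3f"
      % (err0, np.linalg.norm(image - G @ a)))
```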

8.
Different models of attractor networks have been proposed to form cell assemblies. Among them, networks with a fixed synaptic matrix can be distinguished from those including learning dynamics, since the latter adapt the attractor landscape of the lateral connections according to the statistics of the presented stimuli, yielding more complex behavior. We propose a new learning rule that builds internal representations of input stimuli as attractors of neurons in a recurrent network. The dynamics of activation and synaptic adaptation are analyzed in experiments where representations for different input patterns are formed, focusing on the properties of the model as a memory system. The experimental results are presented along with a survey of different Hebbian rules proposed in the literature for attractor formation. These rules are compared with the help of a new tool, the learning map, in which LTP and LTD, as well as homo- and heterosynaptic competition, can be graphically interpreted.
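For reference, the sketch below implements the classical Hopfield outer-product rule, the simplest member of the Hebbian family such surveys cover (the paper's proposed rule and its learning-map analysis are more elaborate): patterns are imprinted as attractors, and a corrupted cue relaxes back onto the stored pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 200, 5
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))

# Outer-product Hebbian rule: LTP where pre and post agree, LTD where not.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

cue = patterns[0].copy()
flip = rng.choice(n, size=40, replace=False)  # corrupt 20% of the cue
cue[flip] *= -1

state = cue
for _ in range(10):                           # relax into the attractor
    state = np.sign(W @ state)
print("overlap with stored pattern:", state @ patterns[0] / n)
```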

9.
A new multilayer incremental neural network (MINN) architecture and its performance in the classification of biomedical images are discussed. The MINN consists of an input layer, two hidden layers and an output layer. The first stage, between the input and first hidden layer, consists of perceptrons. The number of perceptrons and their weights are determined by defining a fitness function which is maximized by a genetic algorithm (GA). The second stage involves feature vectors, which are the codewords obtained automatically after learning the first stage. The last stage consists of OR gates which combine the nodes of the second hidden layer representing the same class. Comparative performance results for the MINN and a backpropagation (BP) network indicate that the MINN yields faster learning, a much simpler network and equal or better classification performance.

10.
A computationally efficient artificial neural network (ANN) for dynamic nonlinear system identification is proposed. The major drawback of feedforward neural networks such as multilayer perceptrons (MLPs) trained with the backpropagation (BP) algorithm is that they require a large amount of computation for learning. We propose a single-layer functional-link ANN (FLANN) in which the need for a hidden layer is eliminated by expanding the input pattern with Chebyshev polynomials. The novelty of this network is that it requires much less computation than an MLP. We show its effectiveness on the problem of nonlinear dynamic system identification. In the presence of additive Gaussian noise, the performance of the proposed network is found to be similar or superior to that of an MLP. A performance comparison in terms of computational complexity has also been carried out.
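A compact sketch of the FLANN idea under assumed settings (toy plant, LMS learning rate and expansion order are illustrative): the scalar input is expanded with Chebyshev polynomials via the recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), and a single linear layer is trained on the expanded features, so no hidden layer is needed.

```python
import numpy as np

def chebyshev_expand(x, order):
    """Expand each scalar input with Chebyshev polynomials T_0..T_order,
    via the recurrence T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(order - 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=-1).reshape(x.shape[0], -1)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (500, 1))
y = np.sin(np.pi * x[:, 0]) + 0.1 * rng.normal(size=500)  # toy nonlinear plant

phi = chebyshev_expand(x, order=6)            # functional expansion of inputs
w = np.zeros(phi.shape[1])
for epoch in range(50):                       # LMS training of the single layer
    for p, t in zip(phi, y):
        w += 0.05 * (t - p @ w) * p
print("training MSE:", np.mean((phi @ w - y) ** 2))
```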

11.
We show how a Hopfield network with modifiable recurrent connections undergoing slow Hebbian learning can extract the underlying geometry of an input space. First, we use a slow/fast analysis to derive an averaged system whose dynamics derive from an energy function and therefore always converge to equilibrium points. The equilibria reflect the correlation structure of the inputs, a global object extracted through local recurrent interactions only. Second, we use numerical methods to illustrate how learning extracts the hidden geometrical structure of the inputs. Indeed, multidimensional scaling methods make it possible to project the final connectivity matrix onto a Euclidean distance matrix in a high-dimensional space, with the neurons labeled by spatial position within this space. The resulting network structure turns out to be roughly convolutional. The residual of the projection defines the nonconvolutional part of the connectivity, which is minimized in the process. We then show how restricting the dimension of the space in which the neurons live gives rise to patterns similar to cortical maps, motivating this with an energy-efficiency argument based on wire-length minimization. Finally, we show how this approach leads to the emergence of ocular dominance and orientation columns in primary visual cortex via the self-organization of recurrent rather than feedforward connections. In addition, we establish that the nonconvolutional (or long-range) connectivity is patchy and is co-aligned in the case of orientation learning.
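The numerical pipeline can be illustrated compactly. In the sketch below (an illustration with assumed parameters, not the paper's simulations), inputs are bumps at random positions on a hidden ring; slow Hebbian learning reduces, on average, to the input correlation matrix, and classical multidimensional scaling of the learned connectivity recovers the circular geometry:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_stimuli = 60, 2000
pos = np.linspace(0, 2 * np.pi, n, endpoint=False)  # hidden neuron positions

# Inputs are localized bumps at random points of a hidden ring geometry.
centres = rng.uniform(0, 2 * np.pi, n_stimuli)
d = np.angle(np.exp(1j * (pos[None, :] - centres[:, None])))
X = np.exp(-d**2 / (2 * 0.4**2))

# Slow Hebbian learning averages to the input correlation matrix.
W = X.T @ X / n_stimuli

# Classical MDS on correlation-derived distances recovers the geometry.
C = W / np.sqrt(np.outer(np.diag(W), np.diag(W)))
D2 = 2.0 * (1.0 - C)                                # squared distances
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                               # double centring
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
radii = np.linalg.norm(coords, axis=1)
print("recovered ring: radius mean %.3f, std %.3f" % (radii.mean(), radii.std()))
```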

12.
Inspired by recent studies of dendritic computation, we constructed a recurrent neural network model incorporating dendritic lateral inhibition. Our model consists of an input layer and a neuron layer that includes excitatory cells and an inhibitory cell; this inhibitory cell is activated by the pooled activities of all the excitatory cells, and it in turn inhibits each dendritic branch of the excitatory cells that receive excitation from the input layer. Dendritic nonlinear operation, consisting of branch-specifically rectified inhibition and saturation, is described by imposing nonlinear transfer functions before summation over the branches. In this model with sufficiently strong recurrent excitation, transiently presenting a stimulus that is highly correlated with the feedforward connections of one of the excitatory cells makes the corresponding cell highly active, and its activity is sustained after the stimulus is turned off, whereas all the other excitatory cells maintain low activities. By contrast, transiently presenting a stimulus that is not highly correlated with the feedforward connections of any of the excitatory cells leaves all the excitatory cells at low activities. Interestingly, this stimulus-selective sustained response is preserved over a wide range of stimulus intensities. We derive an analytical formulation of the model in the limit where individual excitatory cells have an infinite number of dendritic branches and prove the existence of an equilibrium point corresponding to the balanced low-level activity state observed in the simulations, whose stability depends solely on the signal-to-noise ratio of the stimulus. We propose this as a model of stimulus selectivity that is simultaneously self-sustaining and intensity-invariant, a combination that was difficult to achieve in conventional competitive neural networks of comparable architectural complexity. We discuss the biological relevance of the model within the general framework of computational neuroscience.

13.
Classification ability of single hidden layer feedforward neural networks
Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and that a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single hidden layer feedforward neural networks (SLFNs) with any continuous bounded nonconstant activation function, or any arbitrary bounded (continuous or not) activation function with unequal limits at infinities (not just perceptrons), can form disjoint decision regions of arbitrary shapes in multidimensional cases. SLFNs with certain unbounded activation functions can also form disjoint decision regions of arbitrary shapes.
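The claim is easy to check empirically for a bounded nonconstant activation such as the logistic sigmoid. In the hedged sketch below (illustrative data and hyperparameters, not the paper's construction), one hidden layer learns a positive class that occupies two disjoint blobs, i.e. a disconnected, non-convex decision region:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 300
# The positive class occupies two disjoint blobs with the negative class in
# between, so its decision region cannot be a single convex set.
X = np.vstack([rng.normal([-4, 0], 0.7, (n, 2)),
               rng.normal([4, 0], 0.7, (n, 2)),
               rng.normal([0, 0], 0.7, (n, 2))])
y = np.array([1] * n + [1] * n + [0] * n)

clf = MLPClassifier(hidden_layer_sizes=(20,), activation='logistic',
                    solver='lbfgs', max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))  # ~1.0: disjoint regions formed
```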

14.
《Knowledge》2006,19(5):348-355
Objects of interest are represented in the brain simultaneously in different frames of reference. Knowing the positions of one's head and eyes, for example, one can compute the body-centred position of an object from its perceived coordinates on the retinae. We propose a simple, fully trained attractor network which computes head-centred coordinates given an eye position and a perceived retinal object position. We demonstrate this system on artificial data and then apply it within a fully neurally implemented control system which visually guides a simulated robot to a table for grasping an object. The integrated system takes as input a primitive visual system with a what–where pathway which localises the target object in the visual field. The coordinate transform network considers the visually perceived object position and the camera pan–tilt angle and computes the target position in a body-centred frame of reference. This position is used by a reinforcement-trained network to dock a simulated PeopleBot robot at a table for reaching the object. Hence, neurally computing coordinate transformations with an attractor network has biological relevance and technical use for this important class of computations.

15.
We provide an analytical recurrent solution for the firing rates and cross-correlations of feedforward networks with arbitrary connectivity, excitatory or inhibitory, in response to steady-state spiking input to all neurons in the first network layer. Connections can go between any two layers as long as no loops are produced. Mean firing rates and pairwise cross-correlations of all input neurons can be chosen individually. We apply this method to study the propagation of rate and synchrony information through sample networks, addressing the current debate regarding the efficacy of rate codes versus temporal codes. Our results from applying the network solution to several examples support the following conclusions: (1) the differential propagation efficacy of rate and synchrony to higher layers of a feedforward network depends on both network and input parameters, and (2) previous modeling and simulation studies exclusively supporting either rate or temporal coding must be reconsidered in light of the limited range of network and input parameters they used. Our exact, analytical solution for feedforward networks of coincidence detectors should prove useful for further elucidating the efficacy and differential roles of rate and temporal codes across different network and input parameter ranges.

16.
This paper puts forward a novel recurrent neural network (RNN), referred to as the context layered locally recurrent neural network (CLLRNN), for dynamic system identification. The CLLRNN is a dynamic neural network which proves effective in the input–output identification of both linear and nonlinear dynamic systems. The CLLRNN is composed of one input layer, one or more hidden layers, one output layer, and one context layer that improves the ability of the network to capture the linear characteristics of the system being identified. Dynamic memory is provided by feedback connections from nodes in the first hidden layer to nodes in the context layer and, when there are two or more hidden layers, from nodes in a hidden layer to nodes in the preceding hidden layer. In addition to these feedback connections, there are self-recurrent connections in all nodes of the context and hidden layers. A dynamic backpropagation algorithm with adaptive learning rate is derived to train the CLLRNN. To demonstrate the superior properties of the proposed architecture, it is applied to identify both linear and nonlinear dynamic systems, and its efficiency is demonstrated by comparing the results with those of several existing recurrent networks and design configurations. In addition, the performance of the CLLRNN is analyzed through an experimental application to a dc motor connected to a load, showing the practicality and effectiveness of the proposed neural network. Results of the experimental application are presented to make a quantitative comparison with an existing recurrent network in the literature.
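A minimal forward-pass sketch of the described architecture (sizes and weight scales are assumed; the paper's dynamic backpropagation training is omitted): a linear context layer fed back from the first hidden layer, with self-recurrent connections in both the hidden and context nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_ctx, n_out = 1, 8, 8, 1        # assumed sizes

W_ih = rng.normal(0, 0.3, (n_hid, n_in))      # input   -> hidden
W_ch = rng.normal(0, 0.3, (n_hid, n_ctx))     # context -> hidden
W_hc = rng.normal(0, 0.3, (n_ctx, n_hid))     # hidden  -> context (feedback)
W_ho = rng.normal(0, 0.3, (n_out, n_hid))     # hidden  -> output
a_h = rng.uniform(0.1, 0.5, n_hid)            # hidden self-recurrence
a_c = rng.uniform(0.1, 0.5, n_ctx)            # context self-recurrence

def run(u_seq):
    h, c, ys = np.zeros(n_hid), np.zeros(n_ctx), []
    for u in u_seq:
        c = a_c * c + W_hc @ h                # linear context layer keeps memory
        h = np.tanh(a_h * h + W_ih @ np.atleast_1d(u) + W_ch @ c)
        ys.append(W_ho @ h)
    return np.array(ys)

print(run(np.sin(np.linspace(0, 6, 50))).shape)  # (50, 1)
```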

17.
In short-term memory networks, transient stimuli are represented by patterns of neural activity that persist long after stimulus offset. Here, we compare the performance of two prominent classes of memory networks, feedback-based attractor networks and feedforward networks, in conveying information about the amplitude of a briefly presented stimulus in the presence of Gaussian noise. Using Fisher information as a metric of memory performance, we find that the optimal form of network architecture depends strongly on assumptions about the forms of nonlinearities in the network. For purely linear networks, we find that feedforward networks outperform attractor networks because noise is continually removed from feedforward networks when signals exit the network; as a result, feedforward networks can amplify signals they receive faster than noise accumulates over time. By contrast, attractor networks must operate in a signal-attenuating regime to avoid the buildup of noise. However, if the amplification of signals is limited by a finite dynamic range of neuronal responses or if noise is reset at the time of signal arrival, as suggested by recent experiments, we find that attractor networks can outperform feedforward ones. Under a simple model in which neurons have a finite dynamic range, we find that the optimal attractor networks are forgetful if there is no mechanism for noise reduction with signal arrival but nonforgetful (perfect integrators) in the presence of a strong reset mechanism. Furthermore, we find that the maximal Fisher information for the feedforward and attractor networks exhibits power law decay as a function of time and scales linearly with the number of neurons. These results highlight prominent factors that lead to trade-offs in the memory performance of networks with different architectures and constraints, and suggest conditions under which attractor or feedforward networks may be best suited to storing information about previous stimuli.
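The linear-network comparison can be reproduced in a few lines. In the sketch below (assumed gain, noise level and duration), a feedforward chain with per-stage gain g > 1 amplifies the signal faster than the noise it has accumulated, whereas a perfect integrator (lambda = 1 attractor) passes the signal at unit gain while noise builds up in place; Fisher information is computed as (d mean / d s)^2 / variance.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, T, sigma, s, g = 5000, 30, 0.2, 1.0, 1.2

# Readout of the stage holding the signal after T feedforward steps:
# x <- g*x + noise, so the signal is amplified g^T while noise injected
# k steps ago has only been amplified g^k.
x_ff = np.full(trials, s)
for _ in range(T):
    x_ff = g * x_ff + sigma * rng.normal(size=trials)

# Attractor memory as a perfect integrator (lambda = 1): unit signal gain,
# noise accumulates additively over the delay period.
x_at = np.full(trials, s)
for _ in range(T):
    x_at = x_at + sigma * rng.normal(size=trials)

# Fisher information of a Gaussian readout: (d mean / d s)^2 / variance.
print("feedforward FI: %.2f" % (g ** (2 * T) / x_ff.var()))
print("attractor   FI: %.2f" % (1.0 / x_at.var()))
```

With these purely linear dynamics the feedforward chain wins, as the abstract states; a finite dynamic range or a noise reset at signal arrival is what can reverse the ordering.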

18.
One of the main objectives of smart homes is healthcare monitoring and assistance, especially for elderly and disabled people. An accurate prediction of inhabitant behavior is therefore very helpful for providing the required assistance. This work proposes a prediction model that achieves both high accuracy and a fast learning phase. To do so, we improve the existing extreme learning machine (ELM) model by defining a recurrent form. This form captures the temporal relationship between observations at different time steps. The new model uses feedback connections from the output layer to the input layer, which allows the output to be included in long-term prediction. A recurrent dynamic network, with feedback connections from the output of the network, is proposed to predict the future series representing the inhabitant's future activities. The resulting model, called the Recurrent Extreme Learning Machine (RELM), is able to learn human behavior and ensures a good balance between learning time and prediction accuracy. The input data are real data representing the activities of persons belonging to the first-level profile (i.e. P1) as measured by the dependency model called the Functional Autonomy Measurement System (SMAF) used in the geriatric domain. The experimental results reveal that the proposed RELM model requires minimal time during the learning phase while performing better than existing models.
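Since ELM trains only the linear readout in closed form, a recurrent variant is easy to sketch. Below is a hedged illustration of the idea (toy series and assumed sizes; not the authors' implementation): the true previous value stands in for the fed-back output during training (teacher forcing), random hidden weights stay fixed, and the readout is solved with a pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)  # toy activity series

n_hidden = 60
W = rng.normal(0, 1, (n_hidden, 2))           # random, fixed hidden weights
b = rng.normal(0, 1, n_hidden)

# Input = [current value, fed-back previous output]; during training the true
# previous value stands in for the fed-back output (teacher forcing).
X = np.stack([series[1:-1], series[:-2]], axis=1)
y = series[2:]                                # next value to predict
H = np.tanh(X @ W.T + b)                      # ELM random hidden layer
beta = np.linalg.pinv(H) @ y                  # closed-form least-squares readout

print("one-step MSE:", np.mean((H @ beta - y) ** 2))
```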

19.
To address the difficulty of building wine quality prediction models, a wine quality prediction model based on a fuzzy recurrent wavelet neural network is proposed. The physicochemical indicators of the wine and sommelier scores serve as the model's inputs and outputs, and a gradient descent algorithm is used to learn online the centers and widths of the membership function layer, the translation and dilation factors of the wavelet functions, the self-feedback weight factors, and the output-layer weights. In simulation experiments, performance is first tested on the Mackey-Glass chaotic time series, and the quality prediction model is then validated on the wine quality data from the UCI dataset. The results show that, compared with traditional feedforward neural networks such as the multilayer perceptron and radial basis function networks, the fuzzy recurrent wavelet neural network prediction model achieves higher prediction accuracy and is better suited to wine quality prediction.
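To make the architecture concrete, the sketch below shows a single recurrent wavelet unit of the general kind described (a Mexican-hat mother wavelet and all parameter values are assumptions; the fuzzy membership layer and the gradient-descent updates are omitted): the input is shifted by a translation factor a, scaled by a dilation factor b, and a self-feedback weight r feeds the unit's previous output back into its argument.

```python
import numpy as np

def mexican_hat(z):
    return (1 - z**2) * np.exp(-z**2 / 2)     # an assumed mother wavelet

a, b, r = 0.5, 1.5, 0.3                       # translation, dilation, feedback
phi_prev, outputs = 0.0, []
for u in np.sin(np.linspace(0, 12, 120)):     # a toy input sequence
    z = (u + r * phi_prev - a) / b            # self-feedback enters the argument
    phi_prev = mexican_hat(z)
    outputs.append(phi_prev)
print(len(outputs), round(max(outputs), 3))
```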

20.
Neural networks (NNs) are a familiar artificial intelligence approach widely applied in many fields and to a wide range of problems. The back propagation network (BPN) is one of the best-known NNs, comprising multilayer perceptrons (MLPs) with an error back propagation learning algorithm. The BPN typically employs multiplicative weightings for layer connections: for single connections, it combines neuron inputs linearly to produce neuron outputs. In this study, the author develops high order connections (exponent multipliers) and embeds them into the BPN. The resulting hybrid high order neural network (HHONN) accommodates both linear and high order connections. The HHONN allows an additional connection type for the BPN, permitting it to adapt to different scenarios. In this paper, learning equations for both weighting and high order connections are introduced in their general forms. A feedforward neural network with a topology of two hidden layers and one high order connection was developed and studied to confirm the improved performance of the HHONN models. Case studies, including two basic tests (a function approximation and the TC problem) and squat wall strength learning, were used to verify HHONN performance. Results show that, when the high order connection was employed anywhere except the final connection, the HHONN delivered better results than the traditional BPN. These results show that the HHONN successfully introduces high order connections into the BPN.
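The sketch below contrasts the two connection types under illustrative parameters (a hedged example, not the author's HHONN code): a standard linear BPN node versus a high order node whose inputs combine multiplicatively, each raised to a learnable exponent.

```python
import numpy as np

def linear_node(x, w):
    """Standard BPN connection: linear combination, then activation."""
    return np.tanh(x @ w)

def high_order_node(x, p):
    """Exponent-multiplier connection: inputs combine multiplicatively,
    each raised to a learnable power (inputs kept positive here)."""
    return np.tanh(np.prod(np.power(x, p), axis=-1))

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 2.0, (4, 3))             # four samples, three features
w = rng.normal(0, 0.5, 3)
p = rng.normal(0, 0.5, 3)
print(linear_node(x, w))
print(high_order_node(x, p))
# Since prod(x_i ** p_i) = exp(sum(p_i * log x_i)), the high order node is a
# linear node acting in log space, capturing multiplicative interactions that
# a purely linear combination cannot.
```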
