Similar Documents
20 similar documents found (search time: 62 ms)
1.
Learning in the multiple class random neural network
Spiked recurrent neural networks with "multiple classes" of signals have been recently introduced by Gelenbe and Fourneau (1999), as an extension of the recurrent spiked random neural network introduced by Gelenbe (1989). These new networks can represent interconnected neurons, which simultaneously process multiple streams of data such as the color information of images, or networks which simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm which applies both to recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O([nC]^3) complexity for the recurrent case, and O([nC]^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. This approach is illustrated with various synthetic and natural textures.
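For context, the quantities this learning algorithm differentiates are the stationary activation probabilities of Gelenbe's random neural network. The following minimal Python sketch (our illustration, not the authors' code; it shows the single-class case, which the MCRNN generalizes to C signal classes) computes those probabilities q_i by fixed-point iteration:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, max_iter=500, tol=1e-10):
    """Fixed-point iteration for the stationary firing probabilities q_i of a
    single-class Gelenbe random neural network (a sketch; the MCRNN of the
    paper generalizes this to nC coupled equations over C signal classes).

    W_plus[i, j]  : rate of excitatory spikes from neuron i to neuron j
    W_minus[i, j] : rate of inhibitory spikes from neuron i to neuron j
    Lambda[i]     : external excitatory arrival rate at neuron i
    lam[i]        : external inhibitory arrival rate at neuron i
    """
    n = len(Lambda)
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)  # total firing rate of each neuron
    q = np.zeros(n)
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrivals at each neuron
        lam_minus = lam + q @ W_minus     # total inhibitory arrivals at each neuron
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

# Tiny example: 3 neurons with random nonnegative weights.
rng = np.random.default_rng(0)
W_plus = rng.uniform(0, 1, (3, 3)); np.fill_diagonal(W_plus, 0)
W_minus = rng.uniform(0, 1, (3, 3)); np.fill_diagonal(W_minus, 0)
print(rnn_steady_state(W_plus, W_minus, Lambda=np.ones(3), lam=np.zeros(3)))
```

In the multiple-class model the same iteration runs over nC coupled equations, which is where the O([nC]^3) cost of each recurrent learning step comes from.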

2.
Gelenbe has modeled neural networks using an analogy with queuing theory. This model (called the Random Neural Network) calculates the probability of activation of the neurons in the network. Recently, Fourneau and Gelenbe have proposed an extension of this model, called the multiple classes random neural network model. The purpose of this paper is to describe the use of the multiple classes random neural network model to learn patterns with different colors. We propose a learning algorithm for the recognition of color patterns based on the nonlinear equations of the multiple classes random neural network model, using gradient descent on a quadratic error function. In addition, we propose a progressive retrieval process with adaptive threshold values. The experimental evaluation shows that the learning algorithm provides good results.

3.
It is well known that information processing in the brain depends on systems of neurons. Simple neuron systems are neural networks, and their learning methods have been studied. However, we believe that research on large-scale neural network systems is still incomplete. Here, we propose a learning method for millions of neurons as resources for a neuron computer. The method is a type of recurrent path-selection, so the target neural network must have a nested structure. This method executes at high speed. When information processing is carried out with analogue signals, the accumulation of errors is a serious problem. We equipped a neural network with a digitizer and AD/DA (analogue-to-digital/digital-to-analogue) converters constructed of neurons. They retain all information signals and guarantee precision in complex operations. Using these techniques, we generated an image shifter constructed of 8.6 million neurons. We believe that there is the potential to design a neuron computer using this scheme. This work was presented in part at the Fifth International Symposium on Artificial Life and Robotics, Oita, Japan, January 26–28, 2000.

4.
Rough Neural Computing in Signal Analysis
This paper introduces an application of a particular form of rough neural computing in signal analysis. The form of rough neural network used in this study is based on rough sets, rough membership functions, and decision rules. Two forms of neurons are found in such a network: rough membership function neurons and decider neurons. Each rough membership function neuron constructs upper and lower approximation equivalence classes in response to input signals as an aid to classifying inputs. In this paper, the output of a rough membership function neuron results from the computation performed by a rough membership function in determining the degree of overlap between an upper approximation set representing approximate knowledge about inputs and a set of measurements representing certain knowledge about a particular class of objects. Decider neurons implement granules derived from decision rules extracted from data sets using rough set theory. A decider neuron instantiates approximate reasoning in assessing rough membership function values gleaned from input data. A brief introduction to the basic concepts underlying rough membership neural networks is given. An application of rough neural computing to classifying power system faults is considered.
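As a concrete anchor for the neuron described above, the rough membership function is simply the fraction of an input's equivalence class that falls inside the target set. A minimal Python sketch (our illustration; the attribute quantization and the "fault" class below are hypothetical, not the paper's data):

```python
def rough_membership(x, universe, target_set, indiscernible):
    """Rough membership mu(x) = |[x] ∩ X| / |[x]|, where [x] is the
    equivalence class of x under the indiscernibility relation.
    `indiscernible(a, b)` is assumed to be an equivalence predicate."""
    eq_class = {u for u in universe if indiscernible(x, u)}
    return len(eq_class & target_set) / len(eq_class)

# Example: objects are indiscernible when they agree on a quantized attribute.
universe = {0.1, 0.15, 0.2, 0.55, 0.6, 0.9}
faulty = {0.55, 0.6, 0.9}                         # hypothetical "fault" class
same_bin = lambda a, b: int(a * 2) == int(b * 2)  # coarse quantization
print(rough_membership(0.6, universe, faulty, same_bin))  # 1.0: certain member
print(rough_membership(0.1, universe, faulty, same_bin))  # 0.0: certainly not
```

Values strictly between 0 and 1 mark the boundary region, which is what the upper and lower approximations in the neuron capture.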

5.
Since Hopfield's seminal work on energy functions for neural networks and their consequences for the approximate solution of optimization problems, much attention has been devoted to neural heuristics for combinatorial optimization. These heuristics are often very time-consuming because of the need for randomization or Monte Carlo simulation during the search for solutions. In this paper, we propose a general energy function for a new neural model, the random neural model of Gelenbe. This model specifies a scheme of interaction between the neurons rather than a dynamic equation for the system. We then apply this general energy function to different optimization problems.
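For readers who have not seen the Hopfield-style setup this builds on, here is a minimal sketch (our illustration of the classical quadratic energy and single-flip descent; the paper derives an analogous energy for Gelenbe's random neural model, not this one):

```python
import numpy as np

def hopfield_energy(W, theta, s):
    """Classical Hopfield energy E(s) = -0.5 * s^T W s + theta . s for
    binary states s in {0,1}^n (illustrative; the paper's energy is
    defined over Gelenbe's random neural model instead)."""
    return -0.5 * s @ W @ s + theta @ s

def greedy_descent(W, theta, s, sweeps=20):
    """Asynchronous single-flip descent: flip a unit whenever it lowers E."""
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            flipped = s.copy(); flipped[i] = 1 - flipped[i]
            if hopfield_energy(W, theta, flipped) < hopfield_energy(W, theta, s):
                s = flipped
    return s

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
s = rng.integers(0, 2, 6).astype(float)
print(greedy_descent(W, np.zeros(6), s))  # a local minimum of E
```

Encoding a combinatorial problem means choosing W and theta so that low-energy states correspond to good solutions; the paper's contribution is doing this within the random neural model.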

6.
王治忠, 庞晨. 《计算机应用》, 2020, 40(3): 832-836
To address the problem of decoding visual input from neuronal response signals, a method for reconstructing visual input from neuronal action potential (spike) signals was proposed. First, spike signals of pigeon optic tectum (OT) neurons were recorded and spike firing-rate features were extracted. Then, a linear inverse filter and a convolutional neural network reconstruction model were built to reconstruct the visual input. Finally, parameters such as the number of channels, the time window, the data length, and the delay time were optimized. Under identical parameter settings, the cross-correlation coefficient of images reconstructed with the linear inverse filter reached 0.9107 ± 0.0219, and that of images reconstructed with the convolutional neural network model reached 0.9271 ± 0.0176. The reconstruction results show that extracting neuronal spike firing-rate features and applying the linear inverse filter and convolutional neural network reconstruction models can effectively reconstruct the visual input.
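As a hedged illustration of the first reconstruction stage (the names and the ridge-regularized least-squares fit are our assumptions; the paper may fit its filter differently), a linear inverse filter maps windowed spike-rate features to pixel intensities:

```python
import numpy as np

def fit_linear_inverse_filter(R, S, ridge=1e-3):
    """Least-squares decoder mapping spike-rate features to stimuli.
    R : (trials, channels*lags) spike firing-rate feature matrix
    S : (trials, pixels) visual stimuli shown on those trials
    Returns F with S ≈ R @ F; the ridge term keeps the inversion stable."""
    d = R.shape[1]
    return np.linalg.solve(R.T @ R + ridge * np.eye(d), R.T @ S)

# Synthetic check: recover a known linear encoding from noisy rates.
rng = np.random.default_rng(0)
S_true = rng.uniform(0, 1, (200, 16))            # 200 stimuli, 4x4 "images"
encode = rng.normal(size=(16, 32))               # hypothetical rate encoding
R = S_true @ encode + 0.05 * rng.normal(size=(200, 32))
F = fit_linear_inverse_filter(R, S_true)
S_hat = R @ F
corr = np.corrcoef(S_hat.ravel(), S_true.ravel())[0, 1]
print(f"cross-correlation of reconstruction: {corr:.3f}")
```

The cross-correlation coefficient printed here is the same figure of merit the paper reports (0.9107 for the linear filter, 0.9271 for the CNN model).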

7.
Classical statistical techniques for prediction reach their limitations in applications with nonlinearities in the data set; neural models, however, can counteract these limitations. In this paper, we present a recurrent neural model in which we associate an adaptive time constant with each neuron-like unit, together with a learning algorithm to train these dynamic recurrent networks. We test the network by training it to predict the Mackey-Glass chaotic signal. To evaluate the quality of the prediction, we computed the power spectra of the two signals and the associated fractional error. The results show that introducing adaptive time constants associated with each neuron of a recurrent network improves both the quality of the prediction and the dynamical features of the neural model. Such dynamic recurrent neural networks outperform time-delay neural networks.
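A minimal sketch of the model class (our Euler discretization; the paper's exact update and training rule may differ): each unit follows a leaky-integrator equation tau_i dx_i/dt = -x_i + sum_j w_ij * sigma(x_j) + I_i, with tau_i a learnable per-neuron parameter.

```python
import numpy as np

def simulate(W, tau, I, T=200, dt=0.1):
    """Euler simulation of a recurrent net of leaky-integrator units with
    per-neuron time constants tau_i (a sketch of the model class; in the
    paper the tau_i are adapted jointly with the weights W during training)."""
    n = len(tau)
    x = np.zeros(n)
    trace = []
    for _ in range(T):
        dx = (-x + np.tanh(x) @ W + I) / tau  # tau_i sets each unit's timescale
        x = x + dt * dx
        trace.append(x.copy())
    return np.array(trace)

rng = np.random.default_rng(2)
W = rng.normal(scale=0.5, size=(4, 4))
tau = np.array([0.5, 1.0, 2.0, 4.0])  # slow units integrate a longer history
out = simulate(W, tau, I=np.ones(4))
print(out[-1])
```

Making tau trainable lets the network match the mixture of timescales in a chaotic signal such as Mackey-Glass, which is the effect the paper measures via power spectra.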

8.
This letter studies the properties of random neural networks (RNNs) with state-dependent firing neurons. It is assumed that the times between successive signal emissions of a neuron depend on the neuron's potential. Under certain conditions, the networks keep the simple product form of stationary solutions and exhibit an enhanced capacity for adjusting the probability distribution of the neuron states. It is demonstrated that desired associative memory states can be stored in the networks.

9.
A recurrent stochastic binary network
Stochastic neural networks are usually built by introducing random fluctuations into the network. A natural method is to use stochastic connections rather than stochastic activation functions. We propose a new model in which each neuron has very simple functionality but all the connections are stochastic. It is shown that the stationary distribution of the network exists uniquely and is approximately a Boltzmann-Gibbs distribution. The relationship between the model and the Markov random field is discussed. New techniques to implement simulated annealing and Boltzmann learning are proposed. Simulation results on the graph bisection problem and image recognition show that the network is powerful enough to solve real-world problems.

10.
11.
Lo JT. Neural Computation, 2011, 23(10): 2626-2682
A biologically plausible low-order model (LOM) of biological neural networks is proposed. LOM is a recurrent hierarchical network of models of dendritic nodes and trees; spiking and nonspiking neurons; unsupervised, supervised covariance and accumulative learning mechanisms; feedback connections; and a scheme for maximal generalization. These component models are motivated and necessitated by making LOM learn and retrieve easily without differentiation, optimization, or iteration, and cluster, detect, and recognize multiple and hierarchical corrupted, distorted, and occluded temporal and spatial patterns. Four models of dendritic nodes are given, all described by a hyperbolic polynomial that acts like an exclusive-OR logic gate when the model dendritic node receives two binary digits as input. A model dendritic encoder, which is a network of model dendritic nodes, encodes its inputs such that the resultant codes have an orthogonality property. Such codes are stored in synapses by unsupervised covariance learning, supervised covariance learning, or unsupervised accumulative learning, depending on the type of postsynaptic neuron. A masking matrix for a dendritic tree, whose upper part comprises model dendritic encoders, enables maximal generalization on corrupted, distorted, and occluded data. It is a mathematical organization and idealization of dendritic trees with overlapped and nested input vectors. A model nonspiking neuron transmits inhibitory graded signals to modulate its neighboring model spiking neurons. Model spiking neurons evaluate the subjective probability distribution (SPD) of the labels of the inputs to model dendritic encoders and generate spike trains with such SPDs as firing rates. Feedback connections from the same or higher layers with different numbers of unit-delay devices reflect different signal traveling times, enabling LOM to fully utilize temporally and spatially associated information. The biological plausibility of the component models is discussed. Numerical examples are given to demonstrate how LOM operates in retrieving, generalizing, and unsupervised and supervised learning.
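One concrete piece of the model is easy to check: a hyperbolic polynomial such as u + v - 2uv reduces to exclusive-OR on binary inputs. A minimal sketch (our illustration; LOM's four dendritic-node variants differ in their exact form):

```python
def dendritic_node(u, v):
    """Hyperbolic polynomial node: u + v - 2*u*v. On binary inputs this is
    exactly XOR, the gate-like behavior the abstract ascribes to model
    dendritic nodes (one possible form; the paper gives four variants)."""
    return u + v - 2 * u * v

for u in (0, 1):
    for v in (0, 1):
        print(u, v, "->", dendritic_node(u, v))  # prints the XOR truth table
```

Because the polynomial is smooth, the same node also produces graded outputs for non-binary inputs, which is what lets networks of such nodes encode continuous data.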

12.
To address the accurate and real-time recovery of sparse signals, a compressed-sensing signal-recovery method based on neurodynamic optimization is proposed. A recurrent neural network (RNN) model is introduced to solve the l1-norm minimization problem, and the steady-state solution of the RNN is computed to recover the sparse signal. Tests of different methods show that the proposed method requires the fewest observations to recover sparse signals, and it generalizes to the recovery of compressed images, where it achieves a higher signal-to-noise ratio. The RNN model is also well suited to parallel implementation; GPU parallel computation yields a speedup of more than one hundred times. Compared with traditional methods, the proposed method not only recovers signals more accurately but also offers stronger real-time processing capability.
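A minimal sketch of the steady-state idea (our illustration using the locally competitive algorithm, a standard neurodynamic l1 solver; the paper's RNN model may differ in form): integrate a soft-thresholding recurrent dynamic to convergence and read the sparse code from the steady state.

```python
import numpy as np

def soft(u, lam):
    """Soft-thresholding activation used for l1-norm minimization."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_recover(Phi, y, lam=0.05, dt=0.05, steps=3000):
    """Locally competitive algorithm: a recurrent network whose steady state
    solves min_a 0.5*||y - Phi a||^2 + lam*||a||_1 (a standard neurodynamic
    sparse solver, shown here to illustrate the approach)."""
    u = np.zeros(Phi.shape[1])
    G = Phi.T @ Phi - np.eye(Phi.shape[1])  # lateral inhibition weights
    b = Phi.T @ y                           # feedforward drive
    for _ in range(steps):
        a = soft(u, lam)
        u = u + dt * (b - u - G @ a)        # RNN state relaxes to steady state
    return soft(u, lam)

rng = np.random.default_rng(3)
Phi = rng.normal(size=(40, 100)) / np.sqrt(40)        # 40 observations, dim 100
x = np.zeros(100); x[[5, 42, 77]] = [1.0, -0.8, 0.5]  # 3-sparse signal
a_hat = lca_recover(Phi, Phi @ x)
print(np.flatnonzero(np.abs(a_hat) > 0.1))  # should recover indices 5, 42, 77
```

Each update step is a dense matrix-vector product, which is why this kind of dynamic parallelizes so well on a GPU.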

13.
We studied a simple random recurrent inhibitory network. Despite its simplicity, the dynamics were so rich that the activity patterns of the neurons evolved over time without repeating, owing to the random recurrent connections among the neurons. The sequence of activity patterns was generated by the trigger of an external signal, and the generation was stable against noise. Moreover, the same sequence was reproducible using a strong transient signal; that is, the sequence generation could be reset. Therefore, the time elapsed since the trigger of an external signal could be represented by the sequence of activity patterns, suggesting that this model could work as an internal clock. The model could generate different sequences of activity patterns when provided with different external signals; thus, spatiotemporal information could be represented by this model. Moreover, it was possible to speed up and slow down the sequence generation.

14.
Anticipatory neural activity preceding behaviorally important events has been reported in cortex, striatum, and midbrain dopamine neurons. Whereas dopamine neurons are phasically activated by reward-predictive stimuli, anticipatory activity of cortical and striatal neurons is increased during delay periods before important events. Characteristics of dopamine neuron activity resemble those of the prediction error signal of the temporal difference (TD) model of Pavlovian learning (Sutton & Barto, 1990). This study demonstrates that the prediction signal of the TD model reproduces characteristics of cortical and striatal anticipatory neural activity. This finding suggests that tonic anticipatory activities may reflect prediction signals that are involved in the processing of dopamine neuron activity.
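The TD prediction error referred to here is delta_t = r_t + gamma*V(t+1) - V(t). A minimal sketch (our implementation of the standard TD(0) rule with a complete serial compound stimulus representation, as in Sutton & Barto, 1990; the trial structure and parameters are illustrative assumptions) shows the learned prediction V ramping up before reward, mirroring tonic anticipatory activity:

```python
import numpy as np

# TD(0) learning over a trial of T time steps: a stimulus at t=2 predicts
# a reward at t=8. Each time step has its own weight (a "complete serial
# compound" representation), so V(t) = w[t].
T, gamma, alpha = 10, 0.98, 0.1
w = np.zeros(T)
for trial in range(500):
    r = np.zeros(T); r[8] = 1.0              # reward late in the trial
    for t in range(2, T - 1):                # stimulus onset at t=2
        delta = r[t] + gamma * w[t + 1] - w[t]  # TD prediction error
        w[t] += alpha * delta
print("prediction V(t):", np.round(w, 2))    # ramps up toward reward time
```

After learning, delta is large only at stimulus onset (the phasic, dopamine-like signal), while V(t) rises steadily through the delay (the tonic, cortical/striatal-like signal).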

15.
This paper proposes a scale-free highly clustered echo state network (SHESN). We designed the SHESN to include a naturally evolving state reservoir according to incremental growth rules that account for the following features: (1) short characteristic path length, (2) high clustering coefficient, (3) scale-free distribution, and (4) hierarchical and distributed architecture. This new state reservoir contains a large number of internal neurons that are sparsely interconnected in the form of domains. Each domain comprises one backbone neuron and a number of local neurons around this backbone. Such a natural and efficient recurrent neural system essentially interpolates between the completely regular Elman network and the completely random echo state network (ESN) proposed by Jaeger et al. We investigated the collective characteristics of the proposed complex network model. We also successfully applied it to challenging problems such as the Mackey-Glass (MG) dynamic system and laser time-series prediction. Compared to the ESN, our experimental results show that the SHESN model has a significantly enhanced echo state property and better performance in approximating highly complex nonlinear dynamics. In short, this large-scale dynamic complex network reflects natural characteristics of biological neural systems in many respects, such as the power law, the small-world property, and hierarchical architecture. It should have strong computing power, fast signal propagation speed, and coherent synchronization.
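For orientation, the plain ESN baseline that SHESN improves on fits in a few lines; the reservoir topology is the only part SHESN changes, while the ridge-regression readout stays the same. A minimal sketch (our illustrative code, not the authors'; the toy sine signal stands in for Mackey-Glass):

```python
import numpy as np

rng = np.random.default_rng(4)
n, washout = 300, 100

# Plain ESN reservoir: sparse random W, rescaled so the spectral radius < 1
# (SHESN instead grows a scale-free, highly clustered reservoir topology).
W = rng.normal(size=(n, n)) * (rng.random((n, n)) < 0.05)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, n)

u = np.sin(np.arange(1200) * 0.2)        # toy signal; the paper uses Mackey-Glass
x = np.zeros(n); X, Y = [], []
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])     # driven reservoir state
    if t >= washout:                     # discard the initial transient
        X.append(x.copy()); Y.append(u[t + 1])
X, Y = np.array(X), np.array(Y)

# Ridge-regression readout trained to predict the next sample.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n), X.T @ Y)
print("one-step NRMSE:", np.sqrt(np.mean((X @ W_out - Y) ** 2)) / np.std(Y))
```

Only the internal connections are trained-free; swapping the random W for a grown, domain-structured topology is the SHESN contribution.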

16.
Here, the formation of continuous attractor dynamics in a nonlinear recurrent neural network is used to achieve nonlinear speech denoising, in order to implement robust phoneme recognition and information retrieval. Attractor dynamics are first formed in the recurrent neural network by training the clean speech subspace as the continuous attractor. The network is then used to recognize noisy speech with both stationary and nonstationary noise. In this work, the efficiency of a nonlinear feedforward network is compared to that of the same network with a recurrent connection in its hidden layer. The structure and training of this recurrent connection are designed in such a way that the network learns to denoise the signal step by step, using the properties of the attractors it has formed, along with phone recognition. Using these connections, the recognition accuracy is improved by 21% for the stationary signal and by 14% for the nonstationary one at 0 dB SNR, with respect to a reference model, which is a feedforward neural network.

17.
吴浩瀚, 金福江, 赖联有, 汪亮. 《自动化学报》, 2014, 40(10): 2370-2376
For the one-step prediction problem, this paper proposes a new adaptive filtering algorithm in which a neural network modulates the potential function of the Schrödinger equation. The algorithm, known as the recurrent quantum neural network (RQNN), can filter nonstationary noise embedded in a real signal without any prior information about the signal or the noise. Simulation comparisons between the RQNN and the RLS algorithm show that the RQNN is more accurate and more adaptive when filtering Gaussian stationary noise, Gaussian nonstationary noise, or non-Gaussian stationary noise embedded in DC, sinusoidal, step, and speech signals. The experimental results show that when filtering Gaussian noise from a sinusoidal signal, the RQNN improves the output SNR over the input SNR by 20 dB, which is 10 dB higher than the RLS filter achieves.

18.

A design for an analog modular neuron based on the memristor is proposed here. Since neural networks are built by repeating basic blocks called neurons, modular neurons are essential for neural network hardware. In this work, modularity of the neuron is achieved through a distributed-neuron structure. Major challenges in implementing the synaptic operation are weight programmability, multiplication of the input signal by the weight, and nonvolatile weight storage. The memristor bridge synapse addresses all of these challenges. The proposed neuron is a modular neuron based on the distributed-neuron structure, and it uses the memristor bridge synapse for its synaptic operations. To verify correct operation, the proposed neuron is used in a real-world neural network application. An off-chip method is used to train the network. The results show 86.7% correct classification and a mean square error of about 0.0695 for a 4-5-3 neural network built from the proposed modular neuron.
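A back-of-the-envelope sketch of the memristor bridge synapse idea (our illustration; the component values and the ideal voltage-divider model are assumptions, not the paper's circuit): four memristors in a Wheatstone-bridge arrangement give a differential output whose sign and magnitude, i.e. the synaptic weight, are programmed by shifting the memristances.

```python
def bridge_output(v_in, M1, M2, M3, M4):
    """Memristor bridge synapse modeled as two voltage dividers (M1-M2 and
    M3-M4). The differential output implements weight * v_in, where
    w = M2/(M1+M2) - M4/(M3+M4) lies in (-1, 1) and can be positive or
    negative; programming the memristances sets the stored weight."""
    return (M2 / (M1 + M2) - M4 / (M3 + M4)) * v_in

# Balanced bridge -> weight 0; unbalancing it yields +/- weights.
print(bridge_output(1.0, 10e3, 10e3, 10e3, 10e3))   # 0.0  (zero weight)
print(bridge_output(1.0, 5e3, 15e3, 15e3, 5e3))     # 0.5  (excitatory)
print(bridge_output(1.0, 15e3, 5e3, 5e3, 15e3))     # -0.5 (inhibitory)
```

This single structure covers all three challenges the abstract lists: the weight is programmable (by writing the memristances), multiplies the input signal in analog, and is stored nonvolatilely in the devices themselves.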

19.
In this paper, we describe a new Synaptic Plasticity Activity Rule (SAPR) developed for use in networks of spiking neurons. Such networks can be used for simulations of physiological experiments as well as for other computations such as image analysis. Most synaptic plasticity rules use artificially defined functions to modify synaptic connection strengths. In contrast, our rule makes use of the existing postsynaptic potential values to compute the value of the adjustment. The network of spiking neurons we consider consists of excitatory and inhibitory neurons. Each neuron is implemented as an integrate-and-fire model that accurately mimics the behavior of biological neurons. To test the performance of our new plasticity rule, we designed a model of a biologically inspired signal processing system and used it for object detection in eye images of diabetic retinopathy patients and lung images of cystic fibrosis patients. The results show that the network detects the edges of objects within an image, essentially segmenting it. Our ultimate goal, however, is not the development of an image segmentation tool that would be more efficient than nonbiological algorithms, but the development of a physiologically correct neural network model that could be applied to a wide range of neurological experiments. We decided to validate the SAPR by using it in a network of spiking neurons for image segmentation because the results are easy to assess visually. Importantly, the image segmentation is done in an entirely unsupervised way.
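As a sketch of the neuron model such a network is built from (the standard leaky integrate-and-fire dynamics; the parameter values here are illustrative, not the paper's):

```python
import numpy as np

def lif_spike_train(I, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
                    v_reset=-70.0, R=1.0):
    """Leaky integrate-and-fire neuron: tau dv/dt = -(v - v_rest) + R*I(t);
    emit a spike and reset when v crosses threshold. Times are in ms,
    voltages in mV (illustrative parameters)."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * i_t)  # membrane integration
        if v >= v_thresh:
            spikes.append(t * dt)                  # record spike time
            v = v_reset                            # reset after firing
    return spikes

I = np.full(1000, 20.0)        # constant suprathreshold input current
print(lif_spike_train(I)[:5])  # first few spike times (ms)
```

A plasticity rule like SAPR then adjusts synaptic strengths using the postsynaptic potential trace that this integration already maintains, rather than an externally defined learning curve.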

20.
Tao Ye, Xuefeng Zhu. Neurocomputing, 2011, 74(6): 906-915
The process neural network (PrNN) is an ANN model suited to learning problems with signal inputs; its elementary unit is the process neuron (PN), an emerging neuron model. There is an essential difference between the process neuron and traditional neurons, but there is also a relation between them: the former can be approximated by the latter to any precision. First, the PN model and some PrNNs are introduced briefly. Then, two PN approximation theorems are presented and proved in detail. Each theorem gives an approximating model for the PN model: the time-domain feature expansion model and the orthogonal decomposition feature expansion model. Some corollaries are given for PrNNs based on these two theorems. Thereafter, simulation studies are performed on some simulated signal sets and a real dataset. The results show that the PrNN can effectively suppress noise polluting the signals and generalizes quite well. Finally, some problems concerning PrNNs are discussed and further research directions are suggested.
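A minimal sketch of the process neuron and of the time-domain feature-expansion idea (our discretization; the paper treats continuous-time signal inputs and proves the approximation rigorously): the PN aggregates a weighted time integral of its input signal before applying the activation, and sampling that integral on a grid reduces it to an ordinary weighted-sum neuron over the sampled features.

```python
import numpy as np

def process_neuron(x_t, w_t, dt, theta=0.0, f=np.tanh):
    """Process neuron: y = f( integral_0^T w(t) x(t) dt - theta ).
    x_t and w_t are the input signal and the weight *function* sampled on a
    time grid; the Riemann sum below is precisely the time-domain feature
    expansion, turning the PN into an ordinary neuron on sampled features."""
    return f(np.sum(w_t * x_t) * dt - theta)

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
x_t = np.sin(2 * np.pi * t)   # an input *signal*, not a static feature vector
w_t = np.exp(-t)              # weight function defined over the time axis
print(process_neuron(x_t, w_t, dt))
```

Refining the grid makes the sum converge to the integral, which is the intuition behind approximating a PN with traditional neurons to any precision.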
