Similar documents
 Found 10 similar documents (search time: 140 ms)
1.
Learning in the multiple class random neural network   (Cited by: 3; self-citations: 0; citations by others: 3)
Spiked recurrent neural networks with "multiple classes" of signals were recently introduced by Gelenbe and Fourneau (1999) as an extension of the recurrent spiked random neural network introduced by Gelenbe (1989). These new networks can represent interconnected neurons that simultaneously process multiple streams of data, such as the color information of images, or networks that simultaneously process streams of data from multiple sensors. This paper introduces a learning algorithm that applies to both recurrent and feedforward multiple signal class random neural networks (MCRNNs). It is based on gradient descent optimization of a cost function. The algorithm exploits the analytical properties of the MCRNN and requires the solution of a system of nC linear and nC nonlinear equations (where C is the number of signal classes and n is the number of neurons) each time the network learns a new input-output pair. Thus, the algorithm is of O((nC)^3) complexity for the recurrent case, and O((nC)^2) for a feedforward MCRNN. Finally, we apply this learning algorithm to color texture modeling (learning), based on learning the weights of a recurrent network directly from the color texture image. The same trained recurrent network is then used to generate a synthetic texture that imitates the original. This approach is illustrated with various synthetic and natural textures.
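The inner loop of such a learning algorithm has to solve the network's steady-state equations at every training step. A minimal sketch of that sub-problem, using the single-class form of Gelenbe's excitation-probability equations (the class index is dropped here for brevity; the naming is ours, not the paper's):

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, iters=500, tol=1e-12):
    """Fixed-point iteration for the steady-state excitation probabilities
    q_i of a single-class Gelenbe random neural network:
        q_i = lambda+_i / (r_i + lambda-_i)
        lambda+_i = Lambda_i + sum_j q_j W+_ji   (excitatory arrivals)
        lambda-_i = lam_i    + sum_j q_j W-_ji   (inhibitory arrivals)
    This is the system the MCRNN learning algorithm must solve (in its
    nC-dimensional multi-class form) for each new input-output pair."""
    q = np.zeros(len(r))
    for _ in range(iters):
        lp = Lambda + q @ W_plus      # total positive-signal arrival rates
        lm = lam + q @ W_minus        # total negative-signal arrival rates
        q_new = np.clip(lp / (r + lm), 0.0, 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q

# small 3-neuron example with illustrative (made-up) rates
W_plus  = np.array([[0.0, 0.2, 0.1], [0.1, 0.0, 0.2], [0.2, 0.1, 0.0]])
W_minus = np.array([[0.0, 0.1, 0.1], [0.1, 0.0, 0.1], [0.1, 0.1, 0.0]])
Lambda  = np.array([0.5, 0.3, 0.4])   # exogenous excitatory rates
lam     = np.array([0.1, 0.1, 0.1])   # exogenous inhibitory rates
r       = np.ones(3)                  # firing rates
q = rnn_steady_state(W_plus, W_minus, Lambda, lam, r)
```

The gradient-descent weight update then differentiates q with respect to the weights, which is where the O((nC)^3) linear solve in the recurrent case comes from.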

2.
Gelenbe has proposed a neural network, called a Random Neural Network, which calculates the probability of activation of the neurons in the network. In this paper, we propose to solve the pattern recognition problem using a hybrid Genetic/Random Neural Network learning algorithm. The hybrid algorithm trains the Random Neural Network by integrating a genetic algorithm with the gradient descent rule-based learning algorithm of the Random Neural Network. This hybrid learning algorithm optimises the Random Neural Network on the basis of its topology and its weight distribution. We apply the hybrid Genetic/Random Neural Network learning algorithm to two pattern recognition problems. The first recognises or categorises alphabetic characters, and the second recognises geometric figures. We show that this model can work efficiently as an associative memory. The algorithm can recognise arbitrary pattern images, but the processing time increases rapidly.
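The hybrid scheme interleaves a population-based global search with gradient-based local refinement. A toy sketch of that structure, using a plain least-squares model as a stand-in for the Random Neural Network (the target function, rates, and population sizes here are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: recover w* for y = X @ w*; this stands in for the
# RNN's input-output training pairs.
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

def grad_step(w, lr=0.05):
    # local gradient-descent refinement, the "Random Neural Network
    # learning rule" half of the hybrid
    g = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * g

pop = rng.normal(size=(20, 3))            # initial population of weight vectors
for _ in range(100):
    pop = np.array([grad_step(w) for w in pop])   # refine every individual
    fit = np.array([mse(w) for w in pop])
    parents = pop[np.argsort(fit)[:10]]           # truncation selection (elitist)
    children = parents + rng.normal(scale=0.1, size=parents.shape)  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(w) for w in pop])]
```

The genetic layer explores topology/weight configurations globally, while the gradient steps pull each candidate toward a nearby local optimum.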

3.
Random neural networks with multiple classes of signals   (Cited by: 3; self-citations: 0; citations by others: 3)
By extending the pulsed recurrent random neural network (RNN) discussed in Gelenbe (1989, 1990, 1991), we propose a recurrent random neural network model in which each neuron processes several distinctly characterized streams of "signals" or data. The idea that neurons may be able to distinguish between the pulses they receive and use them in a distinct manner is biologically plausible. In engineering applications, the need to process different streams of information simultaneously is commonplace (e.g., in image processing, sensor fusion, or parallel processing systems). In the model we propose, each distinct stream is a class of signals in the form of spikes. Signals may arrive at a neuron from either the outside world (exogenous signals) or other neurons (endogenous signals). As a function of the signals it has received, a neuron can fire and then send signals of some class to another neuron or to the outside world. We show that the multiple signal class random model with exponential interfiring times, Poisson external signal arrivals, and Markovian signal movements between neurons has product form; this implies that the distribution of its state (i.e., the probability that each neuron of the network is excited) can be computed simply from the solution of a system of 2Cn simultaneous nonlinear equations, where C is the number of signal classes and n is the number of neurons. Here we derive the stationary solution for the multiple class model and establish necessary and sufficient conditions for the existence of the stationary solution. The recurrent random neural network model with multiple classes has already been successfully applied to image texture generation (Atalay & Gelenbe, 1992), where multiple signal classes are used to model different colors in the image.
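For orientation, the single-class product-form result that the multi-class model generalises can be sketched as follows (this is the well-known Gelenbe (1989) form; the paper's own multi-class notation carries an extra class index on every quantity):

```latex
q_i = \frac{\lambda^{+}_i}{r_i + \lambda^{-}_i}, \qquad
\lambda^{+}_i = \Lambda_i + \sum_j q_j\, r_j\, p^{+}_{ji}, \qquad
\lambda^{-}_i = \lambda_i + \sum_j q_j\, r_j\, p^{-}_{ji},
```

```latex
P(\mathbf{k}) = \prod_{i=1}^{n} (1 - q_i)\, q_i^{k_i},
```

i.e., the stationary probability of the network state factorises over neurons, which is why the whole distribution reduces to solving the coupled nonlinear equations for the q_i (2Cn of them in the multi-class case).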

4.
In this paper we propose an extended random neural network model with a multi-feedback architecture. We assume several layers in which the neurons communicate with the neurons of the neighbouring layers. We present its learning algorithm and its possible uses; specifically, we test its use in an encryption mechanism where each layer is responsible for a part of the encryption or decryption process. The multilayer random neural network is a stochastic neural model, so the entire proposed encryption model inherits that stochastic character.
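The layered structure of such a scheme can be illustrated with a toy cipher in which each "layer" contributes one keyed transformation and decryption undoes the layers in reverse. This is purely an illustrative stand-in (a hash-based keystream replaces the stochastic network state; it is not the paper's construction):

```python
import hashlib

def layer_keystream(key: bytes, layer: int, n: int) -> bytes:
    # hypothetical per-layer keystream, standing in for the state of
    # one layer of the multilayer random neural network
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + bytes([layer])
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(msg: bytes, key: bytes, layers: int = 3) -> bytes:
    data = msg
    for l in range(layers):                    # each layer handles part of the work
        ks = layer_keystream(key, l, len(data))
        data = bytes(a ^ b for a, b in zip(data, ks))
    return data

def decrypt(ct: bytes, key: bytes, layers: int = 3) -> bytes:
    data = ct
    for l in reversed(range(layers)):          # undo layers in reverse order
        ks = layer_keystream(key, l, len(data))
        data = bytes(a ^ b for a, b in zip(data, ks))
    return data
```

(With XOR layers the order of inversion is actually immaterial, but a real layered cipher with non-commuting layers must invert them in reverse, which is the structural point.)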

5.
Our aim is to build an integrated learning framework of neural networks and case-based reasoning. The main idea is that feature weights for case-based reasoning can be evaluated by neural networks. In this paper, we propose MBNR (Memory-Based Neural Reasoning), case-based reasoning with local feature weighting by a neural network. In our method, the neural network guides the case-based reasoning by providing case-specific weights to the learning process. We developed a learning algorithm to train the neural network to learn the case-specific local weighting patterns for case-based reasoning. We demonstrate the performance of our learning system on four datasets.
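The retrieval half of such a system is weighted nearest-neighbour matching over the case base. A minimal sketch, with a hand-set weight vector standing in for the weights the neural network would supply (data and weights here are illustrative assumptions):

```python
import numpy as np

def weighted_nn_predict(x, cases_X, cases_y, weights):
    """1-nearest-neighbour case retrieval with per-feature weights,
    the case-based-reasoning half of an MBNR-style system.  In MBNR the
    `weights` would come from the trained neural network, per case."""
    d = np.sqrt((((cases_X - x) ** 2) * weights).sum(axis=1))
    return int(cases_y[np.argmin(d)])

# toy case base: the class depends only on feature 0; feature 1 is
# large-magnitude noise, so good feature weights must suppress it
cases_X = np.array([[0.0, 5.0], [0.1, -4.0], [1.0, 4.9], [0.9, -5.0]])
cases_y = np.array([0, 0, 1, 1])

query = np.array([0.05, -4.9])
uniform  = weighted_nn_predict(query, cases_X, cases_y, np.array([1.0, 1.0]))
informed = weighted_nn_predict(query, cases_X, cases_y, np.array([1.0, 0.0]))
```

With uniform weights the noisy feature dominates the distance and retrieval fails; with weights that down-weight it, the correct case is retrieved, which is exactly the gap the learned weighting is meant to close.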

6.
Since Hopfield's seminal work on energy functions for neural networks and their consequences for the approximate solution of optimization problems, much attention has been devoted to neural heuristics for combinatorial optimization. These heuristics are often very time-consuming because of the need for randomization or Monte Carlo simulation during the search for solutions. In this paper, we propose a general energy function for a new neural model, the random neural model of Gelenbe. This model specifies a scheme of interaction between the neurons rather than a dynamic equation for the system. We then apply this general energy function to different optimization problems.
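The Hopfield-style idea being adapted is: encode a combinatorial problem as an energy over neuron states and descend it. A minimal deterministic sketch for max-cut on a small graph (the graph and the plain greedy descent are our illustration; the paper's contribution is recasting such an energy for Gelenbe's random model):

```python
import numpy as np

# Max-cut on a 4-cycle with one chord, written as minimisation of the
# quadratic energy E(s) = sum over edges (i,j) of s_i * s_j, s_i in {-1,+1}.
# Minimising E maximises the number of cut edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

def energy(s):
    return sum(int(s[i] * s[j]) for i, j in edges)

def greedy_descent(s):
    # flip any single spin that lowers the energy, until none does
    improved = True
    while improved:
        improved = False
        for i in range(n):
            s2 = s.copy()
            s2[i] = -s2[i]
            if energy(s2) < energy(s):
                s, improved = s2, True
    return s

s = greedy_descent(np.ones(n, dtype=int))
```

Plain single-flip descent like this can get trapped in local minima on harder instances, which is precisely why randomized or Monte Carlo variants (and stochastic models such as Gelenbe's) are of interest.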

7.
The fuzzy min–max neural network constitutes a neural architecture that is based on hyperbox fuzzy sets and can be incrementally trained by appropriately adjusting the number of hyperboxes and their corresponding volumes. An extension to this network has been proposed recently that is based on the notion of random hyperboxes and is suitable for reinforcement learning problems with a discrete action space. In this work, we elaborate further on the random hyperbox idea and propose the stochastic fuzzy min–max neural network, where each hyperbox is associated with a stochastic learning automaton. Experimental results on the pole balancing problem indicate that employing this model as an action selection network in reinforcement learning schemes leads to superior learning performance compared with the traditional approach, where a multilayer perceptron is employed.
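The two ingredients can each be sketched in a few lines: a hyperbox fuzzy membership (here a simple linear-decay form, one common min–max-network choice, with a hypothetical sensitivity parameter gamma) and a linear reward-inaction (L_RI) automaton update of the kind a per-hyperbox stochastic automaton would use:

```python
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Fuzzy membership of point x in the hyperbox [v, w]: full
    membership inside the box, decaying linearly with the distance by
    which x falls outside it; gamma sets the decay slope."""
    below = np.maximum(v - x, 0.0)       # shortfall under the min point v
    above = np.maximum(x - w, 0.0)       # excess over the max point w
    per_dim = 1.0 - gamma * (below + above)
    return float(np.clip(per_dim, 0.0, 1.0).mean())

def lri_update(p, a, lam=0.2):
    """Linear reward-inaction update: after action a is rewarded,
    shift the automaton's action probabilities toward a."""
    p = p * (1.0 - lam)
    p[a] += lam                          # p_a <- p_a + lam * (1 - p_a)
    return p

v, w = np.array([0.2, 0.2]), np.array([0.6, 0.5])
inside = hyperbox_membership(np.array([0.4, 0.3]), v, w)
nearby = hyperbox_membership(np.array([0.7, 0.3]), v, w)
far    = hyperbox_membership(np.array([0.9, 0.9]), v, w)

p = lri_update(np.array([0.5, 0.5]), 0)  # reward action 0 once
```

In the stochastic network, the hyperbox membership gates which automaton is consulted, and the automaton's probabilities drive stochastic action selection.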

8.
In this paper, we propose a neural network modeling scheme for nonlinear systems. The proposed architecture is a new combination of a neural network and a bilinear system model, in which the cross-product terms of the input and output signals within the bilinear model are taken as inputs to the neural network. Compared with the original bilinear system, this kind of network model possesses many more adjustable parameters with which to carry out the system identification. Moreover, instead of the usual back-propagation method, an evolutionary computation technique, the differential evolution algorithm, is used to update the network parameters. This algorithm searches in multiple directions toward the globally optimal solution of the given optimization problem. To show the feasibility of the proposed scheme, a nonlinear chemical process, a continuously stirred tank reactor, is used as an illustration. Numerous simulations are carried out to verify the robustness of the proposed neural network structure's modeling performance, including different sets of initial conditions for the algorithm and different model orders.
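Differential evolution itself is compact enough to sketch in full. A minimal DE/rand/1/bin variant, here minimising a toy quadratic in place of the network's identification error (population size, F, and CR are common default choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(f, dim, bounds=(-5.0, 5.0), pop_size=20,
                           F=0.7, CR=0.9, generations=120):
    """Minimal DE/rand/1/bin: the population-based update scheme used
    in place of back-propagation to fit the network parameters."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, size=3, replace=False)]
            mutant = a + F * (b - c)              # differential mutation
            cross = rng.random(dim) < CR          # binomial crossover mask
            cross[rng.integers(dim)] = True       # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft < fit[i]:                       # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], float(fit.min())

best, val = differential_evolution(lambda x: float(np.sum((x - 1.0) ** 2)), dim=3)
```

Because each trial vector is built from the difference of other population members, the search proceeds along multiple directions at once, which is the "multiple direction search" property the abstract refers to.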

9.
Mixtures of local principal component analysis (PCA) have attracted attention due to a number of benefits over global PCA. The performance of a mixture model usually depends on the data partition and the local linear fitting. In this paper, we propose a mixture model that has the properties of optimal data partition and robust local fitting. Data partition is realized by a soft competition algorithm called neural 'gas', and robust local linear fitting is approached by a nonlinear extension of the PCA learning algorithm. Based on this mixture model, we describe a modular classification scheme for handwritten digit recognition, in which each module or network models the manifold of one of the ten digit classes. Experiments demonstrate a very high recognition rate.
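The core benefit of local over global PCA is easy to demonstrate: partition the data, then fit an independent low-rank PCA per region. A minimal sketch with a hard nearest-centre partition standing in for the soft neural-'gas' competition (the two-cluster data and centres are our illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# two 1-D manifolds with different orientations: a horizontal and a
# vertical segment; no single global PCA direction can fit both
horiz = np.c_[rng.uniform(-1, 1, 100), 0.01 * rng.normal(size=100)]
vert  = np.c_[0.01 * rng.normal(size=100), rng.uniform(-1, 1, 100)] + [4.0, 0.0]
X = np.vstack([horiz, vert])

def pca_recon_error(Y, k=1):
    """Total squared reconstruction error of Y under its own
    top-k principal subspace (via SVD of the centred data)."""
    Z = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:k].T @ Vt[:k]                 # projector onto the top-k subspace
    return float(((Z - Z @ P) ** 2).sum())

global_err = pca_recon_error(X)

# hard partition (stand-in for the soft neural-'gas' competition),
# then an independent local PCA in each region
centers = np.array([[0.0, 0.0], [4.0, 0.0]])
labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
local_err = sum(pca_recon_error(X[labels == c]) for c in (0, 1))
```

Each local model captures its own manifold's orientation, so the summed local reconstruction error is far below the single global model's; in the digit-recognition scheme, one such mixture models the manifold of each digit class.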

10.
Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called the graph neural network (GNN) model, that extends existing neural network methods for processing data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function τ(G, n) ∈ ℝ^m that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm and to demonstrate its generalization capabilities.
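The central mechanism of this GNN model is iterating a contractive neighbour-aggregation map until the node states reach a fixed point, from which the output function is read off. A minimal linear sketch of that fixed-point computation (the mean aggregation and the damping factor alpha are our simplifications of the model's general transition function):

```python
import numpy as np

def gnn_fixed_point(adj, features, alpha=0.4, iters=200, tol=1e-10):
    """Sketch of the GNN state computation: iterate
        x_v <- alpha * (mean of neighbour states) + features_v
    until convergence.  With alpha < 1 and a mean aggregation the map
    is a contraction, so Banach's fixed-point theorem guarantees a
    unique fixed point regardless of the starting states."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    A = adj / deg                         # row-normalised adjacency
    x = np.zeros_like(features)
    for _ in range(iters):
        x_new = alpha * A @ x + features
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # path graph 0-1-2
feats = np.array([[1.0], [0.0], [0.0]])                   # only node 0 is labelled
x = gnn_fixed_point(adj, feats)
```

The converged states blend each node's feature with information diffused from its neighbours; in the full model, τ(G, n) is a learned output function applied to the fixed-point state of node n, and learning backpropagates through the fixed point.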


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号