20 similar documents found; search time: 0 ms
1.
The authors are concerned with how one can design, realize, and analyze networks that embody the specific computational structures needed to solve hard problems. They focus on the design and use of massively parallel connectionist computational models, particularly in artificial intelligence. They describe a computing environment for working with structured networks and present some sample applications. Throughout, they treat adaptation and learning as ways to improve structured networks, not as replacements for analysis and design.
2.
This paper proposes a new method to model partially connected feedforward neural networks (PCFNNs) from the identified input type (IT), which refers to whether each input is coupled with or uncoupled from the other inputs in generating the output. The identification is done by analyzing how input sensitivities change as the magnitude of the inputs is amplified. The sensitivity changes of an uncoupled input are not correlated with variation in any other input, while those of the coupled inputs are correlated with variation in any one of the coupled inputs. According to the identified ITs, a PCFNN can be structured: each uncoupled input does not share hidden-layer neurons with the other inputs, so that it contributes to the output independently, while the coupled inputs share neurons with one another. After deriving the mathematical input sensitivity analysis for each IT, several experiments, as well as a real example (blood pressure (BP) estimation), are described to demonstrate how well the method works.
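The coupling test described above can be illustrated numerically. The sketch below is not the authors' implementation: the toy function `f`, the finite-difference estimator, and all constants are illustrative assumptions, with `f` standing in for a trained network output.

```python
import numpy as np

def sensitivity(f, x, i, eps=1e-5):
    """Finite-difference sensitivity of f to input i at point x."""
    xp = x.copy()
    xp[i] += eps
    return (f(xp) - f(x)) / eps

# Toy stand-in for a trained network: x0 is uncoupled,
# while x1 and x2 are coupled through a product term.
f = lambda x: x[0] ** 2 + x[1] * x[2]

x = np.array([1.0, 1.0, 1.0])
# Amplify x2 and watch how each input's sensitivity responds.
for scale in (1.0, 2.0, 4.0):
    xs = x.copy()
    xs[2] *= scale
    s0 = sensitivity(f, xs, 0)  # stays ~2.0: x0 is uncoupled from x2
    s1 = sensitivity(f, xs, 1)  # tracks x2: x1 is coupled with x2
    print(f"scale={scale}: s0={s0:.3f}, s1={s1:.3f}")
```

The sensitivity of the uncoupled input is unaffected by the amplification, while that of the coupled input varies with it, which is the signature the identification method looks for.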
3.
A model of an attractor neural network on a small-world topology (local and random connectivity) is investigated. The synaptic weights are random, driving the network towards a disordered state of neural activity. An ordered macroscopic neural state is induced by a bias in the network weight connections, and the network evolution when initialized in blocks of positive/negative activity is studied. The retrieval of the block-like structure is investigated. An application to the Hebbian learning of a pattern carrying local information is presented. The block and the global attractor compete according to the initial conditions, and the change of stability from one to the other depends on the long-range character of the network connectivity, as shown with a flow-diagram analysis. Moreover, a larger number of blocks emerges with network dilution.
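For readers who want to experiment, one standard way to build this kind of local-plus-random connectivity is Watts-Strogatz-style rewiring. The sketch below is an illustrative assumption, not necessarily the construction used in the paper; all parameters are arbitrary.

```python
import numpy as np

def small_world(n, k, p, rng):
    """Ring lattice (k neighbours on each side) in which each local edge
    is rewired to a random non-neighbour with probability p."""
    A = np.zeros((n, n), dtype=int)
    # Local connectivity: regular ring lattice.
    for i in range(n):
        for j in range(1, k + 1):
            A[i, (i + j) % n] = A[(i + j) % n, i] = 1
    # Random connectivity: rewire each local edge with probability p.
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if A[i, t] == 1 and rng.random() < p:
                candidates = [m for m in range(n) if m != i and A[i, m] == 0]
                if candidates:
                    new = rng.choice(candidates)
                    A[i, t] = A[t, i] = 0
                    A[i, new] = A[new, i] = 1
    return A

A = small_world(n=50, k=2, p=0.3, rng=np.random.default_rng(0))
```

Since each rewiring removes exactly one existing edge and adds one new edge, the total number of connections is preserved while long-range shortcuts appear.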
4.
In this paper, we elaborate upon the claim that clustering in the recurrent layer of recurrent neural networks (RNNs) reflects meaningful information-processing states even prior to training. By concentrating on activation clusters in RNNs, while not throwing away the continuous state-space network dynamics, we extract predictive models that we call neural prediction machines (NPMs). When RNNs with sigmoid activation functions are initialized with small weights (a common technique in the RNN community), the clusters of recurrent activations emerging prior to training are indeed meaningful and correspond to Markov prediction contexts. In this case, the extracted NPMs correspond to a class of Markov models, called variable memory length Markov models (VLMMs). In order to appreciate how much information has really been induced during training, the RNN performance should always be compared with that of VLMMs and NPMs extracted before training as base models. Our arguments are supported by experiments on a chaotic symbolic sequence and a context-free language with a deep recursive structure.
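The extraction idea, clustering the recurrent activations of a small-weight untrained sigmoid RNN and attaching next-symbol counts to each cluster, can be sketched as follows. Everything here (the random sequence, the network size, the hand-rolled k-means) is a toy stand-in for the authors' setup, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
seq = rng.integers(0, 2, size=500)  # toy binary symbol sequence

# Untrained RNN: small random weights, sigmoid activations.
H = 4
W_h = 0.1 * rng.normal(size=(H, H))
W_x = 0.1 * rng.normal(size=(H, 2))
h = np.zeros(H)
states, nxt = [], []
for t in range(len(seq) - 1):
    x = np.eye(2)[seq[t]]  # one-hot input symbol
    h = 1.0 / (1.0 + np.exp(-(W_h @ h + W_x @ x)))
    states.append(h.copy())
    nxt.append(seq[t + 1])
states = np.array(states)

def kmeans(X, k, rng, iters=20):
    """Tiny k-means; returns a cluster label per row of X."""
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

# NPM-style predictor: per-cluster next-symbol frequency counts.
labels = kmeans(states, k=4, rng=rng)
counts = np.zeros((4, 2))
for lab, s in zip(labels, nxt):
    counts[lab, s] += 1
pred = counts.argmax(1)  # each cluster predicts its most frequent successor
```

Each activation cluster acts as a Markov prediction context; on a structured (non-random) sequence the per-cluster counts become informative, which is what makes the pre-training NPM a sensible base model.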
5.
High-throughput implementations of neural network models are required to transfer the technology from small prototype research problems to large-scale "real-world" applications. The flexibility of these implementations in accommodating modifications to the neural network computation and structure is of paramount importance. The performance of many implementation methods today depends greatly on the density and the interconnection structure of the neural network model being implemented. A principal contribution of this paper is to demonstrate an implementation method that exploits the maximum amount of parallelism in the neural computation, without enforcing stringent conditions on the neural network interconnection structure, to achieve high implementation efficiency. We propose a new reconfigurable parallel processing architecture, the Dynamically Reconfigurable Extended Array Multiprocessor (DREAM) machine, and an associated mapping method for implementing neural networks with regular interconnection structures. Details of the system execution rate calculation as a function of the neural network structure are presented. Several example neural network structures are used to demonstrate the efficiency of our mapping method and the DREAM machine architecture on diverse interconnection structures. We show that due to the reconfigurable nature of the DREAM machine, most of the available parallelism of neural networks can be efficiently exploited.
6.
Currently, most learning algorithms for neural-network modeling are based on the output-error approach, using a least squares cost function. This method provides good results when the network is trained with noisy output data and known inputs. Special care must be taken, however, when training the network with noisy input data, or when both inputs and outputs contain noise. This paper proposes a novel cost function for training neural networks with noisy inputs, based on the errors-in-variables stochastic framework. A learning scheme is presented, and examples are given demonstrating the improved performance in neural-network curve fitting, at the cost of increased computation time.
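A minimal sketch of an errors-in-variables style cost, using a linear model as a stand-in for the network: both the weights and the estimated "true" inputs are optimized jointly, with the input displacements penalized alongside the output residuals. The noise scales, data, and plain gradient-descent optimizer are illustrative assumptions, not the paper's learning scheme.

```python
import numpy as np

def eiv_loss(w, x_hat, x_obs, y_obs, s_x=1.0, s_y=1.0):
    """Errors-in-variables cost: output residuals plus the displacement
    of the estimated 'true' inputs from the observed (noisy) ones."""
    y_pred = w[0] + w[1] * x_hat
    return (np.sum((y_obs - y_pred) ** 2) / s_y ** 2
            + np.sum((x_obs - x_hat) ** 2) / s_x ** 2)

rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 1.0, 50)
y_true = 1.0 + 2.0 * x_true
x_obs = x_true + 0.05 * rng.normal(size=50)  # noisy inputs
y_obs = y_true + 0.05 * rng.normal(size=50)  # noisy outputs

# Joint gradient descent over the weights AND the estimated true inputs.
w = np.zeros(2)
x_hat = x_obs.copy()
lr = 0.01
for _ in range(5000):
    r = (w[0] + w[1] * x_hat) - y_obs
    w -= lr * np.array([2.0 * r.mean(), 2.0 * (r * x_hat).mean()])
    x_hat -= lr * (2.0 * r * w[1] + 2.0 * (x_hat - x_obs))
```

Treating the true inputs as additional unknowns is what distinguishes this cost from the standard output-error least squares, at the price of the extra per-sample parameters mentioned in the abstract's computation-time caveat.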
7.
We show that an n-neuron cellular neural network with time-varying delay can have 2^n periodic orbits located in saturation regions, and that these periodic orbits are locally exponentially attractive. In addition, we give conditions for ascertaining that periodic orbits are locally or globally exponentially attractive and can be located in any designated region. As a special case of exponential periodicity, exponential stability of delayed cellular neural networks is also characterized. These conditions improve and extend the existing results in the literature. To illustrate and compare the results, simulation results are discussed in three numerical examples.
8.
The diagonal recurrent neural network (DRNN) is a dynamic neural network without full feedback. A DRNN was applied to denoising modeling of a ring laser gyroscope (RLG) in both the static and the moving state, and the Allan variance method was used for comparative analysis of the denoised results. The results show that denoising modeling of an RLG with a DRNN is feasible. The DRNN's denoising results were also compared with those of a backpropagation neural network, leading to the conclusion that the denoising capability of the dynamic network is superior to that of the static network. The method is of practical significance for the study of RLG error compensation and fast startup.
9.
A novel approach to object recognition and scene analysis based on neural network representation of visual schemas is described. Given an input scene, the VISOR system focuses attention successively on each component, and the schema representations cooperate and compete to match the inputs. The schema hierarchy is learned from examples through unsupervised adaptation and reinforcement learning. VISOR learns that some objects are more important than others in identifying the scene, and that the importance of spatial relations varies depending on the scene. Even as the inputs differ increasingly from the schemas, VISOR's recognition process remains remarkably robust, and it automatically generates a measure of confidence in the analysis.
10.
Parallel, self-organizing, hierarchical neural networks (PSHNN's) are multistage networks in which stages operate in parallel rather than in series during testing. Each stage can be any particular type of network. Previous PSHNN's assume quantized, say, binary outputs. A new type of PSHNN is discussed such that the outputs are allowed to be continuous-valued. The performance of the resulting networks is tested in the problem of predicting speech signal samples from past samples. Three types of networks in which the stages are learned by the delta rule, sequential least-squares, and the backpropagation (BP) algorithm, respectively, are described. In all cases studied, the new networks achieve better performance than linear prediction. A revised BP algorithm is discussed for learning input nonlinearities. When the BP algorithm is to be used, better performance is achieved when a single BP network is replaced by a PSHNN of equal complexity in which each stage is a BP network of smaller complexity than the single BP network.
11.
In this study, CPBUM neural networks with an annealing robust learning algorithm (ARLA) are proposed to address the problems conventional neural networks face when modeling data with outliers and noise. In general, training data obtained in real applications may contain outliers and noise. Although CPBUM neural networks have fast convergence speed, they have difficulty dealing with outliers and noise, so their robustness must be enhanced. Additionally, the ARLA overcomes the initialization and cut-off-point problems of traditional robust learning algorithms and can handle models with outliers and noise. In this study, the ARLA is used as the learning algorithm to adjust the weights of the CPBUM neural networks. It turns out that CPBUM neural networks with the ARLA converge faster and are more robust against outliers and noise than conventional neural networks with a robust mechanism. Simulation results are provided to show the validity and applicability of the proposed neural networks.
12.
We study both analytically and numerically the effect of presynaptic noise on the transmission of information in attractor neural networks. The noise occurs on a very short timescale compared to that of the neuron dynamics, and it produces short-time synaptic depression. This is inspired by recent neurobiological findings showing that synaptic strength may either increase or decrease on a short timescale depending on presynaptic activity. We thus describe a mechanism by which fast presynaptic noise enhances the sensitivity of the neural network to an external stimulus. The reason is that, in general, presynaptic noise induces nonequilibrium behavior and, consequently, the space of fixed points is qualitatively modified in such a way that the system can easily escape from the attractor. As a result, the model shows, in addition to pattern recognition, class identification and categorization, which may be relevant to the understanding of some complex brain tasks.
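The basic ingredients, Hebbian retrieval plus a fast multiplicative fluctuation of presynaptic efficacies redrawn at every step, can be sketched as below. This toy single-pattern network only sets up the mechanism; the network size, noise form, and dynamics are assumptions for illustration, and the sketch does not reproduce the escape-from-attractor analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
pattern = rng.choice([-1, 1], size=N)
J = np.outer(pattern, pattern) / N  # Hebbian weights, one stored pattern
np.fill_diagonal(J, 0)

def run(s, steps, depress=0.0):
    """Parallel dynamics; `depress` scales a fast random reduction of
    each presynaptic efficacy, redrawn at every time step."""
    s = s.astype(float)
    for _ in range(steps):
        eff = 1.0 - depress * rng.random(size=N)  # fast presynaptic noise
        s = np.sign(J @ (eff * s))
        s[s == 0] = 1.0
    return s

start = pattern.copy()
start[:10] *= -1  # corrupt 10% of the stored pattern
overlap = (run(start, 20) @ pattern) / N  # retrieval quality in [-1, 1]
```

In the noiseless case the corrupted state relaxes to the stored pattern (overlap 1); the paper's point is that when the fast efficacy fluctuations are strong, the fixed-point structure changes and the network can leave such attractors in response to a stimulus.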
13.
Sperduti and Starita proposed a new type of neural network which consists of generalized recursive neurons for classification of structures. In this paper, we propose an entropy-based approach for constructing such neural networks for classification of acyclic structured patterns. Given a classification problem, the architecture, i.e., the number of hidden layers and the number of neurons in each hidden layer, and all the values of the link weights associated with the corresponding neural network are automatically determined. Experimental results have shown that the networks constructed by our method can have a better performance, with respect to network size, learning speed, or recognition accuracy, than the networks obtained by other methods.
14.
In this paper, we show that noise injection into inputs in unsupervised learning neural networks does not improve their performance as it does in supervised learning neural networks. Specifically, we show that training noise degrades the classification ability of a sparsely connected version of the Hopfield neural network, whereas the performance of a sparsely connected winner-take-all neural network does not depend on the injected training noise.
15.
This paper deals with the pth moment synchronization problem for a class of stochastic neural networks with Markov-switched parameters driven by fractional Brownian noise (FBNSNN). A method called the time segmentation method, quite different from the Lyapunov functional approach, is presented to solve this problem. Meanwhile, based on the trajectory of the error system, together with infinitesimal operator theory, we propose a sufficient condition for consensus of the drive-response system. The criterion of pth moment exponential stability for the FBNSNN guarantees synchronization under the designed controller. Finally, two numerical examples and illustrative figures are provided to show the feasibility and effectiveness of our theoretical results.
16.
In this paper, a novel adaptive noise cancellation algorithm using enhanced dynamic fuzzy neural networks (EDFNNs) is described. In the proposed algorithm, termed the EDFNN learning algorithm, the number of radial basis function (RBF) neurons (fuzzy rules) and the input-output space clustering are adaptively determined. Furthermore, the structure of the system and the parameters of the corresponding RBF units are trained online automatically, and relatively rapid adaptation is attained. By virtue of self-organizing mapping (SOM) and recursive least square error (RLSE) estimator techniques, the proposed algorithm is suitable for real-time applications. Results of simulation studies using different noise sources and noise passage dynamics show that superior performance can be achieved.
17.
Social networking platforms have witnessed tremendous growth in textual, visual, audio, and mixed-mode content for expressing views and opinions. Hence, Sentiment Analysis (SA) and Emotion Detection (ED) of social networking posts, blogs, and conversations are very useful and informative for mining opinions on different issues, entities, or aspects. Various statistical and probabilistic models based on lexical and machine learning approaches have been employed for these tasks, and an emphasis on improving contemporary tools, techniques, models, and approaches is reflected in the majority of the literature. With recent developments in deep neural networks, various deep learning models are being heavily experimented with for accuracy enhancement in the aforementioned tasks. The Recurrent Neural Network (RNN) and its architectural variants, such as Long Short Term Memory (LSTM) and the Gated Recurrent Unit (GRU), comprise an important category of deep neural networks, basically adapted for feature extraction from temporal and sequential inputs. Since input to SA and related tasks may be visual, textual, audio, or any combination of these, and carries an inherent sequentiality, we critically investigate the role of sequential deep neural networks in sentiment analysis of multimodal data. Specifically, we present an extensive review of the applicability, challenges, issues, and approaches for textual, visual, and multimodal SA using the RNN and its architectural variants.
18.
This paper presents a novel learning algorithm of fuzzy perceptron neural networks (FPNNs) for classifiers that utilize expert knowledge represented by fuzzy IF-THEN rules as well as numerical data as inputs. The conventional linear perceptron network is extended to a second-order one, which is much more flexible for defining a discriminant function. In order to handle fuzzy numbers in neural networks, level sets of fuzzy input vectors are incorporated into perceptron neural learning. At different levels of the input fuzzy numbers, updating the weight vector depends on the minimum of the output of the fuzzy perceptron neural network and the corresponding nonfuzzy target output that indicates the correct class of the fuzzy input vector. This minimum is computed efficiently by employing the modified vertex method. Moreover, the fuzzy pocket algorithm is introduced into our fuzzy perceptron learning scheme to solve the nonseparable problems. Simulation results demonstrate the effectiveness of the proposed FPNN model.
19.
A global optimization strategy for training adaptive systems such as neural networks and adaptive filters (finite or infinite impulse response) is proposed. Instead of adding random noise to the weights as proposed in the past, additive random noise is injected directly into the desired signal. Experimental results show that this procedure also greatly speeds up the backpropagation algorithm. The method is very easy to implement in practice, preserving the backpropagation algorithm and requiring a single random generator with a monotonically decreasing step size per output channel. Hence, this is an ideal strategy to speed up supervised learning and avoid local minima entrapment when the noise variance is appropriately scheduled.
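A minimal sketch of the strategy, assuming a toy linear model in place of a full network and backpropagation: Gaussian noise with a monotonically decreasing standard deviation is added to the desired signal at each epoch. The data, learning rate, and annealing factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * X + 0.5  # clean desired signal

w, b = 0.0, 0.0
lr, sigma = 0.1, 0.5  # sigma: noise std on the desired signal
for epoch in range(100):
    y_noisy = y + sigma * rng.normal(size=len(y))  # perturb the target
    err = (w * X + b) - y_noisy
    w -= lr * 2.0 * np.mean(err * X)  # gradient step on the noisy target
    b -= lr * 2.0 * np.mean(err)
    sigma *= 0.95  # monotonically decreasing noise schedule
```

Early in training the noisy targets jitter the error surface, helping the weights escape poor regions; as sigma is annealed toward zero the procedure reduces to ordinary gradient descent on the clean desired signal.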
20.
Neural Computing and Applications - When confronting a spatio-temporal regression, it is sensible to feed the model with any available prior information about the spatial dimension. For example, it...