Similar Documents
A total of 20 similar documents were found.
1.
In this paper, we present feed-forward neural network (FFNN) and recurrent neural network (RNN) models for predicting Boolean function complexity (BFC). To acquire training data for the neural networks (NNs), we conducted experiments on a large number of randomly generated single-output Boolean functions (BFs) and plotted the number of min-terms against the BFC for different numbers of variables. For NN model (NNM) development, we examined three data transformation techniques for pre-processing the NN training and validation data. The trained NNMs are used to estimate the complexity of Boolean logic expressions with a given number of variables and sum-of-products (SOP) terms. Both the FFNNs and RNNs were evaluated against the ISCAS benchmark results, with which they achieved correlations of 0.811 and 0.629, respectively.
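As an illustration of the modelling step only, the hedged sketch below fits a small feed-forward regressor that maps (number of variables, number of SOP min-terms) to a complexity estimate. The feature choice, the synthetic target and the scikit-learn topology are assumptions for illustration; the paper's actual training data come from simulating random Boolean functions.

```python
# Hypothetical sketch of an FFNN-based complexity estimator; the synthetic
# target below merely stands in for the simulated BFC values used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_vars = rng.integers(4, 17, size=2000)          # number of input variables
n_terms = rng.integers(1, 200, size=2000)        # number of SOP min-terms
# Placeholder complexity target; in the paper this comes from experiments
# on randomly generated single-output Boolean functions.
bfc = n_terms * np.log2(n_vars) + rng.normal(0, 1.0, size=2000)

X = StandardScaler().fit_transform(np.column_stack([n_vars, n_terms]))
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
model.fit(X[:1500], bfc[:1500])
print("held-out R^2:", model.score(X[1500:], bfc[1500:]))
```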

2.
Biometrics has become one of the most important techniques for recognizing a person's identity. A person's face, iris and fingerprint are the traits most commonly used in biometrics today. It has been established that no two ears are exactly alike, even in identical twins. In this paper, we define a 7-element ear feature set and design and train a feed-forward artificial neural network to recognize a human ear. We train and test the network with 51 ear pictures from 51 different persons. Simulation experiments are conducted with networks of various numbers of layers and neurons per layer, both with and without noise. Results indicate that 95% ear recognition accuracy is achieved with a simple 3-layer feed-forward neural network containing a total of only 18 neurons, even in the presence of some noise. This design surpasses previous work in simplicity and implementation cost.
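A minimal sketch of the classification stage, assuming a 7-element feature vector is already available for each ear image; the random features, noise level and hidden-layer size below are illustrative assumptions, not the paper's data or exact topology.

```python
# Hedged sketch: train a small feed-forward classifier on 7-element ear
# feature vectors for 51 persons. The features are random placeholders --
# extracting them from ear images is outside this sketch.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_persons = 51
features = rng.normal(size=(n_persons, 7))                  # 7-element feature set
noisy_copies = features + rng.normal(0, 0.05, size=(5, n_persons, 7))
X = np.vstack([features] + list(noisy_copies))              # clean + noisy variants
y = np.tile(np.arange(n_persons), 6)

clf = MLPClassifier(hidden_layer_sizes=(18,), max_iter=3000, random_state=1)
clf.fit(X, y)
print("accuracy on clean features:", clf.score(features, np.arange(n_persons)))
```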

3.
Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
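The sketch below illustrates one way such a gain penalty can be implemented (not necessarily the authors' exact formulation): the input-output gain dy/dx is obtained by automatic differentiation and violations of assumed lower/upper bounds are added to the data-fit loss. The bounds, penalty weight and toy data are assumptions.

```python
# Gain-constrained training sketch: penalise dy/dx outside [gain_lo, gain_hi].
import torch

torch.manual_seed(0)
x = torch.rand(256, 1) * 4.0
y = 2.0 * x + 0.1 * torch.randn_like(x)          # toy data; true gain is 2

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
gain_lo, gain_hi, penalty_w = 1.5, 2.5, 10.0     # assumed inequality bounds on dy/dx

for step in range(2000):
    xb = x.clone().requires_grad_(True)
    pred = net(xb)
    fit_loss = torch.mean((pred - y) ** 2)
    # per-sample gain dy/dx via automatic differentiation
    gains, = torch.autograd.grad(pred.sum(), xb, create_graph=True)
    violation = torch.relu(gain_lo - gains) + torch.relu(gains - gain_hi)
    loss = fit_loss + penalty_w * torch.mean(violation ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```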

4.
Feed-forward neural networks (FFNNs) are among the most important neural networks and can be applied to a wide range of forecasting problems with a high degree of accuracy. Several large-scale forecasting competitions involving a large number of commonly used time series forecasting models conclude that combining forecasts from more than one model often improves performance, especially when the models in the ensemble are quite different. Several hybrid models have been proposed in the literature that combine different time series models. In this paper, in contrast to traditional hybrid models, a novel hybridization of feed-forward neural networks (FFNNs) with probabilistic neural networks (PNNs) is proposed in order to yield more accurate results than traditional feed-forward neural networks. In the proposed model, the estimated values of the FFNN models are modified based on the distinguished trend of their residuals and an optimum step length, which are obtained, respectively, from a probabilistic neural network and a mathematical programming model. Empirical results on three well-known real data sets indicate that the proposed model is an effective way to construct a more accurate hybrid model than FFNN models. Therefore, it can be applied as an appropriate alternative for forecasting tasks, especially when higher forecasting accuracy is needed.
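A hedged sketch of the hybrid idea on a toy series: an FFNN produces point forecasts, a stand-in classifier (playing the role of the probabilistic neural network) predicts the sign of the residual, and the forecast is shifted by a step length in that direction. The simple grid search over step lengths replaces the paper's mathematical programming model, and all data and sizes are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

rng = np.random.default_rng(2)
series = np.sin(np.arange(300) / 10.0) + 0.1 * rng.normal(size=300)   # toy series

lags = 4
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
split = 250

ffnn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=2).fit(X[:split], y[:split])
pred = ffnn.predict(X)
resid = y[:split] - pred[:split]

# Stand-in for the PNN: classify whether the residual is positive or negative.
trend_clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=5000, random_state=2)
trend_clf.fit(X[:split], (resid > 0).astype(int))

# Step length chosen by a simple search on the training residuals
# (the paper derives it from a mathematical programming model instead).
steps = np.linspace(0.0, 0.2, 21)
best = min(steps, key=lambda s: np.mean((resid - s * np.sign(resid)) ** 2))

direction = 2 * trend_clf.predict(X[split:]) - 1                       # +1 or -1
adjusted = pred[split:] + best * direction
print("FFNN MSE:", np.mean((y[split:] - pred[split:]) ** 2))
print("hybrid MSE:", np.mean((y[split:] - adjusted) ** 2))
```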

5.
Traditional activation functions such as the hyperbolic tangent and logistic sigmoid have historically seen frequent use in artificial neural networks. Nowadays, however, they have fallen out of favor in practice, largely due to the gap in performance observed in recognition and classification tasks when compared to well-known counterparts such as the rectified linear unit or maxout. In this paper, we introduce a simple, new type of activation function for multilayer feed-forward architectures. Unlike other approaches, where new activation functions have been designed by discarding many of the mainstays of traditional activation function design, our proposed function relies on them and therefore shares most of the properties found in traditional activation functions. Nevertheless, our activation function differs from traditional activation functions on two major points: its asymptote and its global extremum. Defining a function that has both a global maximum and a global minimum turned out to be critical during our design process, since we believe this is one of the main reasons behind the performance gap observed between traditional activation functions and their recently introduced counterparts. We evaluate the effectiveness of the proposed activation function on four commonly used datasets, namely MNIST, CIFAR-10, CIFAR-100, and Pang and Lee's movie review dataset. Experimental results demonstrate that the proposed function can be applied effectively across various datasets, where our accuracy, given the same network topology, is competitive with the state of the art. In particular, the proposed activation function outperforms the state-of-the-art methods on the MNIST dataset.

6.
This paper presents a general framework for robust adaptive neural network (NN)-based feedback linearization controller design for a greenhouse climate system. The controller is based on well-known feedback linearization, combined with radial basis function NNs, which allows the feedback linearization technique to be used in an adaptive way. In addition, robust sliding mode control is incorporated to deal with bounded disturbances and the approximation errors of the NNs. As a result, an inherently nonlinear robust adaptive control law is obtained, which not only provides fast and accurate tracking of varying set-points but also guarantees asymptotic tracking even in the presence of inherent approximation errors.

7.
A neural network (NN) approach to the problem of stereopsis is presented. The correspondence problem (finding the correct matches between pixels of the epipolar lines of the stereo pair from among all possible matches) is posed as a noniterative many-to-one mapping. Two multilayer feedforward NNs are used to learn and encode this nonlinear and complex mapping using the backpropagation learning rule and a training set. The first NN is a conventional fully connected net, while the second is a sparsely connected NN with a fixed number of hidden-layer nodes. All the applicable constraints are learned and internally coded by the NNs, making them more flexible and more accurate than previous methods. The approach is successfully tested on several random-dot stereograms. It is shown that the nets can generalize their learned mappings to cases outside their training sets and to noisy images. Advantages over the Marr-Poggio algorithm are discussed, and it is shown that the NNs' performance is superior.

8.
Node splitting: A constructive algorithm for feed-forward neural networks
A constructive algorithm is proposed for feed-forward neural networks that uses node splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network that is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system that generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions of the input space where more detail is required in the model. These nodes are identified and split in two using principal component analysis, allowing the new nodes to cover the two main modes of the oscillating vector. Nodes are selected for splitting by applying principal component analysis to the oscillating weight vectors, or by examining the Hessian matrix of second derivatives of the network error with respect to the weights.
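The fragment below illustrates only the splitting step, under stated assumptions: given the recent history of one hidden node's weight vector (oscillating because the node straddles two modes), the node is replaced by two children offset in opposite directions along the first principal component of that history. Detecting oscillating nodes and re-wiring the network are omitted, and the synthetic weight history is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical weight history: the node alternates between two regions.
centre = np.array([0.5, -1.0, 0.2])
history = np.vstack([centre + (-1) ** k * np.array([0.3, 0.1, -0.2])
                     + 0.02 * rng.normal(size=3) for k in range(40)])

mean_w = history.mean(axis=0)
# First principal component of the oscillating weight vectors.
_, _, vt = np.linalg.svd(history - mean_w, full_matrices=False)
pc1 = vt[0]
offset = ((history - mean_w) @ pc1).std() * pc1

child_a = mean_w + offset      # two new hidden nodes replace the parent
child_b = mean_w - offset
print(child_a, child_b)
```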

9.
Diagonal recurrent neural networks for dynamic systems control
A new neural paradigm called the diagonal recurrent neural network (DRNN) is presented. The DRNN architecture is a modified model of the fully connected recurrent neural network with one hidden layer, in which the hidden layer comprises self-recurrent neurons. Two DRNNs are utilized in a control system, one as an identifier called the diagonal recurrent neuroidentifier (DRNI) and the other as a controller called the diagonal recurrent neurocontroller (DRNC). A controlled plant is identified by the DRNI, which then provides sensitivity information about the plant to the DRNC. A generalized dynamic backpropagation (DBP) algorithm is developed and used to train both the DRNC and the DRNI. Due to the recurrence, the DRNN can capture the dynamic behavior of a system. To guarantee convergence and speed up learning, an approach that uses adaptive learning rates is developed by introducing a Lyapunov function. Convergence theorems for the adaptive backpropagation algorithms are developed for both the DRNI and the DRNC. The proposed DRNN paradigm is applied to numerical problems, and the simulation results are included.
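A minimal sketch of the diagonal recurrent structure: each hidden neuron feeds back only to itself through a scalar weight, so the recurrent connections form a diagonal matrix. The tanh activation and the shapes are assumptions for illustration; the DBP training and adaptive learning rates are not shown.

```python
import numpy as np

def drnn_forward(x_seq, W_in, w_diag, W_out):
    """x_seq: (T, n_in); W_in: (n_hid, n_in); w_diag: (n_hid,); W_out: (n_out, n_hid)."""
    h = np.zeros(w_diag.shape[0])
    outputs = []
    for x in x_seq:
        # Self-recurrence only: elementwise w_diag * h, not a full matrix product.
        h = np.tanh(W_in @ x + w_diag * h)
        outputs.append(W_out @ h)
    return np.array(outputs)

rng = np.random.default_rng(4)
y = drnn_forward(rng.normal(size=(10, 2)),
                 rng.normal(size=(5, 2)), rng.normal(size=5), rng.normal(size=(1, 5)))
print(y.shape)   # (10, 1)
```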

10.
People have unique ways of learning, which may greatly affect the learning process and, therefore, its outcome. In order to be effective, e-learning systems should be capable of adapting the content of courses to the individual characteristics of students. In this regard, some educational systems have proposed using questionnaires to determine a student's learning style and then adapting their behaviour according to that style. However, the use of questionnaires has been shown to be not only a time-consuming investment but also an unreliable method for acquiring learning style characterisations. In this paper, we present an approach to automatically recognizing the learning style of an individual student according to the actions that he or she has performed in an e-learning environment. This recognition technique is based upon feed-forward neural networks.

11.
A new approach to constructing and training neural networks for pattern classification is proposed. Data clusters are generated and trained sequentially based on distinct local subsets of the training data. The obtained clusters are then used to construct a feed-forward network, which is further trained using standard algorithms operating on the global training set. The network obtained in this way effectively inherits the knowledge from the local training procedure before improving its generalization ability through the subsequent global training. Various experiments demonstrate the superiority of this approach over competing methods.

12.
Rule revision with recurrent neural networks
Recurrent neural networks readily process, recognize and generate temporal sequences. By encoding grammatical strings as temporal sequences, recurrent neural networks can be trained to behave like deterministic sequential finite-state automata. Algorithms have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge (or rules) into recurrent neural networks, we show that recurrent neural networks are able to perform rule revision. Rule revision is performed by comparing the inserted rules with the rules in the finite-state automata extracted from trained networks. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show not only that the networks preserve correct rules but also that they are able, through training, to correct inserted rules that were initially incorrect (i.e. rules that were not in the randomly generated grammar).

13.
For optimum statistical classification and generalization with single-hidden-layer neural network models, two tasks must be performed: (a) learning the best set of weights for a network of k hidden units and (b) determining k, the best complexity fit. We contrast two approaches to constructing neural network classifiers: (a) standard back-propagation applied to a series of single-hidden-layer feed-forward neural networks with differing numbers of hidden units and (b) a heuristic cascade-correlation approach that quickly and dynamically configures the hidden units in a network and learns the best set of weights for it. Four real-world applications are considered. On these examples, the back-propagation approach yielded somewhat better results, but with far greater computation times. The best complexity fits, k, for the two approaches were quite similar. This suggests a hybrid approach to constructing single-hidden-layer feed-forward neural network classifiers in which the number of hidden units is determined by cascade-correlation and the weights are learned by back-propagation.

14.
This paper presents a discrete-time direct current (DC) motor torque tracking controller based on a recurrent high-order neural network used to identify the plant model. To train the neural identifier, an extended Kalman filter (EKF)-based training algorithm is used. The neural identifier is in a series-parallel configuration, which provides a good approximation of the real plant. Using the neural identifier structure, which is in nonlinear controllable form, the block control (BC) and sliding mode (SM) control techniques are applied in discrete time. The BC technique is used to design a nonlinear sliding manifold such that the resulting sliding mode dynamics are described by a desired linear system. For the SM control technique, the equivalent control law is used so that the plant output tracks a reference signal. To reduce the effect of unknown terms, a specific desired dynamics for the sliding variables is proposed. The control problem is solved by the indirect approach, where an appropriate neural network (NN) identification model is selected; the NN parameters (synaptic weights) are adjusted according to a specific adaptive law (EKF), such that the response of the NN identifier approximates the response of the real plant for the same input. Then, based on the designed NN identifier, a stabilizing or reference-tracking controller is proposed (BC combined with SM). The applicability of the proposed neural identifier and controller is illustrated by torque trajectory tracking for a DC motor with separate winding excitation via real-time implementation.
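To make the EKF-training idea concrete, the sketch below runs an EKF-style weight update for a tiny neural model on a toy one-step identification task. The Jacobian of the network output with respect to the weights is taken by finite differences for brevity, and R, Q and the initial covariance are assumed tuning values; this is not the paper's recurrent high-order network or its control scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def net(w, x):
    W1 = w[:6].reshape(3, 2)      # 3 hidden units, 2 inputs
    W2 = w[6:9]
    return W2 @ np.tanh(W1 @ x)

def jacobian(w, x, eps=1e-6):
    # Finite-difference Jacobian of the scalar output w.r.t. the weights.
    base = net(w, x)
    J = np.zeros_like(w)
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        J[i] = (net(wp, x) - base) / eps
    return J

w = 0.1 * rng.normal(size=9)
P = np.eye(9) * 10.0              # weight covariance
R, Q = 0.1, 1e-4 * np.eye(9)      # assumed measurement/process noise

for _ in range(500):
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(x[0]) + 0.5 * x[1]             # plant output to identify
    H = jacobian(w, x)
    e = y - net(w, x)
    S = H @ P @ H + R
    K = (P @ H) / S                           # Kalman gain
    w = w + K * e                             # EKF weight update
    P = P - np.outer(K, H @ P) + Q
print("last identification error:", abs(e))
```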

15.
This paper presents a novel approach to designing an adaptive controller that improves the transient performance of a class of nonlinear discrete-time systems under different operating modes. The proposed scheme consists of generalized minimum variance (GMV) controllers and a compensating controller. The GMV controllers are based on known nominal linear multiple models, while the compensating controller is based upon a recurrent neural network. The adaptation law for the network weights is derived from Lyapunov stability theory. A suitable switching control strategy is applied to choose the best controller, based on performance indices, at every sampling instant. Simulations are discussed in order to illustrate the merits of the proposed method.

16.
A fast control algorithm for diagonal recurrent neural networks
Theorem 1 of Ref. [1] gives a Lyapunov-function-based algorithm for adaptively adjusting the learning rates of arbitrary weight parameters in a three-layer diagonal recurrent neural network (DRNN). However, when the adaptive learning rates for each layer's weights were derived, the necessary conditions for Theorem 1 to hold were not strictly satisfied, so the exact ranges of the learning rates were not found. Based on Theorem 1 of Ref. [1], this paper gives exact adjustment algorithms for the learning rates of each weight vector and weight matrix. The results show that the DRNN admits larger learning rates and hence a faster-converging algorithm. Corresponding simulation results are given.

17.
The potential of using artificially simulated neural networks as intelligent, adaptive process-monitoring devices is discussed. The investigation is considered as a method for automatic, intelligent exception reporting for quality control applications. The technique is also compared with the conventional statistical approaches of principal component analysis and Kohonen's feature map. The applications of the technique in aerospace and manufacturing environments are presented and a possible extension of the method to incorporate a diagnostic function is discussed.

18.
In this paper, stable indirect adaptive control with recurrent neural networks is presented for square multivariable non-linear plants with unknown dynamics. The control scheme is made up of an adaptive instantaneous neural model, a neural controller based on fully connected “Real-Time Recurrent Learning” (RTRL) networks, and an online parameter-updating law. Closed-loop performance as well as sufficient conditions for asymptotic stability are derived from the Lyapunov approach according to the adaptive updating rate parameter. Robustness is also considered in terms of sensor noise and model uncertainties. The control scheme is then applied to the Tennessee Eastman Challenge Process in order to illustrate the efficiency of the proposed method for real-world control problems.

19.
Bee colony optimization algorithm for training feed-forward neural networks
The goal of training an artificial neural network is to adjust the weight coefficients of each layer until they are optimal, so the training process is in essence an optimization task. Traditional training algorithms suffer from drawbacks such as easily becoming trapped in local optima and high computational complexity. This paper introduces a bee colony optimization algorithm for training feed-forward neural networks; it is a simple, highly robust swarm-intelligence stochastic optimization algorithm. The algorithm effectively combines exploration and exploitation, and adopts a search strategy for escaping local optima. It is successfully applied to basic neural network training problems (the XOR problem, N-bit parity, and encoder-decoder problems) and compared with the traditional BP algorithm. Simulation experiments show that its performance is superior to the traditional GD and LM algorithms.
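A simplified, illustrative artificial-bee-colony-style search (not necessarily the paper's exact variant) for the weights of a 2-2-1 network solving XOR is sketched below; the colony size, perturbation rule and abandonment limit are assumed parameters.

```python
import numpy as np

rng = np.random.default_rng(6)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def mse(w):
    # 2-2-1 feed-forward network: tanh hidden layer, sigmoid output.
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    out = 1.0 / (1.0 + np.exp(-(np.tanh(X @ W1.T + b1) @ W2 + b2)))
    return np.mean((out - y) ** 2)

n_bees, dim, limit = 20, 9, 30
food = rng.uniform(-2, 2, size=(n_bees, dim))     # candidate weight vectors
cost = np.array([mse(w) for w in food])
trials = np.zeros(n_bees)

for it in range(2000):
    for i in range(n_bees):                       # employed/onlooker phase (merged)
        j, k = rng.integers(n_bees), rng.integers(dim)
        cand = food[i].copy()
        cand[k] += rng.uniform(-1, 1) * (food[i][k] - food[j][k])
        c = mse(cand)
        if c < cost[i]:                           # greedy selection (exploitation)
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1
    for i in np.where(trials > limit)[0]:         # scout phase: escape local optima
        food[i] = rng.uniform(-2, 2, size=dim)
        cost[i], trials[i] = mse(food[i]), 0

print("best XOR training error:", cost.min())
```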

20.
Feed-forward neural networks (FFNs) have gained a lot of interest over the last decade as empirical models because of their powerful representational capacity, non-parametric nature and multivariate characteristics. While these neural network models focus primarily on accurate prediction of output values, often outperforming their statistical counterparts in dealing with sparse data sets, they usually do not provide any information regarding the confidence with which they make these predictions. Since prediction limits (PLs) indicate the extent to which one can rely on predictions for making future decisions, it is of paramount importance to estimate these limits. Two empirical PL estimation methods for FFNs are presented. The two methods differ in one fundamental aspect: the method employed for modeling the properties of the neural network model residuals. While one method uses a local approximation scheme, the other utilizes a global approximation scheme. Simulation results reveal that both methods have their relative strengths and weaknesses.
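The sketch below contrasts a global and a local empirical prediction-limit scheme in the spirit of the abstract: both derive limits from validation residuals, the global scheme from all residuals at once and the local scheme from the residuals of the k nearest validation inputs. The 95% level, k, and the heteroscedastic toy data are assumptions, not the paper's methods.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.1 * np.abs(X[:, 0]))  # input-dependent noise

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=7)
model.fit(X[:300], y[:300])
Xv, rv = X[300:], y[300:] - model.predict(X[300:])   # validation residuals

def global_pl(x_new):
    # One set of residual quantiles applied everywhere.
    lo, hi = np.quantile(rv, [0.025, 0.975])
    p = model.predict(x_new)
    return p + lo, p + hi

def local_pl(x_new, k=30):
    # Residual quantiles computed from the k nearest validation points.
    p = model.predict(x_new)
    lo, hi = [], []
    for xq in x_new:
        idx = np.argsort(np.linalg.norm(Xv - xq, axis=1))[:k]
        ql, qh = np.quantile(rv[idx], [0.025, 0.975])
        lo.append(ql)
        hi.append(qh)
    return p + np.array(lo), p + np.array(hi)

print(global_pl(np.array([[0.0]])))
print(local_pl(np.array([[0.0]])))
```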
