Similar Documents
20 similar documents found.
1.
An important issue in the design and implementation of a neural network is the sensitivity of its output to input and weight perturbations. In this paper, we discuss the sensitivity of the most popular and general feedforward neural network, the multilayer perceptron (MLP). The sensitivity is defined as the mathematical expectation of the output errors of the MLP due to input and weight perturbations with respect to all input and weight values in a given continuous interval. The sensitivity of a single neuron is discussed first, and an analytical expression is derived approximately as a function of the absolute values of the input and weight perturbations. An algorithm is then given to compute the sensitivity of the entire MLP. As intuitively expected, the sensitivity increases with input and weight perturbations, but the increase has an upper bound determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal number of neurons per layer that yields the highest sensitivity. The effect of the number of layers is quite unexpected: the sensitivity may decrease at first and then remain almost constant as the number of layers increases.
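The expectation-of-output-error quantity described above can be illustrated with a Monte Carlo estimate for a single neuron. The sigmoid activation, the uniform input range, and all variable names below are assumptions for illustration; the paper itself derives an approximate closed-form expression rather than sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mc_sensitivity(w, delta_x, delta_w, n=20000):
    """Monte Carlo estimate of E|Δy| for one sigmoid neuron: inputs drawn
    uniformly from [-1, 1]^d, each input/weight component perturbed
    uniformly within ±delta_x / ±delta_w (illustrative setup only)."""
    d = len(w)
    x = rng.uniform(-1.0, 1.0, size=(n, d))
    dx = rng.uniform(-delta_x, delta_x, size=(n, d))
    dw = rng.uniform(-delta_w, delta_w, size=(n, d))
    y0 = sigmoid(x @ w)
    # Row-wise dot product of perturbed inputs with perturbed weights.
    y1 = sigmoid(np.einsum('ij,ij->i', x + dx, w + dw))
    return float(np.mean(np.abs(y1 - y0)))

w = np.array([0.5, -0.3, 0.8])
s_small = mc_sensitivity(w, 0.01, 0.01)
s_large = mc_sensitivity(w, 0.20, 0.20)
# Sensitivity grows with the perturbation magnitude, as the abstract states.
```

Running the estimator at two perturbation levels reproduces the qualitative finding that sensitivity increases with perturbation size.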

2.
Most neural network models work accurately on their training samples, but when noise is encountered there can be significant errors if the trained network is not robust enough to resist it. Sensitivity to noise-induced perturbations in the control signal is therefore very important for predicting an output signal. The goal of this paper is to provide a methodology for signal sensitivity analysis that enables selection of an ideal multilayer perceptron (MLP) model from a group of MLP models with different parameters, i.e. a highly accurate and robust model for control problems. The paper proposes a signal sensitivity that depends on the variance of the output error due to noise in the input signals of a single-output MLP with differentiable activation functions. On the assumption that the noise arises from additive or multiplicative perturbations, the signal sensitivity of the MLP model can be calculated easily, and a method for lowering the sensitivity of the model is proposed. A control system for a magnetorheological (MR) fluid damper, a relatively new type of device that shows promise for the control of vibration, is modelled by an MLP. A large number of simulations on the MR damper's MLP model show that a much better model is selected using the proposed method.
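The variance-based signal sensitivity can be sketched as follows for the additive-noise case. The tiny tanh network, the Gaussian noise model, and the Monte Carlo estimation are assumptions for illustration; the paper derives the quantity analytically for differentiable activations.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, W1, b1, W2, b2):
    # Tiny single-output MLP with tanh hidden units (illustrative model).
    return np.tanh(x @ W1 + b1) @ W2 + b2

def signal_sensitivity(x, params, sigma, n=5000):
    """Estimate Var[y(x + noise) - y(x)] under additive Gaussian input
    noise of standard deviation sigma: a Monte Carlo stand-in for the
    paper's variance-based signal sensitivity."""
    W1, b1, W2, b2 = params
    noise = rng.normal(0.0, sigma, size=(n, x.shape[0]))
    y0 = mlp(x, W1, b1, W2, b2)
    y = mlp(x + noise, W1, b1, W2, b2)
    return float(np.var(y - y0))

d, h = 4, 8
params = (rng.normal(size=(d, h)), np.zeros(h), rng.normal(size=h), 0.0)
x = rng.normal(size=d)
v1 = signal_sensitivity(x, params, 0.01)
v2 = signal_sensitivity(x, params, 0.1)
# Larger input noise gives a larger output-error variance; comparing this
# quantity across candidate models is the selection idea in the abstract.
```

Picking the candidate MLP with the smallest such variance at comparable accuracy is the model-selection criterion the abstract describes.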

3.
Sensitivity analysis of a neural network is usually investigated after the network has been designed and trained; very few studies have treated it as a critical issue prior to network design. Piche's statistical method (1992, 1995) is useful for multilayer perceptron (MLP) design, but it imposes overly severe limitations on both input and weight perturbations. This paper generalizes Piche's method by deriving a universal expression of MLP sensitivity for antisymmetric squashing activation functions, without any restriction on input and output perturbations. Experimental results based on a three-layer MLP with 30 nodes per layer agree closely with our theoretical investigations. The effects of network design parameters such as the number of layers, the number of neurons per layer, and the chosen activation function are analyzed, providing useful information for design decision-making. Based on this sensitivity analysis, we present a network design method that determines the network structure for a given application and estimates the permitted weight range for network training.

4.
This study proposes supervised learning probabilistic neural networks (SLPNN) with three kinds of network parameters: variable weights representing the importance of input variables, the reciprocal of the kernel radius representing the effective range of data, and data weights representing data reliability. All three kinds of parameters can be adjusted through training. We tested three artificial functions as well as 15 benchmark problems, and compared SLPNN with the multilayer perceptron (MLP) and probabilistic neural networks (PNN). The results show that SLPNN is slightly more accurate than MLP and much more accurate than PNN. Moreover, the data weights can identify noisy data in the data set, and the variable weights can measure the importance of input variables; of the three kinds of network parameters, the variable weights contribute most to model accuracy.

5.
A novel structure for radial basis function networks is proposed. Unlike a traditional RBF network, this structure places weights between the input and hidden layers. These weights, which take values around unity, act as multiplication factors on the input vector and perform a linear mapping. This increases the number of free parameters, but since the weights are trainable, the overall performance of the network improves significantly. Owing to the new weight vector, we call this structure the Weighted RBF, or WRBF. A weight adjustment formula is obtained by applying the gradient descent algorithm. Two classification problems were used to evaluate the performance of the new RBF network: letter classification on a UCI dataset with 16 features (a difficult problem), and digit recognition on the HODA dataset with 64 features (an easy problem). WRBF was compared with the classic RBF and an MLP network, and our experiments show that WRBF significantly outperforms both. For example, with 200 hidden neurons, WRBF achieved a recognition rate of 92.78% on the UCI dataset while RBF and MLP achieved 83.13% and 89.25%, respectively. On the HODA dataset, WRBF reached a 97.94% recognition rate, whereas RBF achieved 97.14% and MLP 97.63%.
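A forward pass for the WRBF structure described above can be sketched as follows. Gaussian basis functions and all shapes/names are assumptions (the abstract does not specify the kernel); the key point is the extra element-wise input weights, which reduce to a classic RBF when set to one.

```python
import numpy as np

def wrbf_forward(x, input_weights, centers, widths, out_weights):
    """Forward pass of a Weighted RBF as described in the abstract:
    trainable weights near unity scale the input before the usual
    (here: Gaussian, assumed) RBF layer. Shapes are illustrative."""
    z = input_weights * x                    # element-wise linear mapping
    d2 = np.sum((z - centers) ** 2, axis=1)  # squared distances to centers
    phi = np.exp(-d2 / (2.0 * widths ** 2))  # Gaussian activations
    return phi @ out_weights

x = np.array([0.5, -1.0])
centers = np.array([[0.0, 0.0], [1.0, -1.0]])
widths = np.ones(2)
out_w = np.array([1.0, 2.0])
# With all input weights equal to 1, WRBF reduces to a classic RBF.
y_rbf = wrbf_forward(x, np.ones(2), centers, widths, out_w)
y_wrbf = wrbf_forward(x, np.array([1.1, 0.9]), centers, widths, out_w)
```

Training would adjust `input_weights` by gradient descent alongside the other parameters, which is the extra degree of freedom the paper credits for the accuracy gain.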

6.
The sensitivity of a neural network's output to its input perturbation is an important issue with both theoretical and practical values. In this article, we propose an approach to quantify the sensitivity of the most popular and general feedforward network: multilayer perceptron (MLP). The sensitivity measure is defined as the mathematical expectation of output deviation due to expected input deviation with respect to overall input patterns in a continuous interval. Based on the structural characteristics of the MLP, a bottom-up approach is adopted. A single neuron is considered first, and algorithms with approximately derived analytical expressions that are functions of expected input deviation are given for the computation of its sensitivity. Then another algorithm is given to compute the sensitivity of the entire MLP network. Computer simulations are used to verify the derived theoretical formulas. The agreement between theoretical and experimental results is quite good. The sensitivity measure can be used to evaluate the MLP's performance.

7.
The selection of weight accuracies for Madalines   Total citations: 4 (self-citations: 0, cited by others: 4)
The sensitivity of the outputs of a neural network to perturbations in its weights is an important consideration in both the design of hardware realizations and in the development of training algorithms for neural networks. In designing dense, high-speed realizations of neural networks, understanding the consequences of using simple neurons with significant weight errors is important. Similarly, in developing training algorithms, it is important to understand the effects of small weight changes to determine the required precision of the weight updates at each iteration. In this paper, an analysis of the sensitivity of feedforward neural networks (Madalines) to weight errors is considered. We focus our attention on Madalines composed of sigmoidal, threshold, and linear units. Using a stochastic model for weight errors, we derive simple analytical expressions for the variance of the output error of a Madaline. These analytical expressions agree closely with simulation results. In addition, we develop a technique for selecting the appropriate accuracy of the weights in a neural network realization. Using this technique, we compare the required weight precision for threshold versus sigmoidal Madalines. We show that for a given desired variance of the output error, the weights of a threshold Madaline must be more accurate.
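The threshold-versus-sigmoidal comparison can be illustrated with a stochastic weight-error simulation on a single unit. The Gaussian weight-error model, the uniform input distribution, and the choice of tanh/sign units are illustrative assumptions, not the paper's analytical derivation.

```python
import numpy as np

rng = np.random.default_rng(2)

def output_error_var(act, w, sigma_w, n=20000):
    """Monte Carlo variance of a single unit's output error under
    zero-mean Gaussian weight errors of std sigma_w (a stochastic
    weight-error model in the spirit of the abstract)."""
    d = len(w)
    x = rng.uniform(-1.0, 1.0, size=(n, d))
    dw = rng.normal(0.0, sigma_w, size=(n, d))
    y0 = act(x @ w)
    y = act(np.einsum('ij,ij->i', x, w + dw))
    return float(np.var(y - y0))

w = np.array([0.8, -0.5, 0.3])
sigmoidal = lambda z: np.tanh(z)
threshold = lambda z: np.sign(z)
v_sig = output_error_var(sigmoidal, w, 0.1)
v_thr = output_error_var(threshold, w, 0.1)
# The hard-threshold unit's error variance is larger here, consistent
# with the finding that threshold Madalines need more accurate weights.
```

Because a threshold unit's output jumps by 2 whenever the weight error flips the sign of the activation, its error variance at a given weight accuracy dominates the smooth sigmoidal unit's, matching the paper's conclusion.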

8.
Transparency of neural network models and input variable reduction   Total citations: 1 (self-citations: 0, cited by others: 1)
Because a neural network easily realizes a nonlinear mapping from input space to output space, practitioners often build a black-box model between input and output variables without considering the correlation between them, so the model frequently contains redundant variables, which degrades its reliability and robustness. This paper proposes a method for making the black-box character of a neural network transparent and uses it to eliminate redundant variables from the model. The method first visualizes the network with a neural interpretation diagram; it then computes the relative contribution rate of each input variable by the connection-weights method to judge its importance to the output variables; finally, an improved randomization test is applied to the connection weights and the input-variable contribution rates for significance testing and the model is pruned, with input variables whose overall contribution and relative contribution rate are both insignificant taken as the redundant set and removed, achieving transparency of the NN model together with variable selection. Experimental results show that the method increases model transparency, selects the best input variables, eliminates redundant inputs, and improves the reliability and robustness of the model. The study thus provides a new approach to making neural network models transparent and to variable reduction.
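The connection-weights importance measure mentioned above can be sketched for a single-output network: sum, over hidden units, the product of the input-to-hidden and hidden-to-output weights. This is a minimal sketch of that general idea; the function name, shapes, and normalization are assumptions, and the paper's randomization test for significance is not reproduced here.

```python
import numpy as np

def connection_weight_importance(W1, W2):
    """Connection-weights measure of each input's contribution: for a
    single-output net, the i-th input's signed contribution is
    sum_h W1[i, h] * W2[h]; a normalized absolute value serves as a
    relative contribution rate (illustrative formulation)."""
    contrib = W1 @ W2                               # signed, per input
    rel = np.abs(contrib) / np.sum(np.abs(contrib))  # relative rate
    return contrib, rel

rng = np.random.default_rng(6)
W1 = rng.normal(size=(4, 5))   # 4 inputs -> 5 hidden units
W2 = rng.normal(size=5)        # 5 hidden units -> 1 output
contrib, rel = connection_weight_importance(W1, W2)
# Inputs with near-zero relative rates are candidates for removal.
```

Inputs whose relative rate is insignificant under a randomization test would then be pruned, as the abstract describes.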

9.
Topology-constraint-free fuzzy gated neural networks for pattern recognition   Total citations: 1 (self-citations: 0, cited by others: 1)
A novel topology-constraint-free neural network architecture using a generalized fuzzy gated neuron model is presented for a pattern recognition task. Its main feature is that the network does not require weight adaptation at its input: the weights are initialized directly from the training pattern set. Eliminating the need for iterative weight adaptation schemes gives quick network set-up times, which makes fuzzy gated neural networks very attractive. The performance of the proposed network is found to be functionally equivalent to spatio-temporal feature maps under a mild technical condition. The classification performance of the fuzzy gated neural network is demonstrated on a 12-class synthetic three-dimensional (3-D) object data set, a real-world eight-class texture data set, and a real-world 12-class 3-D object data set. The results are compared with the classification accuracies obtained from a spatio-temporal feature map, an adaptive subspace self-organizing map, multilayer feedforward neural networks, radial basis function neural networks, and linear discriminant analysis. Despite the network's ability to accurately classify seen data and adequately generalize to validation data, its performance is found to be sensitive to noise perturbations due to fine fragmentation of the feature space. The paper also provides partial solutions to this robustness issue by proposing improvements to various modules of the proposed fuzzy gated neural network.

10.
This paper discusses the modelling and simulation of the fabric dyeing colour-matching process, which is nonlinear and uncertain. To address the poor results, limited accuracy, and difficulty of reaching desired outcomes with traditional colour-matching methods, an MLP neural network trained with the OWO-HWO algorithm is proposed, drawing on the characteristics of the MLP: the weights from the input layer to the hidden layer and from the hidden layer to the output layer are optimized separately, and the OWO-HWO-based MLP is used to build a colour-matching model for fabric dyeing. Simulation experiments on this model were carried out with the NuMap neural network software. The results show that the model converges quickly, is highly accurate, and achieves satisfactory colour matching for the fabric dyeing problem.

11.
A technique for modeling the multilayer perceptron (MLP) neural network, in which input and hidden units are represented by polynomial basis functions (PBFs), is presented. The MLP output is expressed as a linear combination of the PBFs and can therefore be expressed as a polynomial function of its inputs. Thus, the MLP is isomorphic to conventional polynomial discriminant classifiers or Volterra filters. The modeling technique was successfully applied to several trained MLP networks.
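The claim that an MLP behaves like a polynomial of its inputs can be checked numerically on a toy network: over a bounded input range, a low-degree polynomial fits a small tanh MLP's output to high accuracy. The network weights, the degree, and the least-squares fit below are illustrative assumptions, not the paper's PBF construction.

```python
import numpy as np

# A tiny 1-input MLP with two tanh hidden units (weights chosen arbitrarily).
W1, b1 = np.array([0.7, -0.4]), np.array([0.1, 0.2])
W2, b2 = np.array([1.5, -0.8]), 0.3

x = np.linspace(-1.0, 1.0, 200)
h = np.tanh(x[:, None] * W1 + b1)   # hidden-layer activations
y = h @ W2 + b2                     # MLP output

# Fit a degree-7 polynomial to the MLP's input-output map.
coeffs = np.polyfit(x, y, 7)
resid = np.max(np.abs(np.polyval(coeffs, x) - y))
# Over [-1, 1], the smooth tanh network is captured almost exactly by a
# low-degree polynomial, illustrating the isomorphism the abstract states.
```

The fit degrades for wider input ranges or steeper weights, which is why the paper's construction works with basis functions rather than a single global fit.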

12.
An analysis of the influence of weight and input perturbations in a multilayer perceptron (MLP) is made in this article. Quantitative measurements of fault tolerance, noise immunity, and generalization ability are provided. From the expressions obtained, it is possible to justify some previously reported conjectures and experimentally obtained results (e.g., the influence of weight magnitudes, the relation between training with noise and the generalization ability, the relation between fault tolerance and the generalization ability). The measurements introduced here are explicitly related to the mean squared error degradation in the presence of perturbations, thus constituting a selection criterion between different alternatives of weight configurations. Moreover, they allow us to predict the degradation of the learning performance of an MLP when its weights or inputs are deviated from their nominal values and thus, the behavior of a physical implementation can be evaluated before the weights are mapped on it according to its accuracy.

13.
Wang Y, Zeng X, Yeung DS, Peng Z. Neural Computation, 2006, 18(11): 2854-2877
The sensitivity of a neural network's output to its input and weight perturbations is an important measure for evaluating the network's performance. In this letter, we propose an approach to quantify the sensitivity of Madalines. The sensitivity is defined as the probability of output deviation due to input and weight perturbations with respect to overall input patterns. Based on the structural characteristics of Madalines, a bottom-up strategy is followed, along which the sensitivity of single neurons, that is, Adalines, is considered first and then the sensitivity of the entire Madaline network. By means of probability theory, an analytical formula is derived for the calculation of Adalines' sensitivity, and an algorithm is designed for the computation of Madalines' sensitivity. Computer simulations are run to verify the effectiveness of the formula and algorithm. The simulation results are in good agreement with the theoretical results.
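The probability-of-output-deviation measure for a single Adaline can be approximated by simulation. The hard-limit activation, uniform input distribution, and bounded uniform perturbations below are illustrative assumptions standing in for the paper's analytical formula.

```python
import numpy as np

rng = np.random.default_rng(3)

def adaline_sensitivity(w, delta_x, delta_w, n=50000):
    """Estimate the probability that a hard-limit Adaline's output flips
    under bounded input/weight perturbations, with inputs uniform on
    [-1, 1]^d (Monte Carlo stand-in for the derived formula)."""
    d = len(w)
    x = rng.uniform(-1.0, 1.0, size=(n, d))
    dx = rng.uniform(-delta_x, delta_x, size=(n, d))
    dw = rng.uniform(-delta_w, delta_w, size=(n, d))
    y0 = np.sign(x @ w)
    y1 = np.sign(np.einsum('ij,ij->i', x + dx, w + dw))
    return float(np.mean(y0 != y1))

w = np.array([0.6, -0.4, 0.7])
p_small = adaline_sensitivity(w, 0.02, 0.02)
p_large = adaline_sensitivity(w, 0.2, 0.2)
# The flip probability grows with the perturbation bounds; composing such
# per-Adaline probabilities layer by layer is the bottom-up strategy.
```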

14.
A new scheme of knowledge-based classification and rule generation using a fuzzy multilayer perceptron (MLP) is proposed. Knowledge collected from a data set is initially encoded among the connection weights in terms of a priori class probabilities. This encoding also incorporates hidden nodes corresponding to both the pattern classes and their complementary regions. The network architecture, in terms of both links and nodes, is then refined during training; node growing and link pruning are also employed. Rules are generated from the trained network using the input, output, and connection weights in order to justify any decision(s) reached. Negative rules, corresponding to a pattern not belonging to a class, can also be obtained; these are useful for inferencing in ambiguous cases. Results on real-life and synthetic data demonstrate that the speed of learning and the classification performance of the proposed scheme are better than those obtained with the fuzzy and conventional versions of the MLP (involving no initial knowledge encoding). Both convex and concave decision regions are considered in the process.

15.
Multilayer perceptron, fuzzy sets, and classification   Total citations: 8 (self-citations: 0, cited by others: 8)
A fuzzy neural network model based on the multilayer perceptron, using the backpropagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and other related models.

16.
A novel approach is presented to visualize and analyze decision boundaries for feedforward neural networks. First order sensitivity analysis of the neural network output function with respect to input perturbations is used to visualize the position of decision boundaries over input space. Similarly, sensitivity analysis of each hidden unit activation function reveals which boundary is implemented by which hidden unit. The paper shows how these sensitivity analysis models can be used to better understand the data being modelled, and to visually identify irrelevant input and hidden units.
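The first-order sensitivity map over input space can be sketched for a toy two-input network: the gradient magnitude of the output with respect to the inputs is large exactly along the decision boundaries. The random network, grid, and central-difference gradients below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.normal(size=(2, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=6), 0.0

def f(X):
    # Toy 2-input, 1-output tanh network (illustrative).
    return np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)

eps = 1e-4
xs = np.linspace(-2.0, 2.0, 50)
gx, gy = np.meshgrid(xs, xs)
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
dx = np.array([eps, 0.0])
dy = np.array([0.0, eps])
# |∂y/∂x| over the grid via central differences; ridges of high values
# trace the network's decision boundaries in input space.
sens = np.hypot((f(grid + dx) - f(grid - dx)) / (2 * eps),
                (f(grid + dy) - f(grid - dy)) / (2 * eps)).reshape(50, 50)
```

Rendering `sens` as an image (e.g. with matplotlib's `imshow`) gives the boundary visualization the abstract describes; repeating the computation per hidden unit attributes each boundary to the unit that implements it.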

17.
In this study we investigate a hybrid neural network architecture for modelling purposes. The proposed network is based on the multilayer perceptron (MLP) network; however, in addition to the usual hidden layers, the first hidden layer is chosen to be a centroid layer. Each unit in this new layer holds a centroid located somewhere in the input space, and the output of such a unit is the Euclidean distance between the centroid and the input. The centroid layer clearly resembles the hidden layer of radial basis function (RBF) networks, so centroid-based multilayer perceptron (CMLP) networks can be regarded as a hybrid of MLP and RBF networks. The benchmark experiments presented show that the proposed hybrid architecture is able to combine the good properties of MLP and RBF networks, resulting in fast and efficient learning and a compact network structure.
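A forward pass through the CMLP described above can be sketched in a few lines: a centroid layer producing Euclidean distances, followed by an ordinary MLP layer. All shapes, names, and the tanh/linear choices are illustrative assumptions.

```python
import numpy as np

def cmlp_forward(x, centroids, W, b, V, c):
    """Forward pass of a centroid-based MLP: the first hidden layer
    outputs Euclidean distances to centroids (as in RBF networks), and
    the remaining layers are ordinary MLP layers (shapes illustrative)."""
    d = np.linalg.norm(x - centroids, axis=1)  # centroid layer
    h = np.tanh(d @ W + b)                     # ordinary hidden layer
    return h @ V + c                           # linear output

x = np.array([0.2, -0.1])
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.array([[0.5, -0.2], [0.1, 0.3]])
b = np.zeros(2)
V = np.array([1.0, -1.0])
y = cmlp_forward(x, centroids, W, b, V, 0.0)
```

The hybrid character is visible in the code: swap the distance computation for a plain affine map and a pure MLP remains; drop the layers after it and an RBF-style network remains.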

18.
Control of the synchronous generator, also referred to as an alternator, has always been very significant in power system operation and control. Alternator output is proportional to the load angle, but as this angle increases, power system security approaches its extreme limit; hence, generators are operated well below their steady-state stability limit for secure operation of the power system. This raises the demand for efficient and fast controllers. Artificial intelligence, specifically the artificial neural network (ANN), is emerging very rapidly and has become an efficient tool for power system operation and control. An ANN requires considerable time to tune its weights, but it is fast and accurate once tuned properly. Previously, ANNs have been trained with high-dimensional input spaces or trained online; the former requires considerable time to yield the control signal, while the latter is a somewhat risky technique to apply in interconnected power systems. In this study, a multilayer perceptron (MLP) ANN trained with a low-dimensional input space is proposed to control generator excitation. Moreover, the MLP is trained offline to avert the risk of online training. The results illustrate the superiority of the proposed neurocontroller-based excitation system over conventional controller-based excitation systems.

19.
A new adaptive backpropagation (BP) algorithm based on Lyapunov stability theory for neural networks is developed in this paper. A candidate Lyapunov function V(k) of the tracking error between the output of the neural network and the desired reference signal is chosen first, and the weights of the network are then updated, from the output layer to the input layer, such that ΔV(k) = V(k) - V(k-1) < 0. The output tracking error then converges asymptotically to zero according to Lyapunov stability theory. Unlike gradient-based BP training algorithms, the new Lyapunov adaptive BP algorithm does not search for the global minimum point along the cost-function surface in weight space; instead, it constructs an energy surface with a single global minimum point through adaptive adjustment of the weights as time goes to infinity. Even when the neural network is subject to bounded input disturbances, their effects can be eliminated and asymptotic error convergence obtained. The new Lyapunov adaptive BP algorithm is then applied to the design of an adaptive filter in a simulation example, demonstrating fast error convergence and strong robustness to large bounded input disturbances.
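The ΔV(k) < 0 principle can be illustrated on the simplest possible case, a single linear unit, where V(k) = e(k)² and a normalized (NLMS-type) update provably shrinks the error on the current sample by a fixed factor. This is only a sketch of the Lyapunov idea under those assumptions, not the paper's full adaptive BP algorithm for multilayer networks.

```python
import numpy as np

rng = np.random.default_rng(5)

# For a linear unit y = w·x with V(k) = e(k)^2, the update
#   w <- w + mu * e * x / (x·x),  0 < mu < 1,
# gives e_new = (1 - mu) * e on the same input, so
#   V_new = (1 - mu)^2 * V < V  whenever e != 0  (ΔV < 0).
w_true = np.array([1.0, -2.0, 0.5])  # unknown system generating the reference
w = np.zeros(3)
mu = 0.5
decreased = []
for _ in range(30):
    x = rng.normal(size=3)
    d = w_true @ x                     # desired reference signal
    e_before = d - w @ x               # tracking error
    w = w + mu * e_before * x / (x @ x)  # Lyapunov-motivated update
    e_after = d - w @ x                # same input, updated weights
    decreased.append(e_after ** 2 <= e_before ** 2 + 1e-12)
# Each step strictly decreases V on its sample, and the weights converge
# toward w_true, i.e. the tracking error converges asymptotically.
```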

20.
Power analysis of block cipher algorithms based on MLP neural networks   Total citations: 1 (self-citations: 0, cited by others: 1)
With the wide deployment of embedded cryptographic devices, side channel analysis (SCA) has become one of their security threats: by analyzing the information leaked during the physical execution of a cryptographic algorithm, the key can be recovered and the security of the implementation evaluated. To simplify the multi-layer perceptron (MLP) network used for power analysis and to reduce the model's training parameters and training time, MLP models based on the Hamming weight (HW) and on individual bits were studied, reducing the number of output classes from 256 to 9 and 2, respectively. Power traces collected while the AES cipher was running were used to train and test the proposed MLP networks. Experimental results show that, while maintaining prediction accuracy, the model reduces the MLP's training parameters by 84% and its training time by 28%, and decreases the number of power traces needed in the key recovery phase: at minimum, a single trace suffices to recover the complete AES key. The experiments verify the effectiveness of the model, which can be used to analyze and evaluate the security of block cipher implementations.
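The class reduction described above comes straight from the leakage model: labelling an intermediate byte by its value needs 256 classes, by its Hamming weight only 9 (0 to 8 set bits), and by a single bit only 2. A minimal sketch; the table below is just the first 16 entries of the standard AES S-box, used here only to generate example labels.

```python
# First 16 entries of the AES S-box (standard values), for illustration.
AES_SBOX_HEAD = [0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5,
                 0x30, 0x01, 0x67, 0x2b, 0xfe, 0xd7, 0xab, 0x76]

def hw(v: int) -> int:
    """Hamming weight (number of set bits) of a byte."""
    return bin(v).count("1")

# 9-class labels (Hamming weight 0..8) vs 2-class labels (LSB) for the
# same intermediate values -- the output-layer reduction in the abstract.
hw_labels = [hw(v) for v in AES_SBOX_HEAD]
bit_labels = [v & 1 for v in AES_SBOX_HEAD]
```

A 9-class (or 2-class) MLP needs a far smaller output layer than a 256-class one, which is where the reported savings in parameters and training time originate.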


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号