Similar Articles
Found 20 similar articles (search time: 46 ms)
1.
The selection of weight accuracies for Madalines   (total citations: 4; self-citations: 0; citations by others: 0)
The sensitivity of the outputs of a neural network to perturbations in its weights is an important consideration both in the design of hardware realizations and in the development of training algorithms for neural networks. In designing dense, high-speed realizations of neural networks, it is important to understand the consequences of using simple neurons with significant weight errors. Similarly, in developing training algorithms, it is important to understand the effects of small weight changes in order to determine the required precision of the weight updates at each iteration. In this paper, we analyze the sensitivity of feedforward neural networks (Madalines) to weight errors, focusing on Madalines composed of sigmoidal, threshold, and linear units. Using a stochastic model for weight errors, we derive simple analytical expressions for the variance of the output error of a Madaline. These analytical expressions agree closely with simulation results. In addition, we develop a technique for selecting the appropriate accuracy of the weights in a neural network realization. Using this technique, we compare the required weight precision for threshold versus sigmoidal Madalines and show that, for a given desired variance of the output error, the weights of a threshold Madaline must be more accurate.
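The paper's variance expressions are analytical; purely as an illustration of the quantity being derived, the output-error variance under stochastic weight errors can be estimated by Monte Carlo for a small two-layer network (the architecture, layer sizes, and perturbation model below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def madaline_output(x, W1, W2, act):
    # two-layer feedforward network: nonlinear hidden layer, linear output
    return act(W1 @ x) @ W2

def output_error_variance(act, sigma_w, n_trials=2000, n_in=8, n_hidden=16):
    # Monte Carlo estimate of the output-error variance under i.i.d.
    # zero-mean Gaussian weight perturbations with standard deviation sigma_w
    W1 = rng.standard_normal((n_hidden, n_in))
    W2 = rng.standard_normal(n_hidden)
    x = rng.standard_normal(n_in)
    y0 = madaline_output(x, W1, W2, act)
    errors = [madaline_output(x,
                              W1 + sigma_w * rng.standard_normal(W1.shape),
                              W2 + sigma_w * rng.standard_normal(W2.shape),
                              act) - y0
              for _ in range(n_trials)]
    return np.var(errors)

v_small = output_error_variance(np.tanh, 0.01)   # small weight errors
v_large = output_error_variance(np.tanh, 0.10)   # 10x larger weight errors
```

As the analysis predicts, the estimated variance grows with the weight-error magnitude; swapping `np.tanh` for a hard threshold allows the same comparison between unit types that the paper makes analytically.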

2.
An important issue in the design and implementation of a neural network is the sensitivity of its output to input and weight perturbations. In this paper, we discuss the sensitivity of the most popular and general feedforward neural network, the multilayer perceptron (MLP). The sensitivity is defined as the mathematical expectation of the output errors of the MLP due to input and weight perturbations with respect to all input and weight values in a given continuous interval. The sensitivity for a single neuron is discussed first, and an approximate analytical expression, a function of the absolute values of the input and weight perturbations, is derived. Then an algorithm is given to compute the sensitivity for the entire MLP. As intuitively expected, the sensitivity increases with input and weight perturbations, but the increase has an upper bound that is determined by the structural configuration of the MLP, namely the number of neurons per layer and the number of layers. There exists an optimal value for the number of neurons in a layer, which yields the highest sensitivity value. The effect caused by the number of layers is quite unexpected: the sensitivity of a neural network may decrease at first and then remain almost constant as the number of layers increases.

3.
In a neural network, many different sets of connection weights can approximately realize an input-output mapping, and the sensitivity of the network varies depending on the set of weights. For selecting weights with lower sensitivity, or for estimating output perturbations in an implementation, it is important to measure the sensitivity with respect to the weights. A sensitivity depending on the weight set in a single-output multilayer perceptron (MLP) with differentiable activation functions is proposed. Formulas are derived to compute the sensitivity arising from additive/multiplicative weight perturbations or input perturbations for a specific input pattern. The concept of sensitivity is extended so that it can be applied to any input pattern, and a few sensitivity measures for the multiple-output MLP are suggested. To verify the validity of the proposed sensitivities, computer simulations have been performed, showing good agreement between theoretical and simulation outcomes for small weight perturbations.
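For differentiable activation functions, sensitivity to a small additive weight perturbation is, to first order, a gradient inner product; a minimal sketch for a single sigmoidal neuron (a generic first-order illustration, not the paper's formulas):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.standard_normal(5)           # trained weights (illustrative)
x = rng.standard_normal(5)           # a specific input pattern

y = sigmoid(w @ x)
grad_w = y * (1.0 - y) * x           # dy/dw for the sigmoid neuron

dw = 1e-3 * rng.standard_normal(5)      # small additive weight perturbation
dy_actual = sigmoid((w + dw) @ x) - y   # exact output deviation
dy_linear = grad_w @ dw                 # first-order sensitivity estimate
```

For small perturbations the linear estimate tracks the exact output deviation closely, which is why weight-set-dependent sensitivity formulas of this kind are accurate in the small-perturbation regime.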

4.
The sensitivity of a neural network's output to its input perturbation is an important issue of both theoretical and practical value. In this article, we propose an approach to quantify the sensitivity of the most popular and general feedforward network, the multilayer perceptron (MLP). The sensitivity measure is defined as the mathematical expectation of the output deviation due to expected input deviation with respect to all input patterns in a continuous interval. Based on the structural characteristics of the MLP, a bottom-up approach is adopted. A single neuron is considered first, and algorithms with approximately derived analytical expressions that are functions of the expected input deviation are given for the computation of its sensitivity. Then another algorithm is given to compute the sensitivity of the entire MLP network. Computer simulations are used to verify the derived theoretical formulas, and the agreement between theoretical and experimental results is quite good. The sensitivity measure can be used to evaluate the MLP's performance.

5.
An important consideration when applying neural networks to pattern recognition is the sensitivity to weight perturbation or to input errors. In this paper, we analyze the sensitivity of single hidden-layer networks with threshold functions. In the presence of weight perturbation or input errors, the probability of inversion error for an output neuron is derived as a function of the trained weights, the input pattern, and the variance of the weight perturbation or the bit-error probability of the input pattern. The derived results are verified with a simulation of a Madaline recognizing handwritten digits. The results show that the sensitivity of trained networks differs markedly from that of networks with random weights.
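A rough sketch of how such an inversion-error probability can be estimated empirically for one threshold output unit (the weights, the bipolar input pattern, and the bit-error model below are illustrative assumptions, not the trained Madaline of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def inversion_probability(w, x, bit_error_p, n_trials=20000):
    # Monte Carlo estimate of the probability that flipping each +/-1
    # input bit independently with probability bit_error_p inverts the
    # sign of the threshold neuron's output.
    y0 = np.sign(w @ x)
    flips = rng.random((n_trials, x.size)) < bit_error_p
    x_noisy = np.where(flips, -x, x)   # a flipped bit changes sign
    y = np.sign(x_noisy @ w)
    return np.mean(y != y0)

w = rng.standard_normal(16)            # illustrative trained weights
x = rng.choice([-1.0, 1.0], size=16)   # one bipolar input pattern

p_low = inversion_probability(w, x, 0.01)   # 1% bit-error probability
p_high = inversion_probability(w, x, 0.20)  # 20% bit-error probability
```

The estimate depends on the trained weights and the particular pattern through the decision margin, which is exactly the dependence the derived formula captures.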

6.
Computation of Adalines' sensitivity to weight perturbation   (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, the sensitivity of Adalines to weight perturbation is discussed. Owing to the discrete nature of an Adaline's input and output, the sensitivity is defined as the probability of an Adaline's erroneous outputs due to weight perturbation with respect to all possible inputs. By means of a hypercube model and analytical geometry, a heuristic algorithm is given that accurately computes the sensitivity. The accuracy of the algorithm is verified by computer simulations.
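For a small number of inputs, the sensitivity defined here can even be computed by brute-force enumeration of the hypercube, which is a useful check on any faster method; a sketch with illustrative weights (the paper's heuristic algorithm avoids this exponential cost):

```python
import itertools
import numpy as np

def adaline_sensitivity(w, dw):
    # Exact sensitivity of an Adaline: the fraction of all 2^n inputs in
    # {-1, +1}^n whose binary output flips when w is perturbed to w + dw.
    n = len(w)
    flips = 0
    for bits in itertools.product([-1.0, 1.0], repeat=n):
        x = np.array(bits)
        if np.sign(w @ x) != np.sign((w + dw) @ x):
            flips += 1
    return flips / 2 ** n

w = np.array([0.9, -0.4, 0.3, 0.2, -0.7])     # illustrative weights
s_zero = adaline_sensitivity(w, np.zeros(5))  # no perturbation
s = adaline_sensitivity(w, np.array([0.0, 0.0, 0.0, 0.0, 1.37]))
```

With no perturbation the sensitivity is zero, and a large perturbation of one weight flips the output on some, but not all, of the 32 hypercube vertices.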

7.
Sensitivity analysis of a neural network is mainly investigated after the network has been designed and trained; very few studies have considered it as a critical issue prior to network design. Piche's statistical method (1992, 1995) is useful for multilayer perceptron (MLP) design, but it imposes overly severe limitations on both input and weight perturbations. This paper attempts to generalize Piche's method by deriving a universal expression of MLP sensitivity for antisymmetric squashing activation functions, without any restriction on the input and weight perturbations. Experimental results based on a three-layer MLP with 30 nodes per layer agree closely with our theoretical results. The effects of the network design parameters, such as the number of layers, the number of neurons per layer, and the chosen activation function, are analyzed, and they provide useful information for network design decision-making. Based on the sensitivity analysis of the MLP, we present a network design method that, for a given application, determines the network structure and estimates the permitted weight range for network training.

8.
Multidimensional density shaping by sigmoids   (total citations: 1; self-citations: 0; citations by others: 1)
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for real-time prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.

9.
This article defines input perturbations so that an algorithm designed under certain restrictions on the input can execute on arbitrary instances. A syntactic definition of perturbations is proposed and certain properties are specified under which an algorithm executed on perturbed input produces an output from which the exact answer can be recovered. A general framework is adopted for linear perturbations, which are efficient from the point of view of worst-case complexity. The deterministic scheme of Emiris and Canny [1] was the first efficient scheme and is extended in a consistent manner, most notably to the InSphere primitive. We introduce a variant scheme, applicable to a restricted class of algorithms, which is almost optimal in terms of algebraic as well as bit complexity. Neither scheme requires any symbolic computation and both are simple to use as illustrated by our implementation of a convex hull algorithm in arbitrary dimension. Empirical results and a concrete application in robotics are presented. Received June 9, 1994; revised March 22, 1995, March 5, 1996, and March 21, 1996.

10.
To address the power-series growth in the computational complexity of Volterra nonlinear filtering, a new second-order adaptive Volterra filtering algorithm based on set-membership filtering is proposed for noise with an α-stable distribution. Because the objective function of set-membership filtering takes all input/desired-output signal pairs into account, the weight vector of the Volterra filter is updated only when the pth power of the error magnitude exceeds a threshold. This not only effectively reduces the computational complexity but also improves the adaptive algorithm's robustness to correlation in the input signal; the weight-vector update formula is derived. Simulation results show that the algorithm has low computational complexity, converges quickly, and is strongly robust to noise and to correlation in the input signal.
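A minimal sketch of the selective-update idea, assuming a memory-2 second-order Volterra filter, a noiseless system, and an NLMS-style correction (the paper's α-stable noise setting and exact update rule are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(3)

def volterra2_features(x, n):
    # second-order Volterra input with memory 2: linear terms
    # x(n), x(n-1) plus all quadratic products
    u0, u1 = x[n], x[n - 1]
    return np.array([u0, u1, u0 * u0, u0 * u1, u1 * u1])

h_true = np.array([1.0, -0.5, 0.3, 0.2, -0.1])  # unknown system (illustrative)

x = rng.standard_normal(3000)
h = np.zeros(5)
gamma, mu, updates = 0.1, 0.5, 0      # error bound gamma, step size mu
for n in range(1, len(x)):
    u = volterra2_features(x, n)
    e = h_true @ u - h @ u             # a-priori output error
    if abs(e) > gamma:                 # set-membership test: adapt only when
        h += mu * e * u / (u @ u + 1e-8)  # the error bound is violated
        updates += 1

final_err = np.linalg.norm(h - h_true)
```

Once the filter is inside the error bound, most samples trigger no update at all, which is the source of the complexity reduction described above.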

11.
In this paper, a new kind of multivariate global sensitivity index based on the energy distance is proposed. The covariance-decomposition-based index has been widely used for multivariate global sensitivity analysis; however, it considers only the variance of the multivariate model output and ignores the correlation between different outputs. The proposed index considers the whole probability distribution of the dynamic output via the characteristic function and thus contains more information about the uncertainty than the covariance-decomposition-based index. The multivariate probability-integral-transformation-based index is an extension of the popular moment-independent sensitivity index; although it considers the whole probability distribution of the dynamic output, it is difficult to estimate the joint cumulative distribution function of the dynamic output. The proposed sensitivity index can be easily estimated, especially for models with high-dimensional outputs. Compared with the classic sensitivity indices, the proposed index can be easily applied to dynamic systems and yields reasonable results. An efficient method based on the idea of the given-data method is used to estimate the proposed index with only one set of input-output samples. Numerical and engineering examples are employed to compare the proposed index with the covariance-decomposition-based index. The results show that an input variable may affect the whole probability distribution and the variance of the dynamic model output differently, since the proposed index and the covariance-decomposition-based index measure the effects of input variables on the whole distribution and on the variance, respectively.
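A sketch of the underlying ingredient, the sample energy distance between sets of multivariate output samples, applied to a toy two-output model with a crude fix-one-input conditional (not the paper's given-data estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

def energy_distance(a, b):
    # sample energy distance between two multivariate samples a, b:
    # 2 E||A - B|| - E||A - A'|| - E||B - B'||
    def mean_pairwise(u, v):
        diff = u[:, None, :] - v[None, :, :]
        return np.mean(np.sqrt(np.sum(diff ** 2, axis=-1)))
    return 2 * mean_pairwise(a, b) - mean_pairwise(a, a) - mean_pairwise(b, b)

def model(x1, x2):
    # toy model with a 2-dimensional ("dynamic") output; x1 dominates
    return np.column_stack([x1 + 0.1 * x2, np.sin(x1) + 0.1 * x2])

n = 300
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
y = model(x1, x2)
y_fix_x1 = model(np.zeros(n), x2)   # crude conditional: x1 pinned at its mean
y_fix_x2 = model(x1, np.zeros(n))   # crude conditional: x2 pinned at its mean

s1 = energy_distance(y, y_fix_x1)   # effect of x1 on the whole output law
s2 = energy_distance(y, y_fix_x2)   # effect of x2
```

Because the energy distance compares whole multivariate distributions, the dominant input x1 produces the larger index, without ever estimating a joint CDF.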

12.
A Growing and Pruning Method for Radial Basis Function Networks   (total citations: 1; self-citations: 0; citations by others: 1)
A recently published generalized growing and pruning (GGAP) training algorithm for radial basis function (RBF) neural networks is studied and modified. GGAP is a resource-allocating network (RAN) algorithm, which means that a created network unit that consistently makes little contribution to the network's performance can be removed during training. GGAP states a formula for computing the significance of the network units, which requires a d-fold numerical integration for an arbitrary probability density function $p(\mathbf{x})$ of the input data $\mathbf{x} \in \mathbf{R}^{d}$. In this work, the GGAP formula is approximated using a Gaussian mixture model (GMM) for $p(\mathbf{x})$, and an analytical solution of the approximated unit significance is derived. This makes it possible to employ the modified GGAP for input data having a complex and high-dimensional $p(\mathbf{x})$, which was not possible with the original GGAP. The results of an extensive experimental study show that the modified algorithm outperforms the original GGAP, achieving both a lower prediction error and reduced complexity of the trained network.

13.
Properties of Sensitivity Analysis of Bayesian Belief Networks   (total citations: 1; self-citations: 0; citations by others: 1)
The assessments for the various conditional probabilities of a Bayesian belief network are inevitably inaccurate, affecting the reliability of its output. By subjecting the network to a sensitivity analysis with respect to its conditional probabilities, the reliability of its output can be investigated. Unfortunately, straightforward sensitivity analysis of a belief network is highly time-consuming. In this paper, we show that by qualitative considerations several analyses can be identified as uninformative, since the conditional probabilities under study cannot affect the output. In addition, we show that the analyses that are informative comply with simple mathematical functions. More specifically, we show that a belief network's output can be expressed as a quotient of two functions that are linear in the conditional probability under study. These properties considerably reduce the computational burden of sensitivity analysis of Bayesian belief networks.
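The quotient-of-linear-functions property is easy to check on a two-node network A → B; the probabilities below are illustrative:

```python
import numpy as np

def posterior(t, p_a=0.3, p_b_given_not_a=0.2):
    # two-node belief network A -> B with t = P(B=1 | A=1);
    # returns (numerator, denominator, posterior P(A=1 | B=1))
    num = p_a * t                                # P(A=1, B=1), linear in t
    den = p_a * t + (1 - p_a) * p_b_given_not_a  # P(B=1), linear in t
    return num, den, num / den

# second differences vanish exactly for functions linear in t
ts = np.array([0.2, 0.5, 0.8])
nums, dens, posts = zip(*(posterior(t) for t in ts))
lin_num = nums[0] - 2 * nums[1] + nums[2]
lin_den = dens[0] - 2 * dens[1] + dens[2]
curv_post = posts[0] - 2 * posts[1] + posts[2]
```

The numerator P(A=1, B=1) and denominator P(B=1) are each linear in the studied conditional probability t, so their second differences vanish, while the posterior itself, being their quotient, is visibly nonlinear.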

14.
Design of an RQC Controller for Active Queue Management   (total citations: 1; self-citations: 0; citations by others: 1)
Based on the idea that the input/output rates and the queue length in a network can jointly determine a more accurate drop probability, an AQM algorithm, called RQC, is proposed that decides the packet drop/marking probability from the input/output rates and the queue length. Simulations comparing the algorithm with RED and PI demonstrate the advantages of the RQC control algorithm.
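A hypothetical sketch of a drop-probability rule combining a rate-mismatch term with a queue-length term, in the spirit described above (the functional form, gains, and clipping are illustrative assumptions, not the RQC controller of the paper):

```python
def rqc_drop_probability(rate_in, rate_out, q_len, q_ref,
                         k_rate=0.5, k_queue=0.01):
    # Hypothetical RQC-style marking probability: combine the relative
    # input/output rate mismatch with the queue-length error.
    # The gains k_rate and k_queue are illustrative, not from the paper.
    rate_term = k_rate * max(rate_in - rate_out, 0.0) / max(rate_out, 1e-9)
    queue_term = k_queue * (q_len - q_ref)
    return min(max(rate_term + queue_term, 0.0), 1.0)

# light load: arrivals below capacity, short queue -> no drops
p_idle = rqc_drop_probability(rate_in=90, rate_out=100, q_len=20, q_ref=50)
# congestion: arrivals above capacity, long queue -> aggressive marking
p_congested = rqc_drop_probability(rate_in=150, rate_out=100, q_len=120, q_ref=50)
```

Using both signals lets the controller react to incipient congestion (rate mismatch) before the queue grows, rather than waiting on queue length alone as RED does.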

15.
16.
With the growth of internet protocol (IP) traffic, the performance of routers has become an important issue in internetworking. In this paper we examine the matching algorithm in a gigabit router that has input queues with virtual output queueing. Dynamic queue scheduling is also proposed to reduce packet delay and packet loss probability. Port partitioning is employed to reduce the computational burden of the scheduler in a switch that matches the input and output ports for fast packet switching. Each port is divided into two groups such that the matching algorithm is implemented within each pair of groups in parallel, and the matching is performed by exchanging the pair of groups at every time slot. Two algorithms, maximal weight matching by port partitioning (MPP) and modified maximal weight matching by port partitioning (MMPP), are presented. In dynamic queue scheduling, a popup decision rule for each delay-critical packet is made to reduce both the delay of delay-critical packets and the loss probability of loss-critical packets. Computational results show that MMPP has the lowest delay and requires the least buffer size. The throughput is shown to be linear in the packet arrival rate, which can be achieved under a highly efficient matching algorithm. The dynamic queue scheduling is shown to be highly effective when the occupancy of the input buffer is relatively high.

Scope and purpose: To cope with increasing internet traffic, it is necessary to improve the performance of routers. To accelerate switching from input ports to output ports in the router, partitioning of ports and dynamic queueing are proposed. Input and output ports are partitioned into two groups A/B and a/b, respectively. The matching for packet switching is performed between group pairs (A, a) and (B, b) in parallel at one time slot, and between (A, b) and (B, a) at the next time slot. Dynamic queueing is proposed at each input port to reduce packet delay and packet loss probability by employing the popup decision rule and applying it to each delay-critical packet. The partitioning of ports is shown to be highly effective in terms of delay, required buffer size, and throughput. The dynamic queueing also demonstrates good performance when the traffic volume is high.

17.
In this paper we analyze how supervised learning occurs in ecological neural networks, i.e. networks that interact with an autonomous external environment and, therefore, at least partially determine their own input through their behavior. Using an evolutionary method for selecting good teaching inputs we surprisingly find that to obtain a desired output X it is better to use a teaching input different from X. To explain this fact we claim that teaching inputs in ecological networks have two different effects: (a) to reduce the discrepancy between the actual output of the network and the teaching input, and (b) to modify the network's behavior and, as a consequence, the network's learning experiences. Evolved teaching inputs appear to represent a compromise between these two needs. We finally show that evolved teaching inputs that are allowed to change during the learning process function differently at different stages of learning, first giving more weight to (b) and, later on, to (a).

18.
For system identification problems in which both the input and the output observations are contaminated by noise, a robust total least squares (TLS) adaptive identification algorithm is proposed. Building on a study of the TLS problem and of the Rayleigh quotient (RQ) of a vector and its properties, the algorithm takes the RQ of the identified system's augmented weight vector as the loss function and derives an adaptive iteration for the weight vector from the steepest-descent principle. The gradient is then corrected through an analysis of the weight-vector norm based on stochastic discrete learning laws, which improves robustness to noise and yields a noise-robust TLS adaptive identification algorithm. The convergence of the algorithm is studied. Simulation results show that its robustness to noise and its steady-state convergence accuracy are clearly higher than those of other methods of the same kind, that a larger learning factor can be used, and that it retains good convergence in high-noise environments.
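The core idea, minimizing the Rayleigh quotient of the augmented data correlation matrix by gradient descent and reading the TLS weights off the minor eigenvector, can be sketched as follows (a plain steepest-descent sketch on batch data, without the paper's stochastic gradient correction):

```python
import numpy as np

rng = np.random.default_rng(5)

# errors-in-variables data: both the regressors and the output are noisy
n, d = 500, 3
w_true = np.array([1.0, -2.0, 0.5])
X0 = rng.standard_normal((n, d))
X = X0 + 0.05 * rng.standard_normal((n, d))      # noisy inputs
y = X0 @ w_true + 0.05 * rng.standard_normal(n)  # noisy outputs

Z = np.column_stack([X, y])
C = Z.T @ Z / n                  # augmented correlation matrix

# steepest descent on the Rayleigh quotient r(v) = v'Cv / v'v
v = rng.standard_normal(d + 1)
v /= np.linalg.norm(v)
for _ in range(5000):
    r = v @ C @ v                        # RQ at the current unit-norm iterate
    v -= 0.1 * 2.0 * (C @ v - r * v)     # gradient of r(v) at ||v|| = 1
    v /= np.linalg.norm(v)               # keep the iterate well scaled

w_tls = -v[:d] / v[d]            # TLS weights from the minor eigenvector

# reference: classical TLS via the smallest right singular vector of Z
Vt = np.linalg.svd(Z, full_matrices=False)[2]
w_svd = -Vt[-1, :d] / Vt[-1, d]
```

The descent iterate converges to the eigenvector of the smallest eigenvalue of C, so the recovered weights match the classical SVD-based TLS solution.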

19.
Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Though the standard RLS algorithm has an implicit weight decay term in its energy function, the weight decay effect decreases linearly as the number of learning epochs increases, so the decay diminishes as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first algorithm, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function in which the weight decay effect remains constant, irrespective of the number of learning epochs. The second version, the input perturbation RLS (IPRLS) algorithm, is derived by requiring robustness of its prediction performance to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
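For reference, the baseline exponentially weighted RLS recursion that both variants modify can be sketched for a simple linear-in-the-weights model (all values illustrative; the TWDRLS and IPRLS modifications themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(6)

# baseline exponentially weighted RLS identifying a linear model
d = 4
w_true = np.array([0.5, -1.0, 2.0, 0.1])  # illustrative target weights
w = np.zeros(d)
P = 1e3 * np.eye(d)                       # inverse-correlation estimate
lam = 0.99                                # forgetting factor

for _ in range(500):
    x = rng.standard_normal(d)
    y = w_true @ x + 0.01 * rng.standard_normal()  # noisy desired output
    k = P @ x / (lam + x @ P @ x)        # gain vector
    w = w + k * (y - w @ x)              # update from the a-priori error
    P = (P - np.outer(k, x @ P)) / lam   # Riccati update of P

err = np.linalg.norm(w - w_true)
```

The implicit decay mentioned above enters through the large initial P: its regularizing effect is washed out as P shrinks with accumulated data, which is precisely what TWDRLS's modified energy function is designed to prevent.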

20.
Random perturbation models for boundary extraction sequence   (total citations: 2; self-citations: 0; citations by others: 2)
Computer vision algorithms are composed of different sub-algorithms, often applied in sequence. Determining the performance of a complete computer vision algorithm is possible if the performance of each of its sub-algorithm constituents is given. Characterizing the performance of an algorithm means establishing the correspondence between the random variations and imperfections in the output data and the random variations and imperfections in the input data. In this paper we illustrate how random perturbation models can be set up for a vision algorithm sequence involving edge finding, edge linking, and gap filling. Starting with an appropriate noise model for the input data, we derive random perturbation models for the output data at each stage of our example sequence. Using the derived perturbation model for the edge detector output, we illustrate how pixel noise can be successively propagated to derive an error model for the boundary extraction output. It is shown that the fragmentation of an ideal boundary can be described by an alternating renewal process and that the parameters of the renewal process are related to the probability of correct detection and grouping at the edge linking step. It is also shown that the characteristics of random segments generated due to gray-level noise are functions of the probability of false alarm of the edge detector. Theoretical results are validated through systematic experiments.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号