Similar Articles
10 similar articles found (search time: 135 ms).
1.
Given a network with capacities and transit times on the arcs, the quickest flow problem asks for a "flow over time" that satisfies given demands within minimal time. In the setting of flows over time, flow on arcs may vary over time, and the transit time of an arc is the time it takes for flow to travel through this arc. In most real-world applications (e.g., road traffic, communication networks, and production systems), transit times are not fixed but depend on the current flow situation in the network. We consider the model where the transit time of an arc is given as a non-decreasing function of the rate of inflow into the arc. We prove that the quickest s-t-flow problem is NP-hard in this setting and give various approximation results, including a fully polynomial time approximation scheme (FPTAS) for the quickest multicommodity flow problem with bounded cost.
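A hedged illustration of the classical special case with *fixed* transit times (the flow-dependent model above is NP-hard): build a time-expanded network, compute a max flow for a candidate horizon, and binary-search the smallest horizon meeting the demand. The graph, arc data, and function names are illustrative, and networkx is assumed to be available.

```python
import networkx as nx  # assumed available

def max_flow_over_time(arcs, s, t, T):
    """Max amount routable from s to t within integral horizon T.
    arcs: list of (u, v, capacity, transit_time), integer transit times."""
    if T == 0:
        return 0
    G = nx.DiGraph()
    nodes = {u for a in arcs for u in a[:2]}
    for v in nodes:                              # holdover arcs: wait at a node
        for th in range(T):
            G.add_edge((v, th), (v, th + 1), capacity=float('inf'))
    for u, v, cap, tau in arcs:                  # one copy per departure time
        for th in range(T - tau + 1):
            G.add_edge((u, th), (v, th + tau), capacity=cap)
    value, _ = nx.maximum_flow(G, (s, 0), (t, T))
    return value

def quickest_flow_time(arcs, s, t, demand, T_max=200):
    lo, hi = 0, T_max                            # binary search minimal horizon
    while lo < hi:
        mid = (lo + hi) // 2
        if max_flow_over_time(arcs, s, t, mid) >= demand:
            hi = mid
        else:
            lo = mid + 1
    return lo

arcs = [('s', 'a', 2, 1), ('a', 't', 2, 2), ('s', 't', 1, 5)]
print(quickest_flow_time(arcs, 's', 't', demand=6))  # -> 5 on this toy instance
```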

2.
From the point of view of information processing, the immune system is a highly parallel and distributed intelligent system with learning, memory, and associative-retrieval capabilities. In this paper we present two immunity-based hybrid learning approaches for function approximation (or regression) problems that involve adjusting the structure and parameters of spatially localized models (e.g., radial basis function networks). The number and centers of the receptive fields for the local models are specified by immunity-based structure-adaptation algorithms, while the parameters of the local models, which enter in a linear fashion, are tuned separately using a least-squares method. The effectiveness of the procedure is demonstrated on a nonlinear function approximation problem and a nonlinear dynamical system modeling problem.
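A minimal sketch of the linear stage of such a hybrid scheme: with the receptive-field centers and width treated as already chosen (here fixed by hand in place of the immune structure-adaptation step), the output weights are fit by ordinary least squares. Data, sizes, and names are illustrative.

```python
import numpy as np

def rbf_design(X, centers, width):
    # Gaussian receptive fields: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 w^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)  # noisy target

centers = np.linspace(-3, 3, 10).reshape(-1, 1)  # stand-in for immune selection
Phi = rbf_design(X, centers, width=0.8)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear parameters by LS

print('train RMSE:', np.sqrt(np.mean((Phi @ w - y) ** 2)))
```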

3.
Supervised learning of perceptron networks is investigated as an optimization problem. It is shown that both the theoretical and the empirical error functionals achieve minima over sets of functions computable by networks with a given number n of perceptrons. Upper bounds on the rates of convergence of these minima with increasing n are derived. The bounds depend on a certain regularity of the training data, expressed in terms of variational norms of functions interpolating the data (in the case of the empirical error) and of the regression function (in the case of the expected error). The dependence of this type of regularity on dimensionality and on the magnitudes of partial derivatives is investigated. Conditions on the data are derived which guarantee that a good approximation of the global minima of the error functionals can be achieved using networks of limited complexity. The conditions are stated in terms of the oscillatory behavior of the data, measured by the product of a function of the number of variables d, which decreases exponentially fast, and the maximum of the squares of the L1-norms of the iterated partial derivatives of order d of the regression function or of some function interpolating the sample data. The results are illustrated by examples of data with small and high regularity, constructed using Boolean functions and the Gaussian function.
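A small numerical companion, under made-up data: exact minimization of the empirical error over n-perceptron networks is intractable, so this sketch upper-bounds the minima with fixed random hidden units and an optimal least-squares readout; nesting the units makes the bound non-increasing in n.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = np.cos(3 * X[:, 0]) * X[:, 1]             # smooth regression target

# Nested random perceptron units: reusing the first n columns for each n
# makes the least-squares bound monotone non-increasing in n.
W = rng.standard_normal((2, 64))
b = rng.standard_normal(64)

def empirical_error_bound(n):
    H = np.tanh(X @ W[:, :n] + b[:n])         # network with n perceptrons
    c, *_ = np.linalg.lstsq(H, y, rcond=None) # optimal outer weights
    return np.mean((H @ c - y) ** 2)

for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, empirical_error_bound(n))        # minima shrink as n grows
```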

4.
Convex incremental extreme learning machine
Guang-Bin Huang, Lei Chen. Neurocomputing, 2007, 70(16-18): 3056.
Unlike conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden nodes, IEEE Transactions on Neural Networks 17(4) (2006) 879–892] have recently proposed a new theory showing that single-hidden-layer feedforward networks (SLFNs) with randomly generated additive or radial basis function (RBF) hidden nodes (according to any continuous sampling distribution) can work as universal approximators, and that the resulting incremental extreme learning machine (I-ELM) outperforms many popular learning algorithms. I-ELM randomly generates the hidden nodes and analytically calculates the output weights of SLFNs; however, it does not recalculate the output weights of all the existing nodes when a new node is added. This paper shows that, while retaining the same simplicity, the convergence rate of I-ELM can be further improved by recalculating the output weights of the existing nodes, based on a convex optimization method, whenever a new hidden node is randomly added. Furthermore, we show that given a type of piecewise continuous computational hidden nodes (possibly not neuron-like nodes), if SLFNs can work as universal approximators with adjustable hidden node parameters, then from a function approximation point of view the hidden node parameters of such "generalized" SLFNs (including sigmoid networks, RBF networks, trigonometric networks, threshold networks, fuzzy inference systems, fully complex neural networks, high-order networks, ridge polynomial networks, wavelet networks, etc.) can actually be randomly generated according to any continuous sampling distribution. In theory, the parameters of these SLFNs can be analytically determined by ELM instead of being tuned.
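A hedged sketch of the incremental idea, with node types and stopping rules simplified: each step draws a random sigmoid node and takes a convex combination of the current network output and the new node, which implicitly rescales all earlier output weights (in contrast to plain I-ELM, which never revisits them). Data and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 1))
y = 0.5 + 0.4 * np.sin(4 * X[:, 0])           # target inside the sigmoid range

def random_node(X):                            # random additive sigmoid node
    a, b = rng.standard_normal(X.shape[1]), rng.standard_normal()
    return 1.0 / (1.0 + np.exp(-(X @ a + b)))

f = np.zeros_like(y)                           # current network output
for _ in range(50):
    h = random_node(X)
    # convex step: f <- (1 - beta) * f + beta * h, with beta minimizing the
    # residual along that segment; this rescales every earlier output weight.
    d = h - f
    beta = float(d @ (y - f)) / float(d @ d)
    beta = min(max(beta, 0.0), 1.0)            # keep the combination convex
    f = (1 - beta) * f + beta * h

print('MSE after 50 random nodes:', np.mean((f - y) ** 2))
```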

5.
Optimized approximation algorithm in neural networks without overfitting
In this paper, an optimized approximation algorithm (OAA) is proposed to address the overfitting problem in function approximation using neural networks (NNs). The algorithm avoids overfitting by means of a novel and effective stopping criterion based on the estimation of the signal-to-noise-ratio figure (SNRF). Using SNRF, which checks the goodness-of-fit of the approximation, overfitting can be automatically detected from the training error alone, without use of a separate validation set. The algorithm has been applied to the problems of optimizing the number of hidden neurons in a multilayer perceptron (MLP) and optimizing the number of learning epochs in the MLP's backpropagation training, using both synthetic and benchmark data sets. The OAA can also be used to optimize other parameters of NNs. In addition, it can be applied to function approximation using any kind of basis functions, or to learning model selection when overfitting needs to be considered.
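The heart of the method is the stopping rule: grow the model only while the training residual still contains signal. The paper's SNRF estimator is not reproduced here; as a loose stand-in, this sketch uses the lag-1 autocorrelation of the residual over sorted inputs, which is near zero for pure noise. Model family, data, and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(-1, 1, 300))
y = np.sin(3 * X) + 0.1 * rng.standard_normal(300)

def training_residual(deg):                    # grow a simple model family
    return y - np.polyval(np.polyfit(X, y, deg), X)

threshold = 2.0 / np.sqrt(len(X))              # rough pure-noise level
for deg in range(1, 20):
    e = training_residual(deg)
    e = (e - e.mean()) / e.std()
    structure = float(e[:-1] @ e[1:]) / len(e) # lag-1 autocorrelation proxy
    if abs(structure) < threshold:
        print('stop growing at degree', deg)   # residual looks like noise
        break
```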

6.
Florin. Performance Evaluation, 2003, 51(2-4): 171-190.
Large deviations papers like that of Ignatyuk et al. [Russ. Math. Surv. 49 (1994) 41–99] have shown that, asymptotically, the stationary distribution of homogeneous regulated networks is of the form $\pi(n) \approx K e^{-\langle \alpha, n \rangle}$, with the exponent coefficient $\alpha$ differing across the various "boundary influence domains" and, in some of these domains, also depending on n. In this paper, we focus on the case of constant exponents and on a subclass of networks we call "strongly skip-free" (which includes all Jackson networks and all two-dimensional skip-free networks). We conjecture that an asymptotic exponent is constant iff it corresponds to a large deviations escape path which progresses gradually (from the origin to the interior) through boundary facets whose dimension always increases by one. Solving the corresponding large deviations problem for our subclass of networks leads to a family of "local large deviation systems" (LLDSs) for the constant exponents, expressed entirely in terms of the cumulant generating function of the network. In this paper, we show that, at least for "strongly skip-free" Markovian networks with independent transition processes, the LLDS is closely related to some "local boundary equilibrium systems" (LESs) obtained by retaining from the equilibrium equations only those valid in neighborhoods of the boundary.

Since asymptotic results typically require only that the cumulant generating function be well-defined over an appropriate domain, it is natural to conjecture that these LLDSs will provide the asymptotic constant exponents regardless of any distributional assumptions made on the network.

Finally, we outline a practical recipe for combining the local approximations to produce a global large deviations approximation $\pi(n) \approx \sum_j K_j e^{-\langle \alpha_j, n \rangle}$, with the coefficients $K_j$ determined numerically.
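A worked one-dimensional analogue of extracting a decay exponent from the cumulant generating function: for a stable M/M/1 queue the stationary tail decays like $e^{-\alpha n}$, and $\alpha$ is the positive root of the cumulant generating function of the net-input process, found here by bisection. Rates are illustrative.

```python
import math

lam, mu = 1.0, 2.0                       # arrival and service rates, lam < mu

def Lambda(a):
    # cumulant generating function of the net-input process (arrivals - services)
    return lam * (math.exp(a) - 1.0) + mu * (math.exp(-a) - 1.0)

lo, hi = 1e-9, 10.0                      # bisection for the positive root
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if Lambda(mid) < 0:                  # Lambda < 0 between 0 and the root
        lo = mid
    else:
        hi = mid

print(lo, math.log(mu / lam))            # both ~0.693: tail ~ (lam/mu)^n
```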


7.
Real-world financial data is often nonlinear, comprises high-frequency multipolynomial components, and is discontinuous (piecewise continuous). Not surprisingly, such data is hard to model. Classical neural networks are unable to automatically determine the optimum model and appropriate order for financial data approximation. We address this problem by developing neuron-adaptive higher-order neural network (NAHONN) models. After introducing one-dimensional (1-D), two-dimensional (2-D), and n-dimensional NAHONN models, we present an appropriate learning algorithm. Network convergence and the universal approximation capability of NAHONNs are also established. We also introduce NAHONN group models (NAHONGs). Both NAHONNs and NAHONGs are shown to be "open box" and as such are more acceptable to financial experts than classical (closed-box) neural networks. These models are further shown to be capable of automatically finding not only the optimum model but also the appropriate order for specific financial data.
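A narrow, hedged illustration of the "appropriate order" question (the neuron-adaptive mechanism itself is not reproduced): a fixed-order higher-order unit is linear over monomial cross-terms of the inputs, and here the order is simply picked by validation error. Data and sizes are made up.

```python
import numpy as np
from itertools import combinations_with_replacement

def higher_order_features(X, order):           # monomial cross-terms up to `order`
    cols = [np.ones(len(X))]
    for k in range(1, order + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), k):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.stack(cols, axis=1)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (400, 2))
y = X[:, 0] * X[:, 1] ** 2 + 0.05 * rng.standard_normal(400)  # order-3 signal
tr, va = slice(0, 300), slice(300, 400)

for order in range(1, 7):                      # pick the order by validation
    F = higher_order_features(X, order)
    w, *_ = np.linalg.lstsq(F[tr], y[tr], rcond=None)
    print(order, np.mean((F[va] @ w - y[va]) ** 2))  # minimum near order 3
```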

8.
We consider the problem of learning the dependence of one random variable on another from a finite string of independently identically distributed (i.i.d.) copies of the pair. The problem is first converted to that of learning a function of the latter random variable and an independent random variable uniformly distributed on the unit interval. However, this cannot be achieved using the usual function-learning techniques, because samples of the uniformly distributed random variable are not available. We propose a novel loss function whose minimizer yields an approximation to the needed function. Through successive approximation results (suggested by the proposed loss function), a suitable class of functions, represented by combination feedforward neural networks, is selected as the class to learn from. These results are also extended to countable as well as continuous state-space Markov chains. The effectiveness of the proposed method is indicated through simulation studies.
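One standard way to realize a representation Y = f(X, U) with U uniform on (0, 1), not the paper's loss, is to note that f(x, u) is the conditional u-quantile of Y given X = x and fit it with the pinball loss. This hedged sketch does so with linear features on synthetic heteroscedastic data.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
X = rng.uniform(0, 1, n)
Y = 2 * X + (0.2 + 0.3 * X) * rng.standard_normal(n)   # heteroscedastic data
feats = np.stack([np.ones(n), X], axis=1)

def fit_quantile(u, steps=3000, lr=0.05):
    w = np.zeros(2)
    for _ in range(steps):                 # subgradient descent on pinball loss
        r = Y - feats @ w
        g = -feats.T @ np.where(r > 0, u, u - 1.0) / n
        w -= lr * g
    return w

for u in (0.1, 0.5, 0.9):                  # f(x, u) approximates the u-quantile
    w = fit_quantile(u)
    print(u, w)                            # intercept and slope vary with u
```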

9.
The problem of counting the number of solutions to a constraint network (CN) (also called a constraint satisfaction problem, CSP) is rephrased in terms of probability updating in Bayes networks. Approximating the probabilities in Bayes networks is a problem which has been studied for a while, and may well provide a good approximation to counting the number of solutions. We use a simple approximation based on independence, and show that it is correct for tree-structured CNs. For other CNs, it is less optimistic than a spanning-tree approximation suggested in prior work. Experiments show that the Bayes-net estimate is more accurate on average than competing methods (the spanning-tree approximation, as well as a scheme based on a product of all compatible pairs of values). We present empirical evidence that our approximation may also be a useful (value-ordering) search heuristic for finding a single solution to a constraint satisfaction problem.
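A hedged sketch of the crude product-of-compatible-pairs estimate mentioned above, compared against brute-force counting; the paper's Bayes-net scheme, which conditions along the network structure and is exact on trees, is not reproduced. Domains and constraints are illustrative.

```python
from itertools import product

domains = {'a': [0, 1, 2], 'b': [0, 1, 2], 'c': [0, 1, 2]}
constraints = {('a', 'b'): lambda x, y: x < y,     # tree-structured: a-b, b-c
               ('b', 'c'): lambda x, y: x <= y}

def pair_product_estimate(domains, constraints):
    est = 1.0
    for d in domains.values():                     # start from all assignments
        est *= len(d)
    for (u, v), ok in constraints.items():         # damp by allowed-pair fraction
        allowed = sum(ok(x, y) for x in domains[u] for y in domains[v])
        est *= allowed / (len(domains[u]) * len(domains[v]))
    return est

def exact_count(domains, constraints):
    names = list(domains)
    return sum(all(ok(t[names.index(u)], t[names.index(v)])
                   for (u, v), ok in constraints.items())
               for t in product(*domains.values()))

print(pair_product_estimate(domains, constraints))  # 6.0: an overestimate here
print(exact_count(domains, constraints))            # 4
```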

10.
Neural networks have been used successfully to determine small-particle properties from knowledge of the scattered light, an inverse light scattering problem. This type of problem is inherently difficult to solve, as it is represented by a highly ill-posed function mapping. This paper presents a technique that solves the inverse light scattering problem for spheres using radial basis function (RBF) neural networks. A two-stage network architecture is arranged to enhance the network's approximation capability. In addition, a new approach to computing basis function parameters with respect to the inverse scattering problem is demonstrated. The technique is evaluated on noise-free data through simulations, in which a minimum approximation accuracy of 99.06% is achieved. A comparison is made between the least-squares and orthogonal least-squares training methods.
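A hedged sketch of the inversion pipeline: generate (pattern, radius) pairs from a forward model, then fit an RBF network mapping patterns back to the radius. The forward model below is a made-up smooth stand-in rather than Mie scattering, and the two-stage architecture is reduced to a single RBF stage with data-subset centers and least-squares output weights.

```python
import numpy as np

rng = np.random.default_rng(6)
angles = np.linspace(0, np.pi, 32)

def forward(r):                           # toy "scattered intensity" pattern
    return np.cos(np.outer(r, angles) * 4.0) / (1.0 + np.outer(r, np.ones_like(angles)))

r_train = rng.uniform(0.5, 2.0, 400)      # sphere radii (the unknowns)
P_train = forward(r_train)

centers = P_train[rng.choice(400, 40, replace=False)]  # centers from the data
width = np.median(np.linalg.norm(P_train[:50, None] - centers[None], axis=2))

def design(P):                            # Gaussian RBF design matrix
    d2 = ((P[:, None, :] - centers[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

w, *_ = np.linalg.lstsq(design(P_train), r_train, rcond=None)

r_test = rng.uniform(0.5, 2.0, 100)       # noise-free test, as in the paper
pred = design(forward(r_test)) @ w
print('mean relative error %:', 100 * np.mean(np.abs(pred - r_test) / r_test))
```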
