20 similar documents found; search time: 0 ms
1.
We study approximation algorithms and hardness of approximation for several versions of the problem of packing Steiner trees.
For packing edge-disjoint Steiner trees of undirected graphs, we show APX-hardness for four terminals. For packing Steiner-node-disjoint Steiner trees of undirected graphs, we show a logarithmic hardness result, and give an approximation guarantee of O(√n log n), where n denotes the number of nodes. For the directed setting (packing edge-disjoint Steiner trees of directed graphs), we show a hardness result of Θ(m^(1/3−ɛ)) and give an approximation guarantee of O(m^(1/2+ɛ)), where m denotes the number of edges. We have similar results for packing Steiner-node-disjoint priority Steiner trees of undirected graphs.
Supported by NSERC Grant No. OGP0138432.
Supported by an NSERC postdoctoral fellowship, Department of Combinatorics and Optimization at University of Waterloo, and a University start-up fund at University of Alberta.
2.
Numerous studies have addressed nonlinear functional approximation by multilayer perceptrons (MLPs) and RBF networks as a special case of the more general mapping problem. The performance of both these supervised network models intimately depends on the efficiency of their learning process. This paper presents an unsupervised recurrent neural network, based on the recurrent Mean Field Theory (MFT) network model, that finds a least-squares approximation to an arbitrary L² function, given a set of Gaussian radially symmetric basis functions (RBFs). Essential is the reformulation of RBF approximation as a problem of constrained optimisation. A new concept of adiabatic network organisation is introduced. Together with an adaptive mechanism of temperature control, this allows the network to build a hierarchical multiresolution approximation with preservation of the global optimisation characteristics. A revised problem mapping results in a position-invariant local interconnectivity pattern, which makes the network attractive for electronic implementation. The dynamics and performance of the network are illustrated by numerical simulation.
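Once the Gaussian basis is fixed, the reformulation above reduces to an ordinary linear least-squares problem. A minimal sketch of that underlying problem (the MFT network itself and its adiabatic organisation are not reproduced; the centres, width, and target function are illustrative assumptions):

```python
import numpy as np

def rbf_design(x, centres, width):
    """Design matrix of Gaussian radially symmetric basis functions."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

x = np.linspace(-1.0, 1.0, 200)
target = np.sin(np.pi * x)            # an arbitrary L2 target on [-1, 1]
centres = np.linspace(-1.0, 1.0, 15)  # hypothetical layout of RBF centres
Phi = rbf_design(x, centres, width=0.2)

# Least-squares weights for the fixed basis
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
rel_err = np.linalg.norm(Phi @ w - target) / np.linalg.norm(target)
```

With a basis this dense the relative residual is essentially negligible; the hard part addressed by the paper is doing this optimisation with recurrent network dynamics rather than direct linear algebra.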
3.
This paper presents a new single-layer neural network which is based on orthogonal functions. This neural network is developed to avoid the problems of traditional feedforward neural networks such as the determination of initial weights and the numbers of layers and processing elements. The desired output accuracy determines the required number of processing elements. Because weights are unique, the training of the neural network converges rapidly. An experiment in approximating typical continuous and discrete functions is given. The results show that the neural network has excellent performance in convergence time and approximation error.
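The uniqueness of the weights follows from orthogonality: each output weight is a single inner product with one basis element, so no iterative training is required. A sketch of this idea using a Fourier (sine) basis as an assumed stand-in for the paper's orthogonal functions:

```python
import numpy as np

def fit_orthogonal(f, n_terms, n_samples=2048):
    """Fit f on [0, 2*pi) with sine basis functions; weights are unique
    inner products thanks to (discrete) orthogonality."""
    x = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = f(x)
    weights = []
    for k in range(1, n_terms + 1):
        phi = np.sin(k * x)
        # inner product / squared norm: the unique weight for this element
        weights.append(np.dot(y, phi) / np.dot(phi, phi))
    approx = sum(wk * np.sin(k * x)
                 for k, wk in zip(range(1, n_terms + 1), weights))
    err = np.max(np.abs(approx - y))
    return weights, err

# Target lies in the span, so the recovery is exact up to rounding
w, err = fit_orthogonal(lambda x: np.sin(3 * x) + 0.5 * np.sin(7 * x), n_terms=8)
```

Because no weight depends on any other, adding processing elements to meet a tighter accuracy target never disturbs the weights already computed.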
6.
This paper describes a neural network for solving flow problems, which are of interest in many areas of application such as fuel, hydro, and electric power scheduling. The neural network consists of two layers: a hidden layer and an output layer. The hidden units correspond to the nodes of the flow graph. The output units represent the branch variables. The network has a linear order of complexity, it is easily programmable, and it is suited for analog very large scale integration (VLSI) realization. The functionality of the proposed network is illustrated by a simulation example concerning the maximal flow problem.
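For reference, the maximal flow problem used in the paper's simulation can be solved classically; a compact Edmonds-Karp sketch on a made-up four-node capacity matrix (not the paper's example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: flow is maximal
        # recover the path, find its bottleneck, and augment
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug
        total += aug
    return total

# illustrative 4-node instance: source 0, sink 3
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value = max_flow(cap, 0, 3)
```

The neural formulation trades this sequential augmentation for parallel analog dynamics, which is what makes it attractive for VLSI realization.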
7.
Results concerning the approximation rates of neural networks are of particular interest to engineers. The results reported in the literature have “slow approximation rates” O(1/√m), where m is the number of parameters in the neural network. However, many empirical studies report that neural network approximation is quite effective in practice. We give an explanation of this unreasonable effectiveness by proving the existence of approximation schemes that converge at a rate of the order of 1/m², using methods from number theory.
8.
A model of multiwavelet-based neural networks is proposed. Its universal and L² approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To make a comparison between both networks, experiments are carried out with the Lemarié-Meyer wavelet network, the Daubechies-2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at jump discontinuities, the approximation performance of the two networks is about the same.
9.
This paper presents a neural-network architecture and an instant learning algorithm that rapidly decides the weights of the designed single-hidden-layer neural network. For an n-dimensional N-pattern training set, with a constant bias, a maximum of N-r-1 hidden nodes is required to learn the mapping within a given precision (where r is the rank, usually the dimension, of the input patterns). For off-line training, the proposed network and algorithm are able to achieve "one-shot" training, as opposed to most iterative training algorithms in the literature. An online training algorithm is also presented. Similar to most backpropagation-type learning algorithms, the given algorithm also interpolates the training data. To eliminate outliers that may appear in erroneous training data, a robust weighted least squares method is proposed. The robust weighted least squares learning algorithm eliminates outlier samples and approximates the training data rather than interpolating them. The advantage of the designed network architecture is also mathematically proved. Several experiments show very promising results.
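The "one-shot" idea is that, with the hidden-layer outputs fixed, the output weights solve a linear least-squares problem in closed form, and robustness to outliers can come from reweighting residuals. A sketch using a generic Huber-style iteratively reweighted scheme (the paper's exact weighting rule may differ):

```python
import numpy as np

def one_shot_weights(H, y):
    """Closed-form output weights via the pseudoinverse of the hidden outputs."""
    return np.linalg.pinv(H) @ y

def robust_weights(H, y, iters=10, delta=1.0):
    """Iteratively reweighted least squares: down-weight large residuals."""
    w = one_shot_weights(H, y)
    for _ in range(iters):
        r = y - H @ w
        # Huber-style weights: 1 for small residuals, delta/|r| for large ones
        s = np.minimum(1.0, delta / np.maximum(np.abs(r), 1e-12))
        W = np.diag(s)
        w = np.linalg.pinv(W @ H) @ (W @ y)
    return w

# synthetic hidden-layer outputs with one gross outlier in the targets
rng = np.random.default_rng(0)
H = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = H @ w_true
y[0] += 50.0  # injected outlier

w_plain = one_shot_weights(H, y)   # interpolation-style fit, pulled by the outlier
w_rob = robust_weights(H, y)       # approximation-style fit, outlier suppressed
```

The robust fit approximates rather than interpolates: the outlier equation receives a vanishing weight, so it barely influences the solution.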
10.
Gradient-type Hopfield networks have been widely used in solving optimization problems. The paper presents a novel application by developing a matrix-oriented gradient approach to solve a class of linear matrix inequalities (LMIs), which are commonly encountered in robust control system analysis and design. The solution process is parallel and distributed in neural computation. The proposed networks are proven to be stable in the large. Representative LMIs such as generalized Lyapunov matrix inequalities, simultaneous Lyapunov matrix inequalities, and algebraic Riccati matrix inequalities are considered. Several examples are provided to demonstrate the proposed results. To verify the proposed control scheme in real-time applications, a high-speed digital signal processor is used to emulate the neural-net-based control scheme.
11.
To deal with the planarization problem, which arises in many applications including routing very-large-scale integration (VLSI) circuits, this paper points out that a planar graph can be embedded on a line without any crossing edges only when its vertices are arranged in a specific order. An energy function is proposed to meet the need of embedding a graph on a single line and routing it correctly. A Hopfield network is designed according to the proposed energy function for such embedding and routing. The advantage of the proposed method is that it not only can detect whether a graph is planar, but also can embed a planar graph, or the maximal planar subgraph of a non-planar graph, on a single line. In addition, simulated annealing is employed to help the network escape from local minima during the running of the Hopfield network. Experiments with the proposed method and comparisons with some existing conventional methods were performed, and the results indicate that the proposed method is feasible and effective, especially for the planarization of large graphs.
12.
Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Padé approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
13.
The authors describe a pulsed network version of the cerebellar model articulation controller (CMAC), popularized by Albus (1981). The network produces output pulses whose times of occurrence are a function of input pulse intervals. Within limits imposed by causality conditions, this function can approximate any bounded measurable function on a compact domain. Simulation results demonstrate the viability of training the network with a least mean square algorithm.
14.
This paper deals with the problem of constructing a Hamiltonian cycle of optimal weight, called TSP. We show that TSP is 2/3-differential approximable and cannot be differentially approximated to within more than 649/650. Next, we demonstrate that, when dealing with edge costs 1 and 2, the same algorithmic idea improves this ratio to 3/4, and we obtain a differential non-approximation threshold equal to 741/742. Note that the 3/4-differential approximation result was recently proved, by a method more specific to the 1-2 case and with another algorithm, at the Symposium on Fundamentals of Computation Theory, 2001. Based upon these results, we establish new bounds for the standard ratio: 5/6 for MaxTSP[a,2a] and 7/8 for MaxTSP[1,2]. We also derive some approximation results for graph partitioning problems by paths.
16.
Neural Processing Letters
17.
The polygonal fuzzy numbers are employed to define a new fuzzy arithmetic. A novel extension principle is also introduced for the increasing function σ:R→R. Thus it is convenient to construct a fuzzy neural network model with succinct learning algorithms. Such a system possesses some universal approximation capabilities; that is, the corresponding three-layer feedforward fuzzy neural networks can be universal approximators to the continuously increasing fuzzy functions.
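Level-wise arithmetic on α-cut endpoints conveys the flavour of the polygonal construction: for an increasing σ, each α-cut maps endpoint-to-endpoint. A toy sketch (the representation and class names are illustrative, not the paper's formalism):

```python
import math

class PolyFuzzy:
    """Polygonal fuzzy number stored as α-cut endpoints on a fixed level grid."""
    def __init__(self, left, right):
        # left[i] <= right[i]: endpoints of the α-cut at level i (low α first)
        assert len(left) == len(right)
        self.left, self.right = list(left), list(right)

    def __add__(self, other):
        # fuzzy addition is level-wise interval addition
        return PolyFuzzy([a + b for a, b in zip(self.left, other.left)],
                         [a + b for a, b in zip(self.right, other.right)])

    def map_increasing(self, sigma):
        # for increasing sigma, the image of the interval [l, r] is [σ(l), σ(r)]
        return PolyFuzzy([sigma(v) for v in self.left],
                         [sigma(v) for v in self.right])

# a triangular-like fuzzy "about 1" on three α-levels
a = PolyFuzzy([0.0, 0.5, 1.0], [2.0, 1.5, 1.0])
b = a + a                       # fuzzy "about 2"
c = a.map_increasing(math.tanh)  # extension of an increasing sigma
```

Restricting σ to increasing functions is what keeps the extension this cheap: only the finitely many knot values need to be transformed.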
19.
Neural network literature for function approximation is by now sufficiently rich. In its complete form, the problem entails both parametric (i.e., weights determination) and structural learning (i.e., structure selection). The majority of works deal with parametric uncertainty assuming knowledge of the appropriate neural structure. In this paper we present an algorithmic approach to determine the structure of High Order Neural Networks (HONNs) to solve function approximation problems. The method is based on a Genetic Algorithm (GA) and is equipped with a stable update law to guarantee parametric learning. Simulation results on an illustrative example highlight the performance and give some insight into the proposed approach.
20.
The effectiveness of swarm intelligence has been proven to be at the heart of various optimization problems. In this study, a recently developed nature-inspired algorithm, specifically the firefly algorithm (FA), is integrated in the learning strategy of wavelet neural networks (WNNs). The FA, which systematically optimizes the initial locations of the translation parameters of WNNs, reduces the number of hidden nodes while simultaneously improving the generalization capability of WNNs significantly. The applicability of the proposed model was demonstrated through empirical simulations for a function approximation study, with both synthetic and real-world data. Performance assessment demonstrated its enhancement over the K-means clustering and random initialization approaches, as well as over other neural network models reported in the literature, whereby a noteworthy decrease in the approximation error was observed.
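The firefly algorithm itself is simple to sketch: each firefly moves toward brighter (better) ones with distance-attenuated attraction plus a decaying random walk. A minimal version on a toy objective (parameter values are illustrative; the study's application to WNN translation parameters is not reproduced here):

```python
import math
import random

def firefly(objective, dim, n=15, iters=100, alpha=0.2, beta0=1.0, gamma=1.0):
    """Minimal firefly algorithm: lower objective value = brighter firefly."""
    random.seed(0)  # fixed seed for a reproducible sketch
    pop = [[random.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    light = [objective(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # move i toward the brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    pop[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = objective(pop[i])
        alpha *= 0.97  # cool the random walk over time
    best = min(range(n), key=lambda k: light[k])
    return pop[best], light[best]

# toy objective standing in for a WNN initialization criterion
sol, val = firefly(lambda p: sum(x * x for x in p), dim=2)
```

In the WNN setting the objective would score candidate translation-parameter placements instead of this toy quadratic, but the movement rule is the same.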