Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Given a graph G, an integer k, and a demand set D = {(s1, t1), …, (sl, tl)}, the k-Steiner Forest problem finds a forest in G that connects at least k of the demands in D such that the cost of the forest is minimized. This problem was proposed by Hajiaghayi and Jain in SODA'06. Thereafter, using a Lagrangian relaxation technique, Segev et al. gave the first approximation algorithm for this problem in ESA'06, with performance ratio O(n^{2/3} log l). We give a simpler and faster approximation algorithm for this problem with performance ratio O(n^{2/3} log k) via a greedy approach, improving the previously best known ratio in the literature.
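For readers unfamiliar with the problem, the following brute-force Python sketch illustrates only the k-Steiner Forest definition on a toy instance (the graph, edge costs and demands are invented for illustration); it enumerates edge subsets and is not the greedy approximation algorithm described in the abstract.

```python
from itertools import combinations

def make_find(nodes, edges):
    # union-find over the chosen edge subset; returns a representative function
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v, _ in edges:
        parent[find(u)] = find(v)
    return find

def k_steiner_forest_bruteforce(nodes, edges, demands, k):
    # cheapest edge subset connecting at least k of the demand pairs
    best = None
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            find = make_find(nodes, subset)
            served = sum(find(s) == find(t) for s, t in demands)
            if served >= k:
                cost = sum(c for _, _, c in subset)
                if best is None or cost < best[0]:
                    best = (cost, subset)
    return best

if __name__ == "__main__":
    nodes = range(5)
    edges = [(0, 1, 2), (1, 2, 3), (2, 3, 1), (3, 4, 2), (0, 4, 10)]  # (u, v, cost)
    demands = [(0, 2), (1, 3), (0, 4)]
    print(k_steiner_forest_bruteforce(nodes, edges, demands, k=2))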

2.
Vulnerability to sudden service disruptions due to deliberate sabotage and terrorist attacks is one of the major threats of today. In this paper, we present a bilevel formulation of the r-interdiction median problem with fortification (RIMF). RIMF identifies the most cost-effective way of allocating protective resources among the facilities of an existing but vulnerable system so that the impact of the most disruptive attack on r unprotected facilities is minimized. The model is based upon the classical p-median location model and assumes that the efficiency of the system is measured in terms of accessibility or service-provision costs. In the bilevel formulation, the top-level problem involves the decisions about which facilities to fortify in order to minimize the worst-case efficiency reduction due to the loss of unprotected facilities. Worst-case scenario losses are modeled in the lower-level interdiction problem. We solve the bilevel problem through an implicit enumeration (IE) algorithm, which relies on the efficient solution of the lower-level interdiction problem. Extensive computational results are reported, including comparisons with earlier results obtained by a single-level approach to the problem.
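A minimal Python sketch of the bilevel structure described above, using brute-force enumeration at both levels on a toy instance (the facility and demand data are invented for illustration); the paper's implicit enumeration algorithm is not reproduced here.

```python
# Brute-force illustration of the RIMF bilevel structure (toy scale only):
# the defender fortifies q facilities, the attacker then interdicts the r
# unprotected facilities that maximize the total weighted assignment cost.
from itertools import combinations

def assignment_cost(open_fac, demand, dist):
    # p-median style cost: each demand node uses its closest open facility
    return sum(w * min(dist[i][j] for j in open_fac) for i, w in demand.items())

def worst_interdiction(facilities, fortified, r, demand, dist):
    # lower level: the attacker removes r unprotected facilities to hurt most
    candidates = [j for j in facilities if j not in fortified]
    worst = 0.0
    for hit in combinations(candidates, r):
        remaining = [j for j in facilities if j not in hit]
        worst = max(worst, assignment_cost(remaining, demand, dist))
    return worst

def fortify(facilities, q, r, demand, dist):
    # upper level: the defender picks q facilities to fortify so as to
    # minimize the worst-case post-attack cost
    best = None
    for fort in combinations(facilities, q):
        cost = worst_interdiction(facilities, set(fort), r, demand, dist)
        if best is None or cost < best[1]:
            best = (fort, cost)
    return best

if __name__ == "__main__":
    facilities = [0, 1, 2, 3]            # existing (vulnerable) facilities
    demand = {0: 10, 1: 5, 2: 8}         # demand node -> weight
    dist = [[1, 4, 6, 9],                # dist[demand][facility]
            [5, 1, 3, 7],
            [8, 6, 1, 2]]
    print(fortify(facilities, q=1, r=1, demand=demand, dist=dist))
```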

3.
It is now a well-established fact that search algorithms can exhibit heavy-tailed behavior. However, the reasons behind this fact are not well understood. We provide a generative search tree model whose distribution of the number of nodes visited during search is formally heavy-tailed. Our model allows us to generate search trees with any degree of heavy-tailedness. We also show how the different regimes observed for the runtime distributions of backtrack search methods across different constrainedness regions of random CSP models can be captured by a mixture of the so-called stable distributions.
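The following Python sketch shows one simple generative tree model in this spirit (the exact model in the paper may differ): each visited node either terminates with probability p or spawns b children, and near the critical regime b·(1-p) ≈ 1 the empirical distribution of tree sizes exhibits a heavy tail.

```python
# Toy generative search-tree model (illustrative, not the paper's exact model):
# at every visited node the search either succeeds and stops with probability p,
# or branches into b further nodes.  When b*(1-p) is close to 1 the number of
# visited nodes is heavy-tailed.
import random

def search_tree_size(p, b, cap=10**5):
    frontier, visited = 1, 0
    while frontier and visited < cap:
        visited += 1
        frontier -= 1
        if random.random() > p:      # no solution at this node: branch into b children
            frontier += b
    return visited

def survival(samples, t):
    # empirical probability that a run visits more than t nodes
    return sum(s > t for s in samples) / len(samples)

if __name__ == "__main__":
    random.seed(0)
    runs = [search_tree_size(p=0.75, b=4) for _ in range(2000)]   # b*(1-p) = 1
    for t in (10, 100, 1000, 10000):
        print(f"P(size > {t:5d}) = {survival(runs, t):.3f}")
```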

4.
In this paper, we investigate the fault-tolerant capabilities of the k-ary n-cubes for odd integer k with respect to the hamiltonian and hamiltonian-connected properties. The k-ary n-cube is a bipartite graph if and only if k is an even integer. Let F be a faulty set of nodes and/or links, and let k ≥ 3 be an odd integer. When |F| ≤ 2n-2, we show that there exists a hamiltonian cycle in a wounded k-ary n-cube. In addition, when |F| ≤ 2n-3, we prove that, for two arbitrary nodes, there exists a hamiltonian path connecting these two nodes in a wounded k-ary n-cube. Since the k-ary n-cube is regular of degree 2n, the degrees of fault tolerance 2n-3 and 2n-2, respectively, are optimal in the worst case.
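The following small Python sketch (not from the paper) checks the two structural facts the abstract relies on, 2n-regularity and bipartiteness exactly for even k, by building the k-ary n-cube explicitly for tiny parameters.

```python
# Build the k-ary n-cube: vertices are n-tuples over Z_k, two vertices are
# adjacent if they differ by +-1 mod k in exactly one position.  For k > 2 the
# graph is 2n-regular, and it is bipartite exactly when k is even.
from itertools import product

def k_ary_n_cube(k, n):
    verts = list(product(range(k), repeat=n))
    adj = {v: set() for v in verts}
    for v in verts:
        for i in range(n):
            for d in (1, -1):
                w = list(v)
                w[i] = (w[i] + d) % k
                adj[v].add(tuple(w))
    return adj

def is_bipartite(adj):
    # two-coloring by depth-first traversal
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False
    return True

if __name__ == "__main__":
    for k in (3, 4):
        adj = k_ary_n_cube(k, n=2)
        degrees = {len(nbrs) for nbrs in adj.values()}
        print(f"k={k}: degrees={degrees}, bipartite={is_bipartite(adj)}")
```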

5.
We describe three computer searches (in PARI/GP, Maple, and Mathematica, respectively) which led to the discovery of a number of identities of Rogers–Ramanujan type and identities of false theta functions.
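A Python sketch of the kind of experiment behind such searches (the paper's own code was written in PARI/GP, Maple and Mathematica): expand both sides of a candidate identity as truncated q-series and compare coefficients, here for the classical first Rogers–Ramanujan identity.

```python
# Expand both sides of a candidate identity as truncated q-series and compare.
# Test case: the first Rogers-Ramanujan identity,
#   sum_{n>=0} q^{n^2} / ((1-q)...(1-q^n)) = prod_{j>=0} 1/((1-q^{5j+1})(1-q^{5j+4})).
N = 60  # number of q-series coefficients to compare

def mul(a, b):
    # truncated product of two coefficient lists
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:N - i]):
                c[i + j] += ai * bj
    return c

def inv_one_minus_qk(k):
    # series of 1/(1-q^k) = 1 + q^k + q^{2k} + ...
    return [1 if i % k == 0 else 0 for i in range(N)]

def sum_side():
    total, n = [0] * N, 0
    while n * n < N:
        term = [0] * N
        term[n * n] = 1
        for k in range(1, n + 1):
            term = mul(term, inv_one_minus_qk(k))
        total = [t + s for t, s in zip(total, term)]
        n += 1
    return total

def product_side():
    prod = [1] + [0] * (N - 1)
    j = 0
    while 5 * j + 1 < N:
        for e in (5 * j + 1, 5 * j + 4):
            if e < N:
                prod = mul(prod, inv_one_minus_qk(e))
        j += 1
    return prod

if __name__ == "__main__":
    print("identity holds to order", N, ":", sum_side() == product_side())
```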

6.
We propose exact hybrid methods based on integer linear programming (ILP) and constraint programming (CP) for an integrated employee timetabling and job-shop scheduling problem. Each method we investigate uses a CP formulation associated with an LP relaxation. Within the CP framework, the LP relaxation is integrated into a global constraint, complemented by reduced-cost-based filtering techniques. We propose two CP formulations of the problem yielding two different LP relaxations. The first formulation is based on a direct representation of the problem. The second is based on a decomposition of the possible operation starting times into intervals. We show the theoretical interest of the decomposition-based representation compared to the direct representation through an analysis of dominant schedules. Computational experiments on a set of randomly generated instances confirm the superiority of the decomposition-based representation. In both cases, the hybrid methods outperform pure CP for employee cost minimization, but not for makespan minimization. The experiments also investigate the interest of the proposed integrated method compared to a sequential approach and show its potential for multiobjective optimization.

7.
This paper deals with the multicriteria 0-1 knapsack problem (KP) with k-min objectives (MkMIN-KP), in which the first objective is of the classical sum type and the remaining objectives are k-min objective functions. The k-min objectives are ordinal objectives, aiming at the maximization of the k-th smallest objective coefficient in any feasible knapsack solution with at least k items in the knapsack. We develop efficient algorithms for the determination of the complete nondominated set of MkMIN-KP.
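To make the k-min objective concrete, here is a brute-force Python sketch (toy data, not the paper's efficient algorithm) that evaluates the sum and k-min objectives of every feasible solution and reports the nondominated points.

```python
# Bi-objective toy instance: a sum-type profit objective plus one k-min
# objective, i.e. the k-th smallest coefficient among the chosen items
# (defined only for solutions carrying at least k items).  The nondominated
# set is found by exhaustive enumeration.
from itertools import combinations

def kmin_value(items, coeffs, k):
    chosen = sorted(coeffs[i] for i in items)
    return chosen[k - 1] if len(chosen) >= k else None

def nondominated(weights, profits, kmin_coeffs, capacity, k):
    points = []
    n = len(weights)
    for r in range(k, n + 1):                       # need at least k items
        for items in combinations(range(n), r):
            if sum(weights[i] for i in items) > capacity:
                continue
            points.append((sum(profits[i] for i in items),
                           kmin_value(items, kmin_coeffs, k)))
    # keep points not dominated in the (maximize, maximize) sense
    return sorted({p for p in points
                   if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                              for q in points)})

if __name__ == "__main__":
    weights     = [3, 4, 2, 5, 6]
    profits     = [7, 9, 3, 8, 10]   # classical sum objective
    kmin_coeffs = [5, 2, 6, 4, 1]    # coefficients ranked by the k-min objective
    print(nondominated(weights, profits, kmin_coeffs, capacity=10, k=2))
```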

8.
The local b-function b_{f,p}(s) of an n-variate polynomial f ∈ C[x] (x = (x1, …, xn)) at a point p ∈ C^n is constant on each stratum of a stratification of C^n. We propose a new method for computing such a stratification and b_{f,p}(s) on each stratum. In the existing method proposed in Oaku (1997b), a primary ideal decomposition of an ideal in C[x,s] is needed, and our experiments show that the primary decomposition can be a bottleneck for computing the stratification. In our new method, the computation can be done by just computing ideal quotients and examining inclusions of algebraic sets. The precise form of a stratum can be obtained by computing the decomposition of the radicals of the ideals in C[x] defining the stratum. We also introduce various techniques for improving the practical efficiency of the implementation and show results of computations for some examples.

9.
We give simple, self-contained proofs of the basic hardness results for the classes W[t] of the weft hierarchy. We extend these proofs to higher levels of the hierarchy and illuminate the distinctions among its classes. The anti-monotone collapse at W[1,s] and the normalization of weft-t formulas arise as by-products of the proofs.

10.
Recently, Universum data, which does not belong to any class of the training data, has been applied for training better classifiers. In this paper, we present a novel boosting algorithm called UAdaBoost that can improve the classification performance of AdaBoost with Universum data. UAdaBoost chooses a function by minimizing the loss for labeled data and Universum data. The cost function is minimized by a greedy, stagewise, functional gradient procedure. Each training stage of UAdaBoost is fast and efficient. The standard AdaBoost weights labeled samples during training iterations, while UAdaBoost gives an explicit weighting scheme for Universum samples as well. In addition, this paper describes the practical conditions for the effectiveness of Universum learning. These conditions are based on the analysis of the distribution of ensemble predictions over training samples. Experiments on handwritten digit classification and gender classification problems are presented. As exhibited by our experimental results, the proposed method can obtain superior performance over standard AdaBoost by selecting proper Universum data.

11.
In the context of the π-calculus, open bisimulation is prominent and popular due to its congruence properties and its easy implementability. Motivated by the attempt to generalise it to the spi-calculus, we offer a new, more refined definition and show how far it coincides with the original one.

12.
This paper considers the recursive identification of errors-in-variables (EIV) Wiener systems composed of a linear dynamic system followed by a static nonlinearity. Both the system input and output are observed with additive noises that are ARMA processes with unknown coefficients. By stochastic approximation incorporating deconvolution kernel functions, recursive algorithms are proposed for estimating the coefficients of the linear subsystem and the values of the nonlinear function. All the estimates are proved to converge to the true values with probability one. A simulation example is given to verify the theoretical analysis.

13.
14.
15.
Evaluating the energy of a protein molecule is one of the most computationally costly operations in many protein structure modeling applications. In this paper, we present an efficient implementation of knowledge-based energy functions by taking advantage of recent Graphics Processing Unit (GPU) architectures. We use DFIRE, a knowledge-based all-atom potential, as an example to demonstrate our GPU implementations on the latest NVIDIA Fermi architecture. A load-balancing workload distribution scheme is designed to assign computations of pair-wise atom interactions to threads to achieve perfect or near-perfect load balancing in the symmetric N-body problem in DFIRE. Reorganizing atoms in the protein also improves the cache efficiency of the Fermi GPU architecture, which is particularly effective for small proteins. Our DFIRE implementation on GPU (GPU-DFIRE) has exhibited a speedup of up to ~150 on NVIDIA Quadro FX3800M and ~250 on NVIDIA Tesla M2050 compared to the serial DFIRE implementation on CPU. Furthermore, we show that protein structure modeling applications, including a Monte Carlo sampling program and a local optimization program, can benefit from GPU-DFIRE with little programming modification but significant computational performance improvement.
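The paper's CUDA kernel is not reproduced here, but the following Python sketch illustrates one standard way to balance symmetric pairwise work: flatten the n(n-1)/2 atom pairs into a single index range, unrank each index back to a pair (i, j), and hand each worker an equal, strided share. The pair_energy term below is a stand-in, not the DFIRE potential.

```python
# Balance a symmetric N-body (pairwise) workload: enumerate the n*(n-1)/2
# unordered pairs by a flat index and unrank that index to (i, j).  Each of
# the `workers` (threads on a GPU; plain Python here) takes a strided,
# equally sized share of pair indices.
def unrank_pair(k, n):
    # invert k = offset(i) + (j - i - 1), offset(i) = i*n - i*(i+1)//2,
    # by binary search for the row i containing flat index k
    lo, hi = 0, n - 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid * n - mid * (mid + 1) // 2 <= k:
            lo = mid
        else:
            hi = mid - 1
    i = lo
    j = k - (i * n - i * (i + 1) // 2) + i + 1
    return i, j

def pair_energy(coords, i, j):
    # stand-in for a distance-binned knowledge-based pair term
    dx, dy, dz = (coords[i][a] - coords[j][a] for a in range(3))
    r2 = dx * dx + dy * dy + dz * dz
    return 1.0 / (1.0 + r2)

def total_energy(coords, workers=4):
    n = len(coords)
    npairs = n * (n - 1) // 2
    partial = [0.0] * workers
    for t in range(workers):                 # each "thread" processes every
        for k in range(t, npairs, workers):  # workers-th pair index
            i, j = unrank_pair(k, n)
            partial[t] += pair_energy(coords, i, j)
    return sum(partial)

if __name__ == "__main__":
    pts = [(0.1 * a, 0.2 * a, 0.05 * a * a) for a in range(10)]
    brute = sum(pair_energy(pts, i, j) for i in range(10) for j in range(i + 1, 10))
    print(abs(total_energy(pts) - brute) < 1e-9)
```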

16.
17.
A canal surface is an envelope of a one-parameter family of spheres. In this paper we present an efficient algorithm for computing the implicit equation of a canal surface generated by a rational family of spheres. By using Laguerre and Lie geometries, we relate the equation of the canal surface to the equation of a dual variety of a certain curve in 5-dimensional projective space. We define the μ-basis for arbitrary dimension and give a simple algorithm for its computation. This is then applied to the dual variety, which allows us to deduce the implicit equations of the dual variety, the canal surface and any offset to the canal surface.

18.
One of the early results concerning the asynchronous π-calculus which significantly contributed to its popularity is the capability of encoding the output prefix of the (choiceless) π-calculus in a natural and elegant way. Encodings of this kind were proposed by Honda and Tokoro, by Nestmann and (independently) by Boudol. We investigate whether the above encodings preserve De Nicola and Hennessy's testing semantics. It turns out that, under some general conditions, no encoding of the output prefix is able to preserve must testing. This negative result is due to (a) the non-atomicity of the sequences of steps which are necessary in the asynchronous π-calculus to mimic synchronous communication, and (b) the sensitivity of testing semantics to divergence.

19.
Recent observations of hundreds of hydrogen-rich magnetic white dwarf stars with magnetic fields up to 10^5 T (10^3 MG) have called for more comprehensive and accurate databases of wavelengths and oscillator strengths of the H atom in strong magnetic fields, for all states evolving from the field-free levels with principal quantum numbers n ≤ 10. We present a code to calculate the energy eigenvalues and wave functions of such states which is capable of covering the entire regime of field strengths from B = 0 T to B ~ 10^9 T. We achieve this high flexibility by using a two-dimensional finite-element expansion of the wave functions in terms of B-splines in the directions parallel and perpendicular to the magnetic field, instead of using asymptotically valid basis expansions in terms of spherical harmonics or Landau orbitals. We have paid special attention to the automation of the program, so that the data points for the magnetic field strengths at which the energy of a given state is calculated can be selected automatically. Furthermore, an elaborate method for varying the basis parameters is applied to ensure that the results reach a pre-selected precision, which can also be adjusted freely. Energies and wave functions are stored in a convenient format for further analysis, e.g. for the calculation of transition energies and oscillator strengths. The code has been tested to work for 300 states with an accuracy of better than 10^-6 Rydberg across several symmetry subspaces over the entire regime of magnetic field strengths.
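The following Python sketch is not the paper's two-dimensional finite-element code; it only illustrates the building block such an expansion rests on, B-spline basis functions evaluated with the Cox–de Boor recursion (the knot vector and degree below are arbitrary examples).

```python
# B-spline basis functions N_{i,d}(x) on a given knot vector, evaluated with
# the Cox-de Boor recursion.
def bspline_basis(i, d, knots, x):
    """Value of the i-th B-spline of degree d on `knots` at point x."""
    if d == 0:
        return 1.0 if knots[i] <= x < knots[i + 1] else 0.0
    left = right = 0.0
    den_l = knots[i + d] - knots[i]
    if den_l > 0:
        left = (x - knots[i]) / den_l * bspline_basis(i, d - 1, knots, x)
    den_r = knots[i + d + 1] - knots[i + 1]
    if den_r > 0:
        right = (knots[i + d + 1] - x) / den_r * bspline_basis(i + 1, d - 1, knots, x)
    return left + right

if __name__ == "__main__":
    # cubic B-splines on a clamped knot vector over [0, 1]
    knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
    d = 3
    nbasis = len(knots) - d - 1
    for x in (0.0, 0.1, 0.4, 0.9):
        vals = [bspline_basis(i, d, knots, x) for i in range(nbasis)]
        print(f"x={x}: sum of basis values = {sum(vals):.6f}")  # partition of unity
```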

20.
Buchberger's Gröbner basis theory plays a fundamental role in symbolic computation. The resulting algorithms essentially carry out a number of S-polynomial reductions. In his Ph.D. thesis and a later publication, Buchberger showed that one can sometimes skip an S-polynomial reduction if the leading terms of the polynomials satisfy certain criteria. A question naturally arises: are Buchberger's criteria also necessary for skipping S-polynomial reductions? In this paper, after making the question more precise (in terms of a chain condition), we show the answer to be "almost, but not quite": necessary when there are four or more polynomials, but not necessary when there are exactly three polynomials. For that case, we found an extension of Buchberger's criteria that is necessary as well as sufficient.
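As a concrete illustration of the objects involved (not of the paper's chain-condition analysis), the following sympy sketch computes the S-polynomial of two polynomials with coprime leading monomials and checks that it reduces to zero, which is the situation covered by Buchberger's first criterion; the polynomials and monomial order are arbitrary examples.

```python
# S-polynomial of two polynomials and Buchberger's first criterion: when the
# leading monomials are coprime, the S-polynomial reduces to zero modulo the
# pair, so that reduction can be skipped.
from sympy import symbols, LT, LM, gcd, lcm, expand, reduced

x, y = symbols('x y')
order = 'grevlex'

def s_polynomial(f, g):
    # S(f, g) = (L / LT(f)) * f - (L / LT(g)) * g, with L = lcm(LM(f), LM(g))
    common = lcm(LM(f, x, y, order=order), LM(g, x, y, order=order))
    return expand(common / LT(f, x, y, order=order) * f
                  - common / LT(g, x, y, order=order) * g)

if __name__ == "__main__":
    f = x**2 + y          # leading monomial x**2
    g = y**3 + x          # leading monomial y**3 (coprime with x**2)
    s = s_polynomial(f, g)
    _, remainder = reduced(s, [f, g], x, y, order=order)
    coprime = gcd(LM(f, x, y, order=order), LM(g, x, y, order=order)) == 1
    print("S-polynomial:", s)
    print("coprime leading monomials:", coprime,
          "-> remainder on division by {f, g}:", remainder)
```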
