1.
Vulnerability to sudden service disruptions due to deliberate sabotage and terrorist attacks is one of the major threats of today. In this paper, we present a bilevel formulation of the r-interdiction median problem with fortification (RIMF). RIMF identifies the most cost-effective way of allocating protective resources among the facilities of an existing but vulnerable system so that the impact of the most disruptive attack on r unprotected facilities is minimized. The model is based upon the classical p-median location model and assumes that the efficiency of the system is measured in terms of accessibility or service provision costs. In the bilevel formulation, the top-level problem involves the decisions about which facilities to fortify in order to minimize the worst-case efficiency reduction due to the loss of unprotected facilities. Worst-case losses are modeled in the lower-level interdiction problem. We solve the bilevel problem through an implicit enumeration (IE) algorithm, which relies on the efficient solution of the lower-level interdiction problem. Extensive computational results are reported, including comparisons with earlier results obtained by a single-level approach to the problem.
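As a hedged illustration of the bilevel structure only (not the paper's implicit enumeration algorithm), the sketch below brute-forces both levels on a tiny, made-up p-median instance: the lower level enumerates every attack on r unprotected facilities and returns the worst resulting demand-weighted assignment cost, and the top level picks the fortification set that minimizes that worst case. All facility, demand, and distance data are illustrative.

```python
from itertools import combinations

def assignment_cost(open_sites, demand, dist):
    """Demand-weighted cost when every demand point uses its nearest open facility
    (the p-median objective)."""
    return sum(w * min(dist[i][j] for j in open_sites) for i, w in demand.items())

def worst_case_interdiction(sites, fortified, r, demand, dist):
    """Lower level: the interdictor removes the r unprotected facilities that
    degrade the objective the most; returns (worst-case cost, facilities hit)."""
    targets = [j for j in sites if j not in fortified]
    return max(
        (assignment_cost(set(sites) - set(hit), demand, dist), hit)
        for hit in combinations(targets, r)
    )

def best_fortification(sites, q, r, demand, dist):
    """Top level: fortify q facilities so that the worst-case interdiction loss is minimal."""
    return min(
        (worst_case_interdiction(sites, set(fort), r, demand, dist)[0], fort)
        for fort in combinations(sites, q)
    )

# Toy instance (illustrative numbers only): 4 facilities, 3 weighted demand points.
sites = [0, 1, 2, 3]
demand = {"a": 10, "b": 5, "c": 8}                                 # demand point -> weight
dist = {"a": [2, 9, 6, 4], "b": [7, 1, 5, 8], "c": [6, 3, 2, 9]}   # demand -> distance to each facility

cost, fort = best_fortification(sites, q=1, r=2, demand=demand, dist=dist)
print(f"fortify {fort}: worst-case assignment cost {cost}")
```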
2.
A technique suitable for the modelling of biological tissues undergoing large deformations with a nearly periodic microstructure is presented in this work. The proposed approach takes into account the heterogeneous material constitution and geometrical arrangement of the tissues at the microstructural level. The global material properties are described in terms of the homogenized (effective) parameters. Numerical simulations are focused on the mechanical behaviour of an arterial wall.
3.
Christian Artigues, Michel Gendreau, Louis-Martin Rousseau, Adrien Vergnaud. Computers & Operations Research, 2009
We propose exact hybrid methods based on integer linear programming (ILP) and constraint programming (CP) for an integrated employee timetabling and job-shop scheduling problem. Each method we investigate uses a CP formulation associated with an LP relaxation. Within the CP framework, the LP relaxation is integrated into a global constraint that additionally applies reduced-cost-based filtering. We propose two CP formulations of the problem, yielding two different LP relaxations. The first formulation is based on a direct representation of the problem; the second is based on a decomposition of the possible operation starting times into intervals. We show the theoretical interest of the decomposition-based representation compared to the direct representation through an analysis of dominant schedules. Computational experiments on a set of randomly generated instances confirm the superiority of the decomposition-based representation. In both cases, the hybrid methods outperform pure CP for employee cost minimization, while this is not the case for makespan minimization. The experiments also assess the benefit of the proposed integrated method compared to a sequential approach and show its potential for multiobjective optimization.
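A minimal, generic sketch of the reduced-cost-based filtering rule mentioned above (not the authors' global constraint): in a minimization problem, if the LP bound plus the reduced cost of forcing variable x to value v already exceeds the incumbent, then v can be removed from the domain of x. The variable names and numbers below are illustrative.

```python
def reduced_cost_filter(domains, lp_bound, incumbent, reduced_cost):
    """Prune value v from variable x when lp_bound + reduced_cost[x, v] > incumbent:
    any solution assigning x = v costs at least that much (minimization)."""
    filtered = {}
    for x, values in domains.items():
        filtered[x] = [v for v in values
                       if lp_bound + reduced_cost.get((x, v), 0.0) <= incumbent]
    return filtered

# Illustrative data: two assignment variables with small domains.
domains = {"shift1": ["ann", "bob"], "shift2": ["ann", "bob", "eve"]}
reduced_cost = {("shift1", "bob"): 7.0, ("shift2", "eve"): 2.0}   # unlisted pairs cost 0
print(reduced_cost_filter(domains, lp_bound=40.0, incumbent=45.0, reduced_cost=reduced_cost))
# {'shift1': ['ann'], 'shift2': ['ann', 'bob', 'eve']}
```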
4.
Given a graph G, an integer k, and a demand set D = {(s_1, t_1), …, (s_l, t_l)}, the k-Steiner Forest problem finds a forest in G that connects at least k demands in D such that the cost of the forest is minimized. This problem was proposed by Hajiaghayi and Jain in SODA'06. Thereafter, using a Lagrangian relaxation technique, Segev et al. gave the first approximation algorithm for this problem in ESA'06, with performance ratio O(n^{2/3} log l). We give a simpler and faster approximation algorithm for this problem with performance ratio O(n^{2/3} log k) via a greedy approach, improving the previously best known ratio in the literature.
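A toy greedy heuristic in the spirit of the greedy approach mentioned above, not the paper's O(n^{2/3} log k)-approximation: repeatedly connect the cheapest remaining demand pair, where edges already bought may be reused at no extra cost, until k demands are connected. The graph and demands are illustrative.

```python
import heapq

def shortest_path(graph, src, dst, bought):
    """Dijkstra in which already-bought edges cost 0; returns (cost, list of edges)."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            nd = d + (0.0 if frozenset((u, v)) in bought else w)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return float("inf"), []
    edges, node = [], dst
    while node != src:
        edges.append(frozenset((node, prev[node])))
        node = prev[node]
    return dist[dst], edges

def greedy_k_steiner_forest(graph, demands, k):
    """Greedily buy edges until at least k demand pairs are connected."""
    bought, connected, total = set(), set(), 0.0
    while len(connected) < k:
        candidates = [((s, t),) + shortest_path(graph, s, t, bought)
                      for (s, t) in demands if (s, t) not in connected]
        (s, t), cost, edges = min(candidates, key=lambda c: c[1])
        total += cost
        bought |= set(edges)
        connected.add((s, t))
    return total, bought

# Illustrative 5-node graph (node -> {neighbor: edge cost}) and three demand pairs.
graph = {
    1: {2: 4.0, 3: 1.0},
    2: {1: 4.0, 3: 1.0, 4: 5.0},
    3: {1: 1.0, 2: 1.0, 5: 2.0},
    4: {2: 5.0, 5: 1.0},
    5: {3: 2.0, 4: 1.0},
}
demands = [(1, 2), (1, 4), (2, 5)]
cost, forest = greedy_k_steiner_forest(graph, demands, k=2)
print(cost, sorted(tuple(sorted(e)) for e in forest))
```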
5.
Evaluating the energy of a protein molecule is one of the most computationally costly operations in many protein structure modeling applications. In this paper, we present an efficient implementation of knowledge-based energy functions by taking advantage of recent Graphics Processing Unit (GPU) architectures. We use DFIRE, a knowledge-based all-atom potential, as an example to demonstrate our GPU implementations on the NVIDIA Fermi architecture. A load-balancing workload distribution scheme is designed to assign the computation of pairwise atom interactions to threads, achieving perfect or near-perfect load balancing in the symmetric N-body problem in DFIRE. Reorganizing the atoms in the protein also improves cache efficiency on the Fermi GPU architecture, which is particularly effective for small proteins. Our DFIRE implementation on GPU (GPU-DFIRE) achieves speedups of up to ∼150 on an NVIDIA Quadro FX3800M and ∼250 on an NVIDIA Tesla M2050 compared to the serial DFIRE implementation on CPU. Furthermore, we show that protein structure modeling applications, including a Monte Carlo sampling program and a local optimization program, can benefit from GPU-DFIRE with little programming modification but significant computational performance improvement.
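The load-balancing idea for the symmetric pairwise term can be illustrated with a flat-index-to-pair mapping: instead of giving each worker one row of the triangular interaction matrix (rows have very different lengths), each work item is mapped to exactly one unordered atom pair. A hedged Python sketch of that index arithmetic follows; the actual GPU-DFIRE distribution scheme is not spelled out in the abstract, so treat this as an illustration of the principle only.

```python
from math import isqrt

def pair_from_index(t, n):
    """Map a flat work-item index t in [0, n*(n-1)/2) to the unordered pair (i, j),
    i < j, enumerating the strict upper triangle row by row."""
    u = n * (n - 1) // 2 - 1 - t             # index counted from the end
    r = (isqrt(8 * u + 1) + 1) // 2          # row length, counted from the last row
    i = n - 1 - r
    j = n - 1 - (u - r * (r - 1) // 2)
    return i, j

# Sanity check against the naive nested loop (the unbalanced "one row per worker" layout):
n = 7
flat = [pair_from_index(t, n) for t in range(n * (n - 1) // 2)]
assert flat == [(i, j) for i in range(n) for j in range(i + 1, n)]
```

On a GPU the same arithmetic would run once per thread, with t derived from the block and thread indices, so every thread evaluates exactly one atom pair rather than a whole row of varying length.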
6.
We describe three computer searches (in PARI/GP, Maple, and Mathematica, respectively) which led to the discovery of a number of identities of Rogers–Ramanujan type and identities of false theta functions.
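Searches of this type ultimately rest on comparing truncated q-series. As a hedged illustration (using the classical first Rogers–Ramanujan identity rather than any of the newly discovered ones, and plain Python instead of PARI/GP, Maple, or Mathematica), the sketch below checks that sum_{n≥0} q^{n^2}/((1−q)(1−q^2)⋯(1−q^n)) and prod_{n≥0} 1/((1−q^{5n+1})(1−q^{5n+4})) agree coefficient by coefficient up to a chosen order.

```python
N = 60  # truncation order

def mul(a, b):
    """Multiply two power series in q, given as coefficient lists truncated at q^N."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[: N + 1 - i]):
                c[i + j] += ai * bj
    return c

def inv_one_minus_q_power(k):
    """Series of 1/(1 - q^k) = 1 + q^k + q^{2k} + ... truncated at q^N."""
    s = [0] * (N + 1)
    for m in range(0, N + 1, k):
        s[m] = 1
    return s

# Left side: 1 + sum_{n>=1} q^{n^2} / ((1-q)(1-q^2)...(1-q^n))
lhs = [0] * (N + 1)
lhs[0] = 1
term = [1] + [0] * N
n = 1
while n * n <= N:
    term = mul(term, inv_one_minus_q_power(n))       # divide by (1 - q^n)
    for e, c in enumerate(term):
        if e + n * n <= N:
            lhs[e + n * n] += c                       # shift by q^{n^2} and accumulate
    n += 1

# Right side: prod over k ≡ 1, 4 (mod 5) of 1/(1 - q^k)
rhs = [1] + [0] * N
for k in range(1, N + 1):
    if k % 5 in (1, 4):
        rhs = mul(rhs, inv_one_minus_q_power(k))

assert lhs == rhs
print("coefficients agree up to q^%d" % N)
```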
7.
In this paper, we investigate the fault-tolerant capabilities of the k-ary n-cubes for odd integers k with respect to the hamiltonian and hamiltonian-connected properties. The k-ary n-cube is a bipartite graph if and only if k is an even integer. Let F be a set of faulty nodes and/or links, and let k ≥ 3 be an odd integer. When |F| ≤ 2n-2, we show that there exists a hamiltonian cycle in the wounded k-ary n-cube. In addition, when |F| ≤ 2n-3, we prove that, for two arbitrary nodes, there exists a hamiltonian path connecting these two nodes in the wounded k-ary n-cube. Since the k-ary n-cube is regular of degree 2n, the degrees of fault tolerance 2n-3 and 2n-2, respectively, are optimal in the worst case.
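A small sketch of the object behind these statements: nodes of the k-ary n-cube are length-n tuples over Z_k, and two nodes are linked when they differ by ±1 (mod k) in exactly one coordinate. The code below builds tiny instances and checks the two structural facts used in the abstract, namely that the graph is regular of degree 2n (for k ≥ 3) and bipartite exactly when k is even.

```python
from itertools import product

def k_ary_n_cube(k, n):
    """Adjacency of the k-ary n-cube (k >= 3): vertices are n-tuples over Z_k;
    u ~ v iff they differ by +/-1 (mod k) in exactly one coordinate."""
    nodes = list(product(range(k), repeat=n))
    adj = {u: set() for u in nodes}
    for u in nodes:
        for pos in range(n):
            for d in (1, -1):
                v = list(u)
                v[pos] = (v[pos] + d) % k
                adj[u].add(tuple(v))
    return adj

def is_bipartite(adj):
    """2-coloring by depth-first search; fails iff the graph has an odd cycle."""
    color = {}
    for s in adj:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

for k in (3, 4):
    adj = k_ary_n_cube(k, n=2)
    degrees = {len(nbrs) for nbrs in adj.values()}
    print(k, degrees, is_bipartite(adj))   # degree 2n = 4; bipartite only for k = 4
```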
8.
The local b-function b_{f,p}(s) of an n-variate polynomial f ∈ C[x] (x = (x_1, …, x_n)) at a point p ∈ C^n is constant on each stratum of a stratification of C^n. We propose a new method for computing such a stratification and b_{f,p}(s) on each stratum. In the existing method proposed in Oaku (1997b), a primary ideal decomposition of an ideal in C[x,s] is needed, and our experiments show that the primary decomposition can be a bottleneck for computing the stratification. In our new method, the computation can be done by just computing ideal quotients and examining inclusions of algebraic sets. The precise form of a stratum can be obtained by computing the decomposition of the radicals of the ideals in C[x] defining the stratum. We also introduce various techniques for improving the practical efficiency of the implementation, and we show results of computations for some examples.
9.
A two-stage model is described in which firms decide where to locate their facilities and how much to supply to which market. In such models in the literature, the market price typically reacts linearly to supply, and either two competing suppliers are assumed or several homogeneous ones, i.e., suppliers with identical cost structures. The focus of this paper is on developing methods to compute equilibria of the model when more than two suppliers compete, each with its own cost structure, i.e., they are heterogeneous. Analytical results are presented with respect to optimality conditions for the Nash equilibria in the two stages. Based on these analytical results, an enumeration algorithm and a local search algorithm are developed to find equilibria. Numerical cases illustrate the results and the viability of the algorithms. The methods improve on a result reported in the literature.
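For the second-stage quantity game under the linear price response mentioned above, say inverse demand P = a − b·Q with constant marginal costs c_i (an assumption for this sketch; the paper's exact demand and cost specification is not given in the abstract), an interior Nash equilibrium has the closed form q_i = (a − n·c_i + Σ_{j≠i} c_j) / ((n+1)·b). The sketch below computes it for heterogeneous suppliers and verifies the best-response conditions numerically; the first-stage location decision is ignored.

```python
def cournot_equilibrium(a, b, costs):
    """Interior Nash equilibrium quantities for heterogeneous firms under inverse
    demand P = a - b*Q:  q_i = (a - n*c_i + sum_{j != i} c_j) / ((n + 1) * b)."""
    n, total_c = len(costs), sum(costs)
    return [(a - n * c + (total_c - c)) / ((n + 1) * b) for c in costs]

def best_response(a, b, c_i, others_supply):
    """Firm i's profit-maximizing quantity given the aggregate supply of the others."""
    return (a - c_i - b * others_supply) / (2 * b)

# Illustrative market: three heterogeneous suppliers.
a, b, costs = 100.0, 1.0, [10.0, 20.0, 30.0]
q = cournot_equilibrium(a, b, costs)
Q = sum(q)
for qi, ci in zip(q, costs):                        # each quantity is a best response
    assert abs(qi - best_response(a, b, ci, Q - qi)) < 1e-9
print("quantities:", q, "market price:", a - b * Q)   # [30, 20, 10], price 40
```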
10.
It is now a well-established fact that search algorithms can exhibit heavy-tailed behavior, but the reasons behind this fact are not well understood. We provide a generative search-tree model in which the distribution of the number of nodes visited during search is formally heavy-tailed. Our model allows us to generate search trees with any degree of heavy-tailedness. We also show how the different regimes observed for the runtime distributions of backtrack search methods across different constrainedness regions of random CSP models can be captured by a mixture of so-called stable distributions.
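A hedged sketch of a generative search-tree model in this spirit (not necessarily the authors' exact model): each visited node is either a leaf or, with probability p, branches into b children that must also be searched. At the critical branching rate p·b = 1 the total number of visited nodes is finite with probability one yet heavy-tailed, with survival function P(size > x) decaying polynomially (roughly like x^{-1/2}) instead of exponentially.

```python
import random

def tree_size(p, b, cap=10**5):
    """Sample the number of nodes visited: each node branches into b children with
    probability p and is a leaf otherwise; the search visits every spawned node."""
    nodes, frontier = 0, 1
    while frontier and nodes < cap:
        nodes += 1
        frontier -= 1
        if random.random() < p:
            frontier += b
    return nodes

random.seed(0)
# Critical branching rate p*b = 1: tree size is finite almost surely but heavy-tailed.
samples = [tree_size(p=0.5, b=2) for _ in range(5000)]
for x in (1, 10, 100, 1000, 10000):
    tail = sum(s > x for s in samples) / len(samples)
    print(f"P(size > {x:>5}) ~ {tail:.4f}")   # decays roughly like x**-0.5
```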
11.
We consider lattices of regular sets of nonnegative integers, i.e., sets definable in Presburger arithmetic. We prove that if such a lattice is closed under decrement, then it is also closed under many other functions: quotients by an integer, roots, etc. We characterize the family of such functions.
12.
Recent observations of hundreds of hydrogen-rich magnetic white dwarf stars with magnetic fields up to 10^5 T (10^3 MG) have called for more comprehensive and accurate databases of wavelengths and oscillator strengths of the H atom in strong magnetic fields, for all states evolving from the field-free levels with principal quantum numbers n ≤ 10. We present a code to calculate the energy eigenvalues and wave functions of such states which is capable of covering the entire regime of field strengths from B = 0 T to B ∼ 10^9 T. We achieve this high flexibility by using a two-dimensional finite element expansion of the wave functions in terms of B-splines in the directions parallel and perpendicular to the magnetic field, instead of asymptotically valid basis expansions in terms of spherical harmonics or Landau orbitals. We have paid special attention to the automation of the program, so that the data points for the magnetic field strengths at which the energy of a given state is calculated can be selected automatically. Furthermore, an elaborate method for varying the basis parameters is applied to ensure that the results reach a pre-selected precision, which can also be adjusted freely. Energies and wave functions are stored in a convenient format for further analysis, e.g. for the calculation of transition energies and oscillator strengths. The code has been tested to work for 300 states with an accuracy better than 10^-6 Rydberg across several symmetry subspaces over the entire regime of magnetic field strengths.
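The one-dimensional building blocks of the expansion described above are B-splines on a knot vector. As a hedged, generic illustration (a textbook Cox-de Boor evaluation, not the authors' two-dimensional finite element code), the sketch below evaluates a clamped cubic B-spline basis and checks the partition-of-unity property used in such expansions.

```python
def bspline_basis(i, k, t, knots):
    """Value of the i-th B-spline basis function of degree k at t
    (Cox-de Boor recursion)."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

# Clamped cubic basis on [0, 1]: 7 basis functions forming a partition of unity.
knots = [0.0, 0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
degree = 3
n_basis = len(knots) - degree - 1
for t in (0.1, 0.4, 0.9):
    values = [bspline_basis(i, degree, t, knots) for i in range(n_basis)]
    assert abs(sum(values) - 1.0) < 1e-12
    print(t, [round(v, 4) for v in values])
```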
13.
Tag systems were invented by Emil Leon Post and proven recursively unsolvable by Marvin Minsky. These production systems have proved very useful in constructing small universal (Turing-complete) systems for several different classes of computational systems, including Turing machines, and are thus important instruments for studying the limits or boundaries of solvability and unsolvability. Although there are some results on tag systems and their limits of solvability and unsolvability, hardly any consider both the shift number v and the number of symbols μ. This paper aims to contribute to research on the limits of solvability and unsolvability for tag systems, taking both parameters into account. The main result is the reduction of the 3n+1 problem to a surprisingly small tag system. It indicates that the present unsolvability line for tag systems, defined in terms of μ and v, might be lowered significantly.
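A tag system with shift number v reads the first symbol of the word, deletes the first v symbols, and appends the production associated with the symbol that was read, halting when the word becomes shorter than v. The sketch below implements this and runs the well-known Collatz-emulating 2-tag system over three symbols (productions a → bc, b → a, c → aaa); whether this is exactly the system constructed in the paper is not claimed here. Started from a^n, the words consisting only of a's that appear along the run spell out the shortcut Collatz trajectory of n (odd n maps to (3n+1)/2, even n to n/2).

```python
def run_tag_system(word, rules, v, max_steps=10**5):
    """Run a tag system with shift number v: read the first symbol, delete the first
    v symbols, append that symbol's production; halt when the word is shorter than v."""
    for _ in range(max_steps):
        if len(word) < v:
            return word
        word = word[v:] + rules[word[0]]
    raise RuntimeError("step limit reached")

# Well-known Collatz-emulating 2-tag system over three symbols.
rules = {"a": "bc", "b": "a", "c": "aaa"}

def collatz_trajectory(n):
    """Record the lengths of the pure-'a' words that appear when the tag system is
    started on a**n; these spell out the shortcut Collatz trajectory of n."""
    word, seen = "a" * n, [n]
    while len(word) >= 2 and seen[-1] != 1:
        word = word[2:] + rules[word[0]]
        if set(word) == {"a"}:
            seen.append(len(word))
    return seen

print(run_tag_system("aaa", rules, v=2))   # halts on the single-letter word "a"
print(collatz_trajectory(7))               # expected: [7, 11, 17, 26, 13, 20, 10, 5, 8, 4, 2, 1]
```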
14.
15.
We give simple, self-contained proofs of the basic hardness results for the classes W[t] of the weft hierarchy. We extend these proofs to higher levels of the hierarchy and illuminate the distinctions among its classes. The anti-monotone collapse at W[1,s] and the normalization of weft-t formulas arise as by-products of the proofs.
16.
17.
18.
19.
One of the early results concerning the asynchronous π-calculus which significantly contributed to its popularity is the capability of encoding the output prefix of the (choiceless) π-calculus in a natural and elegant way. Encodings of this kind were proposed by Honda and Tokoro, by Nestmann, and (independently) by Boudol. We investigate whether these encodings preserve De Nicola and Hennessy's testing semantics. It turns out that, under some general conditions, no encoding of the output prefix is able to preserve must testing. This negative result is due to (a) the non-atomicity of the sequences of steps which are necessary in the asynchronous π-calculus to mimic synchronous communication, and (b) the sensitivity of testing semantics to divergence.