Similar Literature
20 similar documents found.
1.
Reliability-based design optimization (RBDO) is a methodology for finding optimized designs that are characterized by a low probability of failure. Primarily, RBDO consists of optimizing a merit function while satisfying reliability constraints. The reliability constraints are constraints on the probability of failure corresponding to each of the failure modes of the system, or a single constraint on the system probability of failure. The probability of failure is usually estimated by performing a reliability analysis. During the last few years, a variety of different formulations have been developed for RBDO. Traditionally, these have been formulated as a double-loop (nested) optimization problem. The upper-level optimization loop generally involves optimizing a merit function subject to reliability constraints, and the lower-level optimization loop(s) compute(s) the probabilities of failure corresponding to the failure mode(s) that govern(s) the system failure. This formulation is, by nature, computationally intensive. Researchers have proposed sequential strategies to address this issue, where the deterministic optimization and reliability analysis are decoupled and the process is performed iteratively until convergence is achieved. These methods, though attractive in terms of obtaining a workable reliable design at considerably reduced computational cost, often lead to premature convergence and therefore yield spurious optimal designs. In this paper, a novel unilevel formulation for RBDO is developed. In the proposed formulation, the lower-level optimization (the evaluation of reliability constraints in the double-loop formulation) is replaced by its corresponding first-order Karush–Kuhn–Tucker (KKT) necessary optimality conditions at the upper-level optimization. Such a replacement is computationally equivalent to solving the original nested optimization if the lower-level problem is solved by numerically satisfying the KKT conditions (which is typically the case). It is shown through test problems that the proposed formulation is numerically robust (stable) and computationally efficient compared to existing approaches for RBDO.
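As a hedged sketch of the idea (generic notation introduced here, not the exact formulation developed in the paper): if the inner reliability analysis is a FORM-type problem in standard normal variables u, with limit-state function g and target reliability index β_t, the double-loop problem and its unilevel KKT replacement can be written as

```latex
% Double loop: the reliability index is itself the value of an inner optimization.
\min_{d}\; f(d)
\quad\text{s.t.}\quad \beta(d) \ge \beta_t ,
\qquad\text{where}\qquad
\beta(d) = \min_{u}\;\|u\| \;\;\text{s.t.}\;\; g(d,u) = 0 .

% Unilevel: the inner problem is replaced by the first-order KKT conditions of
% \min \tfrac{1}{2}\|u\|^2 \;\text{s.t.}\; g(d,u) = 0, so u and the multiplier
% \lambda become variables of a single optimization problem.
\min_{d,\,u,\,\lambda}\; f(d)
\quad\text{s.t.}\quad
\|u\| \ge \beta_t , \qquad
u + \lambda\,\nabla_{u} g(d,u) = 0 , \qquad
g(d,u) = 0 .
```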

2.
Cai and Selman [CS] proposed the following definition for measuring average computation time: A time function t is T on average over a distribution μ if, for all , , where . This definition results in a modification of Levin's notion of average time [L]. The effect of the modification is to control the rate of convergence of the expressions that define average computation time. With this modification, they proved a hierarchy theorem for average-time complexity that is as tight as the Hartmanis–Stearns [HS] hierarchy theorem for worst-case deterministic time. They also proved that, under a fairly reasonable condition on distributions called condition W, a distributional problem is solvable in average polynomial time under the modification exactly when it is solvable in average polynomial time under Levin's definition. Various notions of reductions, as defined by Levin [L] and others, play a central role in the study of average-case complexity. However, the class of distributional problems that are solvable in average polynomial time under the modification is not closed under the standard reductions. In particular, we prove that there is a distributional problem that is not solvable in average polynomial time under the modification but is reducible, by the identity function, to a distributional problem that is, and whose distribution even satisfies condition W. Received August 26, 1996; revised May 29, 1997.

3.
4.
We consider the sum coloring and sum multicoloring problems on several fundamental classes of graphs, including the classes of interval and k-claw free graphs. We give an algorithm that approximates sum coloring within a factor of 1.796 for any graph in which the maximum k-colorable subgraph problem is polynomially solvable. In particular, this improves on the previous best known ratio of 2 for interval graphs. We introduce a new measure of coloring, robust throughput, that indicates how quickly the graph is colored, and show that our algorithm approximates this measure within a factor of 1.4575. In addition, we study the contiguous (or non-preemptive) sum multicoloring problem on k-claw free graphs. This models, for example, the scheduling of dependent jobs on multiple dedicated machines, where each job requires the exclusive use of at most k machines. Assuming that k is a fixed constant, we obtain the first constant-factor approximation for the problem.

5.
The paper places five different problems (the k-pebble game problem, two problems about k finite automata, the reachability problem for Petri nets with k tokens, and the reachability problem for graphs whose k-dimensional edge sets are described by Cartesian products of k factors) into the hierarchy NL_k of problems solvable by nondeterministic Turing machines in k·log₂ n space (and binary tape alphabet, to avoid tape speed-up). The results, when combined with the conjecture that NL_k contains problems that require O(n^k) deterministic time, show that these problems, while in P for every fixed value of k, have polynomial deterministic time complexities with the degree of the polynomial growing linearly with the parameter k, and hence are, in this sense, gradually intractable.

6.
In this article, we study a single-machine scheduling problem in which the processing time of a job is a nonlinear function of its basic processing time and starting time. The objectives are to minimise the makespan, the sum of weighted completion times, and the sum of the kth powers of completion times. We show that the makespan minimisation problem can be solved in polynomial time, whereas the total completion time and the sum of the kth powers of completion times minimisation problems can be solved in polynomial time only in some cases. In addition, some useful properties are provided for the sum of weighted completion times problem under certain conditions.

7.
An axis-parallel k-dimensional box is a Cartesian product R1 × R2 × ⋯ × Rk, where Ri (for 1 ≤ i ≤ k) is a closed interval of the form [ai, bi] on the real line. For a graph G, its boxicity box(G) is the minimum dimension k such that G is representable as the intersection graph of (axis-parallel) boxes in k-dimensional space. The concept of boxicity finds applications in various areas such as ecology and operations research. A number of NP-hard problems are either polynomial-time solvable or have much better approximation ratios on low-boxicity graphs. For example, the max-clique problem is polynomial-time solvable on bounded-boxicity graphs, and the maximum independent set problem for boxicity-d graphs, given a box representation, has a ⌊1 + (1/c) log n⌋^{d-1} approximation ratio for any constant c ≥ 1 when d ≥ 2. In most cases, the first step is computing a low-dimensional box representation of the given graph. However, deciding whether the boxicity of a graph is at most 2 is itself NP-hard.
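By this definition, two axis-parallel boxes intersect exactly when their defining intervals overlap in every dimension. The following short sketch (an illustrative helper written for this listing, not code from the paper) uses that fact to build the intersection graph of a given box representation:

```python
from itertools import combinations

def boxes_intersect(box_a, box_b):
    """Each box is a list of (a_i, b_i) closed intervals, one per dimension.
    Axis-parallel boxes intersect iff their intervals overlap in every dimension."""
    return all(a1 <= b2 and a2 <= b1 for (a1, b1), (a2, b2) in zip(box_a, box_b))

def intersection_graph(boxes):
    """Return the edge set of the intersection graph of the given boxes."""
    return {(i, j) for i, j in combinations(range(len(boxes)), 2)
            if boxes_intersect(boxes[i], boxes[j])}

# Example: three 2-dimensional boxes; the first two overlap, the third is disjoint.
boxes = [[(0, 2), (0, 2)], [(1, 3), (1, 3)], [(5, 6), (5, 6)]]
print(intersection_graph(boxes))  # {(0, 1)}
```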

8.
In this paper, fuzzy threshold values, instead of crisp threshold values, are used for optimal reliability-based multi-objective Pareto design of robust state feedback controllers for a single inverted pendulum whose parameters have probabilistic uncertainties. The objective functions considered are the normalized summation of rise time and overshoot of the cart (SROC) and the normalized summation of rise time and overshoot of the pendulum (SROP) in the deterministic approach. Accordingly, the probabilities of failure of those objective functions are also considered in the reliability-based design optimization (RBDO) approach. A new multi-objective uniform-diversity genetic algorithm (MUGA) is presented and used for Pareto optimum design of linear state feedback controllers for the single inverted pendulum problem. In this way, a Pareto front of optimum controllers is first obtained for the nominal deterministic single inverted pendulum using the conflicting objective functions in the time domain. Such a Pareto front is then obtained for the single inverted pendulum having probabilistic uncertainties in its parameters, using the statistical moments of those objective functions estimated through a Monte Carlo simulation (MCS) approach. It is shown that multi-objective reliability-based Pareto optimization of the robust state feedback controllers using MUGA with fuzzy threshold values includes the designs that may be obtained with various crisp threshold values on the probabilities of failure and thus removes the difficulty of selecting suitable crisp values. Moreover, the multi-objective Pareto optimization of such robust feedback controllers using MUGA unveils some very important and informative trade-offs among those objective functions. Consequently, some optimum robust state feedback controllers can be compromisingly chosen from the Pareto frontiers.
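As a minimal sketch of the Monte Carlo step described above (the objective function, parameter distributions and threshold below are placeholders, not the paper's pendulum model), the probability of failure of an objective can be estimated by sampling the uncertain parameters and counting how often the objective exceeds its threshold:

```python
import numpy as np

def probability_of_failure(objective, sample_params, threshold, n_samples=10_000, seed=0):
    """Estimate P[objective(p) > threshold] by Monte Carlo simulation.

    objective     : callable mapping a parameter vector to a scalar performance value
    sample_params : callable returning one random draw of the uncertain parameters
    """
    rng = np.random.default_rng(seed)
    failures = sum(objective(sample_params(rng)) > threshold for _ in range(n_samples))
    return failures / n_samples

# Hypothetical example: a quadratic performance measure with normally distributed parameters.
p_f = probability_of_failure(
    objective=lambda p: p[0] ** 2 + 0.5 * p[1],
    sample_params=lambda rng: rng.normal([1.0, 0.5], [0.1, 0.05]),
    threshold=1.5,
)
print(p_f)
```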

9.
We consider a problem of regrouping data collected in blocks of up to r elements by steps, each of which involves the data of not more than k blocks. The problem consists in giving a lower estimate for the minimum number of steps involved. The estimates are derived from the entropies of certain probability distributions pertinent to the given collections of blocks.

10.
We consider two multicriteria versions of the global minimum cut problem in undirected graphs. In the k-criteria setting, each edge of the input graph has k non-negative costs associated with it. These costs are measured in separate, non-interchangeable units. In the AND-version of the problem, purchasing an edge requires the payment of all k costs associated with it. In the OR-version, an edge can be purchased by paying any one of the k costs associated with it. Given k bounds b1, b2, ..., bk, the basic multicriteria decision problem is whether there exists a cut C of the graph that can be purchased using a budget of bi units of the ith criterion, for 1 ≤ i ≤ k. We show that the AND-version of the multicriteria global minimum cut problem is polynomial for any fixed number k of criteria. The OR-version of the problem, on the other hand, is NP-hard even for k = 2, but can be solved in pseudo-polynomial time for any fixed number k of criteria. It also admits an FPTAS. Further extensions, some applications, and multicriteria versions of two other optimization problems are also discussed.

11.
In the k-median problem we are given sets of facilities and customers, and distances between them. For a given set F of facilities, the cost of serving a customer u is the minimum distance between u and a facility in F. The goal is to find a set F of k facilities that minimizes the sum, over all customers, of their service costs. Following the work of Mettu and Plaxton, we study the incremental medians problem, where k is not known in advance. An incremental algorithm produces a nested sequence of facility sets F1 ⊆ F2 ⊆ ⋯ ⊆ Fn, where |Fk| = k for each k. Such an algorithm is called c-cost-competitive if the cost of each Fk is at most c times the optimum k-median cost. We give improved incremental algorithms for the metric version of this problem: an 8-cost-competitive deterministic algorithm, a 2e ≈ 5.44-cost-competitive randomized algorithm, a (24+ε)-cost-competitive polynomial-time deterministic algorithm, and a 6e+ε ≈ 16.31-cost-competitive polynomial-time randomized algorithm. We also consider the competitive ratio with respect to size. An algorithm is s-size-competitive if the cost of each Fk is at most the minimum cost of any set of k facilities, while the size of Fk is at most sk. We show that the optimal size-competitive ratios for this problem, in the deterministic and randomized cases, are 4 and e. For polynomial-time algorithms, we present the first polynomial-time O(log m)-size-approximation algorithm for the offline problem, as well as a polynomial-time O(log m)-size-competitive algorithm for the incremental problem. Our upper bound proofs reduce the incremental medians problem to the following online bidding problem: faced with some unknown threshold T ∈ ℝ+, an algorithm must submit "bids" b ∈ ℝ+ until it submits a bid b ≥ T, paying the sum of all its bids. We present folklore algorithms for online bidding and prove that they are optimally competitive. We extend some of the above results for incremental medians to approximately metric distance functions and to incremental fractional medians. Finally, we consider a restricted version of the incremental medians problem where k is restricted to one of two given values, for which we give a deterministic algorithm with a nearly optimal cost-competitive ratio. The conference version of this paper appeared in Chrobak, M., et al., Lecture Notes in Computer Science, vol. 3887, pp. 311–322, 2006. Research of M. Chrobak supported by NSF Grant CCR-0208856.
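One of the folklore online bidding strategies alluded to above is deterministic doubling: bid 1, 2, 4, ... until the bid reaches the threshold. For a threshold T ≥ 1, the final bid b satisfies b/2 < T, so the total payment 1 + 2 + ... + b < 2b < 4T, matching the optimal deterministic ratio of 4 stated above. A minimal sketch (the example threshold is illustrative):

```python
def doubling_bids(threshold):
    """Deterministic doubling strategy for online bidding: keep doubling the bid
    until it reaches the unknown threshold, paying the sum of all bids made.
    For threshold >= 1 the final bid b satisfies b/2 < threshold, so the total
    payment 1 + 2 + ... + b < 2b < 4 * threshold."""
    bid, total = 1.0, 0.0
    while True:
        total += bid
        if bid >= threshold:
            return total
        bid *= 2

# Example: a threshold just above a power of two is close to the worst case.
T = 16.5
print(doubling_bids(T), "<", 4 * T)  # 63.0 < 66.0
```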

12.
Call centre scheduling aims to determine the workforce needed to meet target service levels. The service level depends on the mean arrival rate of calls, which fluctuates during the day and from day to day. The staff schedule must adjust the workforce period by period during the day, but the flexibility in doing so is limited by the organization of the workforce into shifts. The challenge is to balance salary costs against possible failures to meet service levels. In this paper, we consider uncertain arrival rates that vary according to an intra-day seasonality and a global busyness factor. Both factors (seasonal and global) are estimated from past data and are subject to errors. We propose an approach combining stochastic programming and distributionally robust optimization to minimize the total salary costs under service level constraints. The performance of the robust solution is simulated via Monte Carlo techniques and compared to the solution based on pure stochastic programming.

13.
We discuss some new geometric puzzles and the complexity of their extension to arbitrary sizes. For gate puzzles and two-layer puzzles we prove that solving them is NP-complete. Not only the solution of puzzles leads to interesting questions; puzzle design also gives rise to interesting theoretical questions. This leads to the search for instances of Partition that use only integers and are uniquely solvable. We show that instances of polynomial size exist with this property. This result also holds for partition into k subsets with the same sum: we construct instances of n integers with subset sum O(n^{k+1}), for fixed k. Part of this research was done when the first author was visiting Utrecht University, and during the Dagstuhl Seminar No. 06481 on Geometric Networks and Metric Space Embeddings in 2006.

14.
The problem of coding labeled trees has been widely studied in the literature, and several bijective codes that realize associations between labeled trees and sequences of labels have been presented. k-trees are one of the most natural and interesting generalizations of trees, and there is considerable interest in developing efficient tools to manipulate this class of graphs, since many NP-complete problems have been shown to be polynomially solvable on k-trees and partial k-trees. In 1970 Rényi and Rényi generalized the Prüfer code, the first bijective code for trees, to a subset of labeled k-trees. Subsequently, non-redundant codes that realize a bijection between k-trees (or Rényi k-trees) and a well-defined set of strings were produced. In this paper we introduce a new bijective code for labeled k-trees which, to the best of our knowledge, yields the first coding and decoding algorithms running in linear time with respect to the size of the k-tree.
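For reference, the classic Prüfer code for ordinary labeled trees (the construction that Rényi and Rényi generalized) can be computed as in the sketch below; this is the textbook encoding, shown only to illustrate the kind of bijection involved, not the new k-tree code introduced in the paper.

```python
import heapq

def pruefer_code(adj):
    """Compute the Prüfer sequence of a labeled tree given as an adjacency dict.

    Repeatedly remove the smallest-labeled leaf and record its unique neighbour;
    stop when two nodes remain.  The resulting sequence of n-2 labels determines
    the tree uniquely, giving a bijection between labeled trees and sequences.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    leaves = [v for v in adj if len(adj[v]) == 1]
    heapq.heapify(leaves)
    code = []
    for _ in range(len(adj) - 2):
        leaf = heapq.heappop(leaves)
        parent = adj[leaf].pop()
        adj[parent].discard(leaf)
        code.append(parent)
        if len(adj[parent]) == 1:
            heapq.heappush(leaves, parent)
    return code

# Example: the path 0-1-2-3 has Prüfer code [1, 2].
print(pruefer_code({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```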

15.
The quickest path problem, involving two attributes, the capacity and the lead time, is to find a single path with minimum transmission time. The capacity of each arc is assumed to be deterministic in this problem. However, in many practical networks such as computer networks, telecommunication networks, and logistics networks, each arc is multistate due to failure, maintenance, etc. Such a network is called a multistate flow network. Hence, both the transmission time to deliver data through a minimal path and the minimum transmission time through a multistate flow network are not fixed. In order to reduce the transmission time, the data can be transmitted through k minimal paths simultaneously. The purpose of this paper is to evaluate the probability that d units of data can be transmitted through k minimal paths within a time threshold T. Such a probability is called the transmission reliability. A simple algorithm is proposed to generate all lower boundary points for (d, T), the minimal system states satisfying the demand within the time threshold; the transmission reliability can then be computed in terms of such points. Another algorithm is further proposed to find the optimal combination of k minimal paths with the highest transmission reliability.
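In the single-path quickest path setting referred to above, sending d data units over a path with lead time l and capacity c is often modeled as taking l + ⌈d/c⌉ time units. The sketch below (an illustrative helper with made-up numbers, not the paper's algorithm for lower boundary points) applies this model and brute-forces the best split of the demand over k paths:

```python
import math
from itertools import product

def path_time(demand, lead_time, capacity):
    """Transmission time of `demand` units over one minimal path:
    the lead time plus the number of rounds needed at the path's capacity."""
    return lead_time + math.ceil(demand / capacity)

def best_split(demand, paths):
    """Brute-force the split of `demand` units over the given paths
    (each a (lead_time, capacity) pair) that minimizes the completion time,
    i.e. the maximum per-path transmission time."""
    best = None
    for split in product(range(demand + 1), repeat=len(paths) - 1):
        loads = list(split) + [demand - sum(split)]
        if loads[-1] < 0:
            continue
        t = max(path_time(d, l, c) if d > 0 else 0
                for d, (l, c) in zip(loads, paths))
        best = t if best is None else min(best, t)
    return best

# Example: 10 units over two paths with (lead time, capacity) = (3, 2) and (5, 4).
print(best_split(10, [(3, 2), (5, 4)]))  # 6
```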

16.
Location Science, 1998, 6(1-4): 243-256
In a dynamically changing network, the costs or distances between locations change in each discrete time period. We consider the location of emergency facilities that must minimize the maximum distance to any customer on the network across all time periods. We call the problem of locating p centers over k underlying networks corresponding to k periods the k-Network p-Center problem. The problem is considered when, in each period, the network satisfies the triangle inequality. In this paper, we provide a polynomial-time 3-approximation algorithm for the Δ k-Network p-Center problem for the case k = 2. We discuss generalizations inspired by this problem to other optimization problems with multiple underlying networks, where the objective is to find a single solution that varies as little as possible from the optimum for each network. The additional combinatorial problems discussed include sorting, perfect matching, shortest path, minimum spanning tree, and minimum cut. All are shown to be NP-hard for k ≥ 2.

17.
We study the minimum s-t-cut problem in graphs with costs on the edges in the context of evolutionary algorithms. Minimum cut problems belong to the class of basic network optimization problems that occur as crucial subproblems in many real-world optimization problems and have a variety of applications in several different areas. We prove that there exist instances of the minimum s-t-cut problem that cannot be solved by standard single-objective evolutionary algorithms in reasonable time. On the other hand, we develop a bi-criteria approach based on the famous maximum-flow minimum-cut theorem that enables evolutionary algorithms to find an optimal solution in expected polynomial time.
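The maximum-flow minimum-cut theorem underlying the bi-criteria approach states that the value of a maximum s-t flow equals the cost of a minimum s-t cut. The sketch below (a standard Edmonds–Karp implementation written for this listing, not the evolutionary algorithm studied in the paper) computes that value and recovers a minimum cut from the residual graph:

```python
from collections import deque

def max_flow_min_cut(n, edges, s, t):
    """edges: list of (u, v, capacity) for an undirected graph on nodes 0..n-1.
    Returns (maximum s-t flow value, set of edges crossing one minimum s-t cut)."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c  # undirected edge: capacity available in both directions
    flow = 0
    while True:
        # Breadth-first search for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: current flow is maximum
        # Find the bottleneck capacity along the path and augment.
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    # Nodes still reachable from s in the residual graph form the s-side of a minimum cut.
    reachable = {v for v in range(n) if parent[v] != -1}
    cut = {(u, v) for u, v, _ in edges if (u in reachable) != (v in reachable)}
    return flow, cut

# Example: separating node 0 from node 3 in this small graph costs 2.
print(max_flow_min_cut(4, [(0, 1, 2), (0, 2, 1), (1, 3, 1), (2, 3, 2)], 0, 3))
```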

18.
We investigate the linear complementarity problem with uncertain parameters (ULCP), in which the uncertain parameters affect the linear mapping affinely or quadratically. Assuming that the distribution of the uncertain parameters belongs to some ambiguity set with prescribed partial information, we formulate the ULCP as a distributionally robust optimization reformulation, named the distributionally robust complementarity problem (DRCP), which minimizes the worst case of an expected complementarity measure subject to a joint chance constraint requiring that the probability of the linear mapping being nonnegative is not less than a given level. Applying conic duality and the S-procedure, we conservatively approximate the DRCP as a nonlinear semidefinite program (NSDP) with bilinear matrix inequalities, which can be solved by the NSDP solver PENLAB. Preliminary numerical tests on a constrained stochastic linear quadratic control problem show that the DRCP, as well as the corresponding solution method, is promising.
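For reference, the deterministic linear complementarity problem LCP(q, M) that these uncertain parameters perturb asks for a vector z satisfying the conditions below; this is the standard textbook definition, included here only to fix notation, not the paper's robust reformulation.

```latex
z \ge 0, \qquad w = Mz + q \ge 0, \qquad z^{\top} w = z^{\top}(Mz + q) = 0 .
```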

19.
This study focuses on a multi-period inventory problem with capital constraints and demand uncertainties. The multi-period inventory problem is formulated as an optimization model with a joint chance constraint (JCC) requiring the purchase cost for each period not to exceed the available capital with a probability guarantee. To hedge against demand uncertainties, an affinely adjustable robust optimization approach is used to convert the developed model into a robust counterpart. By approximating the JCC under a budgeted uncertainty set to which the demands belong, the robust multi-period inventory model with the JCC is transformed into a linear programming model, which can be solved efficiently. Numerical studies are reported to illustrate the robustness, practicality, and effectiveness of the proposed model and the solution approach. The numerical results show that the proposed model and solution approach outperform the sample average approximation approach. Numerical studies are used further to analyze the impact of the budget coefficient and the upper bound parameter on the inventory costs and the realized capital constraint satisfaction rate. The proposed model and solution approach are further extended to the multi-product case.
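As a hedged illustration of the affinely adjustable step (generic notation, not necessarily the exact decision rule used in the paper): the order quantity of period t is restricted to an affine function of the demands observed in earlier periods,

```latex
q_t(d) \;=\; q_t^{0} \;+\; \sum_{s=1}^{t-1} q_{t,s}\, d_s ,
```

so that the robust counterpart optimizes over the coefficients q_t^0 and q_{t,s} rather than over fixed order quantities, which is what makes a tractable (here linear) reformulation possible.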

20.
Tight Results on Minimum Entropy Set Cover
In the minimum entropy set cover problem, one is given a collection of k sets which collectively cover an n-element ground set. A feasible solution of the problem is a partition of the ground set into parts such that each part is included in one of the k given sets. Such a partition defines a probability distribution, obtained by dividing each part size by n. The goal is to find a feasible solution minimizing the (binary) entropy of the corresponding distribution. Halperin and Karp have recently proved that the greedy algorithm always returns a solution whose cost is at most the optimum plus a constant. We improve their result by showing that the greedy algorithm approximates the minimum entropy set cover problem within an additive error of 1 nat = log₂ e bits ≈ 1.4427 bits. Moreover, inspired by recent work by Feige, Lovász and Tetali on the minimum sum set cover problem, we prove that no polynomial-time algorithm can achieve a better constant, unless P = NP. We also discuss some consequences for the related minimum entropy coloring problem. G. Joret is a Research Fellow of the Fonds National de la Recherche Scientifique (FNRS).
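The greedy algorithm analyzed above repeatedly assigns the largest batch of still-uncovered elements to one of the input sets, producing a partition whose part sizes define the distribution whose entropy is measured. A minimal sketch (the example instance is made up for illustration):

```python
import math

def greedy_min_entropy_cover(ground_set, sets):
    """Greedy for minimum entropy set cover: repeatedly assign the largest
    batch of uncovered elements to a single input set, then return the parts
    and the binary entropy of the distribution given by part sizes / n.
    Assumes the given sets collectively cover the ground set."""
    uncovered = set(ground_set)
    parts = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        part = best & uncovered
        parts.append(part)
        uncovered -= part
    n = len(ground_set)
    entropy = -sum((len(p) / n) * math.log2(len(p) / n) for p in parts)
    return parts, entropy

# Example: 4 elements covered by three sets; greedy puts {1,2,3} in one part and {4} in another.
parts, h = greedy_min_entropy_cover({1, 2, 3, 4}, [{1, 2, 3}, {3, 4}, {4}])
print(parts, h)  # [{1, 2, 3}, {4}]  entropy ≈ 0.811
```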
