Similar documents
 Found 20 similar documents (search time: 15 ms)
1.
Stasys Jukna 《Algorithmica》2014,69(2):461-492
We consider so-called “incremental” dynamic programming algorithms and are interested in the number of subproblems they produce. The classical dynamic programming algorithm for the Knapsack problem is incremental; it produces nK subproblems and nK^2 relations (wires) between the subproblems, where n is the number of items and K is the knapsack capacity. We show that any incremental algorithm for this problem must produce about nK subproblems, and that about nK log K wires (relations between subproblems) are necessary. This holds even for the Subset-Sum problem. We also give upper and lower bounds on the number of subproblems needed to approximate the Knapsack problem. Finally, we show that the Maximum Bipartite Matching problem and the Traveling Salesman problem require an exponential number of subproblems. The goal of this paper is to leverage ideas and results from Boolean circuit complexity for proving lower bounds on dynamic programming.
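The classical recurrence mentioned above can be made concrete with a short sketch (names are illustrative, not from the paper); the table rows traversed correspond to the nK subproblems counted in the abstract:

```python
# Sketch: the classical O(nK) dynamic program for 0-1 Knapsack,
# instrumented to count the subproblems (table cells) it produces.

def knapsack_dp(weights, profits, K):
    """Return (optimal profit, number of subproblems) for capacity K."""
    n = len(weights)
    # best[c] = best profit achievable with capacity c using items seen so far
    best = [0] * (K + 1)
    subproblems = 0
    for i in range(n):
        # iterate capacities downward so each item is used at most once
        for c in range(K, weights[i] - 1, -1):
            best[c] = max(best[c], best[c - weights[i]] + profits[i])
        subproblems += K + 1  # one row per item: n rows of K+1 cells
    return best[K], subproblems

# Example: 3 items, capacity 5 -> the table has n*(K+1) = 18 cells.
value, cells = knapsack_dp([2, 3, 4], [3, 4, 5], 5)  # value = 7, cells = 18
```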

2.
We present algorithms for the following three-dimensional (3D) guillotine cutting problems: unbounded knapsack, cutting stock and strip packing. We consider the case where the items have fixed orientation and the case where orthogonal rotations around all axes are allowed. For the unbounded 3D knapsack problem, we extend the recurrence formula proposed by [1] for the rectangular knapsack problem and present a dynamic programming algorithm that uses reduced raster points. We also consider a variant of the unbounded knapsack problem in which the cuts must be staged. For the 3D cutting stock problem and its variants in which the bins have different sizes (and the cuts must be staged), we present column generation-based algorithms. Modified versions of the algorithms for the 3D cutting stock problems with stages are then used to build algorithms for the 3D strip packing problem and its variants. The computational tests performed with the algorithms described in this paper indicate that they are useful for solving instances of moderate size.

3.
The p-median problem seeks the location of p facilities on the vertices (customers) of a graph so as to minimize the sum of transportation costs for satisfying the demands of the customers from the facilities. In many real applications of the p-median problem the underlying graph is disconnected. This is the case for p-median problems defined over split administrative regions or regions geographically apart (e.g. archipelagos), and for problems coming from industry such as the optimal diversity management problem. In such cases the problem can be decomposed into smaller p-median problems which are solved in each component k for different feasible values of p_k, and the global solution is obtained by finding the best combination of p_k medians. This approach has the advantage of permitting the solution of larger instances, since only the sizes of the connected components matter and not the size of the whole graph. However, since the optimal number of facilities to select from each component is not known, it is necessary to solve p-median problems for every feasible number of facilities on each component. In this paper we give a decomposition algorithm that uses a procedure to reduce the number of subproblems to solve. Computational tests on real instances of the optimal diversity management problem and on simulated instances are reported, showing that the reduction in subproblems is significant and that optimal solutions were found within reasonable time.
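The combination step described above can be sketched as a small dynamic program over the median budget p, assuming each component's per-count optimal costs have already been computed (the names and the table layout are illustrative, not the paper's):

```python
# Sketch: combine per-component p-median cost tables into a global solution.
# cost_tables[k][j] = optimal cost of placing exactly j medians in component k
# (float("inf") marks infeasible counts, e.g. j = 0 for a nonempty component).

def combine_components(cost_tables, p):
    """Best total cost of placing exactly p medians across all components."""
    INF = float("inf")
    best = [0.0] + [INF] * p  # best[j] = min cost using j medians so far
    for table in cost_tables:
        new = [INF] * (p + 1)
        for used in range(p + 1):
            if best[used] == INF:
                continue
            for j, c in enumerate(table):
                if used + j <= p:
                    new[used + j] = min(new[used + j], best[used] + c)
        best = new
    return best[p]

INF = float("inf")
# Two components; e.g. 2 medians in the first (cost 4) + 1 in the second (cost 8).
total = combine_components([[INF, 10, 4], [INF, 8, 3]], p=3)  # -> 12
```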

4.
A common way of computing all efficient (Pareto optimal) solutions for a biobjective combinatorial optimisation problem is to first compute the extreme efficient solutions and then the remaining, non-extreme solutions. The second phase, the computation of non-extreme solutions, can be based on a “k-best” algorithm for the single-objective version of the problem or on the branch-and-bound method. A k-best algorithm computes the k best solutions in order of their objective values. We compare the performance of these two approaches applied to the biobjective minimum spanning tree problem. Our extensive computational experiments indicate the overwhelming superiority of the k-best approach. We propose heuristic enhancements to this approach which further improve its performance.

5.
The multiobjective 0-1 knapsack problem involving multiple knapsacks is a widely studied problem. In this paper, we consider a formulation of the biobjective 0-1 knapsack problem which involves a single knapsack; this formulation is more realistic and has many industrial applications. Though it is formulated using simple linear functions, it is an NP-hard problem. We consider three different types of knapsack instances, where the weight and profit of an item are (i) uncorrelated, (ii) weakly correlated, and (iii) strongly correlated, in order to obtain generalized results. First, we solve this problem using three well-known multiobjective evolutionary algorithms (MOEAs) and quantify the obtained solution fronts, observing that they show good diversity and (local) convergence. Then, we consider two heuristics and observe that the quality of the solutions obtained by the MOEAs is much inferior in terms of the extent of the solution space. Interestingly, none of the MOEAs could yield the entire coverage of the Pareto front. Therefore, based on the knowledge of the Pareto front obtained from the heuristics, we incorporate problem-specific knowledge into the initial population and obtain good quality solutions using MOEAs too. We quantify the obtained solution fronts for comparison. The main point we stress with this work is that, for real-world applications of unknown nature, it is indeed difficult to realize how good or bad the quality of the obtained solutions is. Conversely, if we know the solution space, it is trivial to obtain the desired set of solutions using MOEAs, which is a paradox in itself.
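The three instance classes named above are standard in the knapsack literature and easy to generate; a minimal sketch follows (the range R and the correlation offsets are common defaults assumed here, not values taken from the paper):

```python
# Sketch: generating the three classical 0-1 knapsack instance types
# (uncorrelated, weakly correlated, strongly correlated).
import random

def make_instance(n, kind, R=100, seed=0):
    """Return (weights, profits) for n items with the requested correlation."""
    rng = random.Random(seed)
    weights = [rng.randint(1, R) for _ in range(n)]
    if kind == "uncorrelated":
        profits = [rng.randint(1, R) for _ in range(n)]
    elif kind == "weak":    # profit close to weight
        profits = [max(1, w + rng.randint(-R // 10, R // 10)) for w in weights]
    elif kind == "strong":  # profit = weight + fixed bonus
        profits = [w + R // 10 for w in weights]
    else:
        raise ValueError(kind)
    return weights, profits
```

Strongly correlated instances are typically the hardest for branch-and-bound style solvers, since profit-to-weight ratios are nearly identical across items.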

6.
Knowledge, 2007, 20(4):426-436
In an attempt to solve multiobjective problems, various mathematical and stochastic methods have been developed. These methods operate on mathematical models, while in most cases such models are drastically simplified images of real-world problems. In this study, a hybrid intelligent system is used instead of mathematical models. The main core of the system is a fuzzy rule base which maps the decision space (Z) to the solution space (X). The system is designed on the noninferior region and gives a big picture of this region in the pattern of fuzzy rules. Since some solutions may be infeasible, a specified feedforward neural network is then used to obtain noninferior solutions in an exterior movement. In addition, numerical examples of well-known NP-hard problems (i.e. the multiobjective traveling salesman problem and the multiobjective knapsack problem) are provided to demonstrate the accuracy of the developed system.

7.
The splitting of a problem into subproblems often involves the same variable appearing in more than one of the subproblems. This makes the subproblems dependent upon one another, since a solution to one may not qualify as a solution to another. A two-stage method of splitting is described which first obtains solutions by relaxing the dependency requirement and then attempts to reconcile solutions to different subproblems. The method has been realized as part of an automatic theorem prover programmed in Lisp which takes advantage of the procedural power that Lisp provides. The program has had success with cryptarithmetic problems and problems from the blocks world, and has been used as a subroutine in a plane geometry theorem prover.

8.
In structural optimization, most successful sequential approximate optimization (SAO) algorithms solve a sequence of strictly convex subproblems using the dual of Falk. Previously, we have shown that, under certain conditions, a nonconvex nonlinear (sub)problem may also be solved using the Falk dual. In particular, we have demonstrated this for two nonconvex examples of approximate subproblems that arise in popular and important structural optimization problems. The first is used in the SAO solution of the weight minimization problem, while the topology optimization problem that results from volumetric penalization gives rise to the other. In both cases, the nonconvex subproblems arise naturally in the consideration of the physical problems, so it seems counterproductive to discard them in favor of using standard, but less well-suited, strictly convex approximations. Though we have not required that strictly convex transformations exist for these problems in order that they may be solved via a dual approach, we have noted that both of these examples can indeed be transformed into strictly convex forms. In this paper we present both the nonconvex weight minimization problem and the nonconvex topology optimization problem with volumetric penalization as instructive numerical examples to help motivate the use of nonconvex approximations as subproblems in SAO. We then explore the link between convex transformability and the salient criteria which make nonconvex problems amenable to solution via the Falk dual, and we assess the effect of the transformation on the dual problem. However, we consider only a restricted class of problems, namely separable problems that are at least C^1 continuous, and a restricted class of transformations: those in which the functions that represent the mapping are each continuous, monotonic and univariate.

9.
In this paper the minmax (regret) versions of some basic polynomially solvable deterministic network problems are discussed. It is shown that if the number of scenarios is unbounded, then the problems under consideration are not approximable within log^{1-ε} K for any ε > 0 unless NP ⊆ DTIME(n^{polylog n}), where K is the number of scenarios.

10.
In this paper, we propose a method to solve exactly the knapsack sharing problem (KSP) by using dynamic programming. The original problem (KSP) is decomposed into a set of knapsack problems. Our method is tested on correlated and uncorrelated instances from the literature. Computational results show that our method is able to find an optimal solution of large instances within reasonable computing time and low memory occupancy.

11.
On a class of branching problems in broadcasting and distribution
We introduce the following network optimization problem: given a directed graph with a cost function on the arcs, demands at the nodes, and a single source s, find the minimum cost connected subgraph from s such that its total demand is no less than a lower bound D. We describe applications of this problem to disaster relief and media broadcasting, and show that it generalizes several well-known models including the knapsack problem, the partially ordered knapsack problem, the minimum branching problem, and certain scheduling problems. We prove that our problem is strongly NP-complete and give an integer programming formulation. We also provide five heuristic approaches, illustrate them with a numerical example, and provide a computational study on both small- and large-sized, randomly generated problems. The heuristics run efficiently on the tested problems and provide solutions that, on average, are fairly close to optimal.

12.
In this paper, a simple idea based on the midpoint integration rule is utilized to solve a particular class of mechanics problems; namely, static problems defined on unbounded domains where the solution is required to be accurate only in an interior region (and not in the far field). By developing a finite element mesh that approximates the stiffness of an unbounded domain directly (without approximating the far-field displacement profile first), the current formulation provides a superior alternative to infinite elements (IEs) that have long been used to incorporate unbounded domains into the finite element method (FEM). In contrast to most IEs, the present formulation (a) requires no new shape functions or special integration rules, (b) is proved to be both accurate and efficient, and (c) is versatile enough to handle a large variety of domains including those with anisotropic, stratified media and convex polygonal corners. In addition to this, the proposed model leads to the derivation of a simple error expression that provides an explicit correlation between the mesh parameters and the accuracy achieved. This error expression can be used to calculate the accuracy of a given mesh a priori. This, in turn, allows one to generate the most efficient mesh capable of achieving a desired accuracy by solving a mesh optimization problem. We formulate such an optimization problem, solve it and use the results to develop a practical mesh generation methodology. This methodology does not require any additional computation on the part of the user, and can hence be used in practical situations to quickly generate an efficient and near optimal finite element mesh that models an unbounded domain to the required accuracy. Numerical examples involving practical problems are presented at the end to illustrate the effectiveness of this method.

13.
International Journal of Computer Mathematics, 2012, 89(16):3380-3393
This paper is concerned with a variant of the multiple knapsack problem (MKP), where knapsacks are available by paying certain ‘costs’, and we have a fixed budget to buy these knapsacks. The problem is then to determine the set of knapsacks to be purchased, as well as to allocate items to the accepted knapsacks, in such a way as to maximize the net total profit. We call this the budget-constrained MKP and present a branch-and-bound algorithm to solve this problem to optimality. We employ the Lagrangian relaxation approach to obtain an upper bound. Together with the lower bound obtained by a greedy heuristic, we apply the pegging test to reduce the problem size. Next, in the branch-and-bound framework, we make use of the Lagrangian multipliers obtained above for pruning subproblems, and at each terminal subproblem we solve the MKP exactly by calling the MULKNAP code [D. Pisinger, An exact algorithm for large multiple knapsack problem, European J. Oper. Res. 114 (1999), pp. 528–541]. Thus, we were able to solve test problems with up to 160,000 items and 150 knapsacks within a few minutes in our computing environment. However, solving instances with a relatively large number of knapsacks, compared with the number of items, still remains hard. This is due to the weakness of MULKNAP on this type of problem, and our algorithm inherits this weakness as well.

14.
The Variable-Sized Bin Packing Problem (abbreviated as VSBPP or VBP) is a well-known generalization of the NP-hard Bin Packing Problem (BP) where the items can be packed in bins of M given sizes. The objective is to minimize the total capacity of the bins used. We present an asymptotic approximation scheme (AFPTAS) for VBP and BP with performance guarantee \(A_{\varepsilon}(I) \leq (1+\varepsilon)\mathrm{OPT}(I) + \mathcal{O}(\log^{2}(\frac{1}{\varepsilon}))\) for any problem instance I and any ε > 0. The additive term is much smaller than the additive term of previously known AFPTAS. The running time of the algorithm is \(\mathcal{O}(\frac{1}{\varepsilon^{6}} \log(\frac{1}{\varepsilon}) + \log(\frac{1}{\varepsilon})\,n)\) for BP and \(\mathcal{O}(\frac{1}{\varepsilon^{6}} \log^{2}(\frac{1}{\varepsilon}) + M + \log(\frac{1}{\varepsilon})\,n)\) for VBP, which is an improvement over previously known algorithms. Many approximation algorithms have to solve subproblems, for example instances of the Knapsack Problem (KP) or one of its variants. These subproblems, like KP itself, are in many cases NP-hard. Our AFPTAS for VBP must in fact solve a generalization of KP, the Knapsack Problem with Inversely Proportional Profits (KPIP). In this problem, one of several knapsack sizes has to be chosen. At the same time, the item profits are inversely proportional to the chosen knapsack size, so that the largest knapsack does not in general yield the largest profit. We introduce KPIP in this paper and develop an approximation scheme for it by extending Lawler's algorithm for KP. Thus, we are able to improve the running time of our AFPTAS for VBP.

15.
We investigate the problem of scheduling n jobs in s-stage hybrid flowshops with parallel identical machines at each stage. The objective is to find a schedule that minimizes the sum of weighted completion times of the jobs. This problem has been proven to be NP-hard. In this paper, an integer programming formulation is constructed for the problem. A new Lagrangian relaxation algorithm is presented in which precedence constraints are relaxed into the objective function by introducing Lagrangian multipliers, unlike the commonly used method of relaxing capacity constraints. In this way the relaxed problem can be decomposed into machine-type subproblems, each of which corresponds to a specific stage. A dynamic programming algorithm is designed for solving parallel identical machine subproblems where jobs may have negative weights. The multipliers are then iteratively updated along a subgradient direction. The new algorithm is computationally compared with the commonly used Lagrangian relaxation algorithms which, after capacity constraints are relaxed, decompose the relaxed problem into job-level subproblems and solve the subproblems by using the regular and speed-up dynamic programming algorithms, respectively. Numerical results show that the new Lagrangian relaxation method produces better schedules in much shorter computation time, especially for large-scale problems.
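The multiplier update mentioned above is the standard projected subgradient step; a minimal sketch (function and variable names are illustrative, and the projection onto the nonnegative orthant applies to multipliers of inequality constraints):

```python
# Sketch: one projected subgradient update of Lagrangian multipliers.
# `violations` holds the constraint violations g(x) of the current relaxed
# solution; multipliers move along them with step size `step` and are
# clipped at zero: lambda := max(0, lambda + step * g).

def subgradient_step(multipliers, violations, step):
    """Return the updated multiplier vector after one subgradient move."""
    return [max(0.0, lam + step * g) for lam, g in zip(multipliers, violations)]

lam = subgradient_step([0.0, 1.0], [2.0, -3.0], step=0.5)  # -> [1.0, 0.0]
```

In practice the step size is diminished over iterations (e.g. proportional to the gap between the best known primal value and the current dual value) so that the dual bound converges.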

16.
Nielsen type arguments have been used to prove some problems in free groups (e.g., the generalized word problem) [2] to be P-complete. In this paper we extend this approach. Having a Nielsen reduced set of generators for subgroups H and K, one can solve many intersection and conjugacy problems in polynomial time in a uniform way. We study the solvability of (i) ∃h ∈ H, k ∈ K: hx = yk in F, and (ii) ∃w ∈ F: w⁻¹Hw = K, and characterize the set of solutions. For (i) this leads to an algorithm for computing a set of generators for H ∩ K (and a new proof that free groups have the Howson property). For (ii) this gives a fast solution of Moldavanskii's conjugacy problem; an algorithm for computing the normal hull of H then gives a representation of all solutions. All the algorithms run in polynomial time, and the decision problems are proved to be P-complete under log-space reducibility.

17.
The solution of intractable problems implies the use of heuristics. Quantum computers may find use in optimization problems, but have yet to solve any NP-hard problem. This paper demonstrates results in game theory for domain transference and the reuse of problem-solving knowledge through the application of learned heuristics. It goes on to explore the possibilities for acquiring heuristics for the solution of the NP-hard Traveling Salesman Problem (TSP). Here, it is found that simple heuristics (e.g., pairwise exchange) often work best in the context of more or less sophisticated experimental designs. Often, these problems are not amenable to exclusively logic-based solutions but rather require the application of hybrid approaches predicated on search. In general, such approaches are based on randomization and supported by parallel processing. This means that heuristic solutions emerge from attempts to randomize the search space. The paper goes on to present a constructive proof of the unbounded density of knowledge in support of the Semantic Randomization Theorem (SRT). It highlights this result and its potential impact on the community of machine learning researchers.

18.
In this paper, we consider the problem of determining a best compromise solution for the multi-objective assignment problem. Such a solution minimizes a scalarizing function, such as the weighted Tchebychev norm or a reference point achievement function. To solve this problem, we resort to a ranking (or k-best) algorithm which enumerates feasible solutions according to an appropriate weighted sum until a condition ensuring that an optimal solution has been found is met. The ranking algorithm is based on a branch-and-bound scheme. We study how to implement this procedure efficiently by considering different algorithmic variants within it: the choice of the weighted sum, and the branching and bounding schemes. We present an experimental analysis that enables us to point out the best variants, and we provide experimental results showing the remarkable efficiency of the procedure, even for large instances.
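The stopping condition for such a ranking procedure can be sketched as follows, for the weighted Tchebychev norm with nonnegative weights and an ideal point that is componentwise no larger than every feasible objective vector. Since the weighted sum of deviations upper-bounds m times their maximum, once the weighted sum of the next ranked solution reaches m times the best Tchebychev value found, no later solution can improve it (names and the enumeration interface are illustrative, not the paper's):

```python
# Sketch: scan solutions ranked by weighted sum; stop when the weighted-sum
# bound proves no better Tchebychev (compromise) value can follow.

def best_compromise(candidates_sorted, weights, ideal):
    """candidates_sorted: objective vectors in nondecreasing weighted-sum order."""
    m = len(weights)

    def tcheb(f):  # weighted Tchebychev distance to the ideal point
        return max(w * (fi - zi) for w, fi, zi in zip(weights, f, ideal))

    def wsum_gap(f):  # weighted sum of deviations from the ideal point
        return sum(w * (fi - zi) for w, fi, zi in zip(weights, f, ideal))

    best = None
    for f in candidates_sorted:
        # wsum_gap(f) <= m * tcheb(f), so this bound is safe for all later f
        if best is not None and wsum_gap(f) >= m * best[0]:
            break
        val = tcheb(f)
        if best is None or val < best[0]:
            best = (val, f)
    return best

# Weighted sums: 4, 4, 5, 5 (already sorted); the scan stops at (3, 2).
result = best_compromise([(1, 3), (2, 2), (3, 2), (5, 0)], (1, 1), (0, 0))
```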

19.
In this paper we apply the kernel search framework to the solution of the strongly NP-hard multi-dimensional knapsack problem (MKP). Kernel search is a heuristic framework based on the identification of a restricted set of promising items (the kernel) and on the exact solution of ILP sub-problems. Initially, the continuous relaxation of the MKP, solved on the complete set of available items, is used to identify the initial kernel. Then, a sequence of ILP sub-problems is solved, where each sub-problem is restricted to the current kernel and to a subset of the other items. Each ILP sub-problem may find better solutions than the previous one and identify further items to insert into the kernel. Kernel search was initially proposed to solve a complex portfolio optimization problem. In this paper we show that the method has general key features that make it appropriate for other combinatorial problems that use binary variables to model the decision of whether or not to select items. We adapt kernel search to the solution of the MKP and show that the method is very effective and efficient compared with known problem-specific approaches. Moreover, the best known values of some MKP benchmark problems from the MIPLIB library have been improved.
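The loop structure described above can be sketched generically; here `solve_restricted` stands in for an exact ILP solver on a restricted item set, and all names, the bucket scheme, and the promotion rule are illustrative assumptions rather than the paper's exact design:

```python
# Sketch of the kernel search loop: rank items by their LP-relaxation value,
# seed the kernel with the most promising bucket, then repeatedly solve a
# restricted subproblem on kernel + one bucket, promoting bucket items that
# enter the incumbent solution into the kernel.

def kernel_search(items, lp_value, bucket_size, solve_restricted):
    """solve_restricted(subset) -> (objective value, set of chosen items)."""
    ranked = sorted(items, key=lp_value, reverse=True)
    kernel = list(ranked[:bucket_size])
    best_value, best_sol = solve_restricted(kernel)
    for i in range(bucket_size, len(ranked), bucket_size):
        bucket = ranked[i:i + bucket_size]
        value, sol = solve_restricted(kernel + bucket)
        if value > best_value:
            best_value, best_sol = value, sol
        # grow the kernel with bucket items that proved useful
        kernel += [it for it in bucket if it in sol]
    return best_value, best_sol

# Toy stand-in solver: "select" every item with value >= 5.
def toy_solver(subset):
    chosen = {it for it in subset if it >= 5}
    return sum(chosen), chosen

best = kernel_search([1, 7, 3, 9, 2], lambda x: x, 2, toy_solver)  # (16, {9, 7})
```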

20.
Many real-world problems belong to the family of discrete optimization problems. Most of these problems are NP-hard and difficult to solve efficiently using classical linear and convex optimization methods. In addition, the computational difficulty of these optimization tasks increases rapidly with the number of decision variables. A further difficulty can be caused by the search space being intrinsically multimodal and non-convex. In such a case, it is more desirable to have an effective optimization method that can cope better with these problem characteristics. Binary particle swarm optimization (BPSO) is a simple and effective discrete optimization method. The original BPSO and its variants have been used to solve a number of classic discrete optimization problems. However, it is reported that the original BPSO and its variants are unable to provide satisfactory results due to the use of inappropriate transfer functions. More specifically, these transfer functions are unable to provide BPSO with a good balance between exploration and exploitation in the search space, limiting their performance. To overcome this problem, this paper proposes to employ a time-varying transfer function in BPSO, namely TVT-BPSO. To understand the search behaviour of TVT-BPSO, we provide a systematic analysis of its exploration and exploitation capability. Our experimental results demonstrate that TVT-BPSO outperforms existing BPSO variants on both low-dimensional and high-dimensional classical 0-1 knapsack problems, as well as a 200-member truss problem, suggesting that TVT-BPSO scales better to high-dimensional combinatorial problems than the existing BPSO variants and other metaheuristic algorithms.
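A time-varying transfer function of the kind described can be sketched as a sigmoid whose shape parameter shrinks over the run, so early iterations keep bit-flip probabilities near 0.5 (exploration) and late iterations sharpen them (exploitation). The parameter names and the specific schedule below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch: a time-varying sigmoid transfer function for binary PSO.
# phi decreases linearly from phi_max to phi_min over t_max iterations;
# a large phi flattens the sigmoid (probabilities near 0.5 -> exploration),
# a small phi steepens it (decisive bit probabilities -> exploitation).
import math

def tv_transfer(v, t, t_max, phi_max=5.0, phi_min=1.0):
    """Map a real-valued velocity v to a bit probability at iteration t."""
    phi = phi_max - (phi_max - phi_min) * t / t_max
    return 1.0 / (1.0 + math.exp(-v / phi))
```

Each particle's bit is then set to 1 with probability `tv_transfer(v, t, t_max)`, so the same velocity yields increasingly decisive bits as the run progresses.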


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号