Similar Documents
20 similar documents found.
1.
Central to many algorithms for reconfigurable meshes (R-meshes) is the method of divide-and-conquer (DAC). Although nothing in this method per se requires that a problem be divided into subproblems of equal size, in practice almost all existing DAC-based R-mesh algorithms divide problems in this fashion. This paper demonstrates that dividing a problem evenly is not necessarily a good approach. For some problems, a natural decomposition scheme may be preferable, in which a problem is divided into subproblems along their "natural" boundaries, often resulting in an irregular decomposition. Taking this approach, we obtain a new R-mesh algorithm that triangulates a set of points in O(1) time. This algorithm is simpler than existing ones. We also obtain an R-mesh algorithm for simple-polygon triangulation, the first such algorithm for this problem.

2.
The paper describes an adaptation of genetic algorithms (GAs) to decomposition-based design of multidisciplinary systems. The coupled multidisciplinary design problem is adaptively decomposed into a number of smaller subproblems, each with fewer design variables, and the design in each subproblem is allowed to proceed in parallel. Fewer design variables allow shorter string lengths to be used in the GA-based optimization of each subproblem, reducing the number of design alternatives to be explored and hence the number of function evaluations required for convergence. A novel procedure, based on a model of the biological immune system, is proposed to account for interactions between the decomposed subproblems. This procedure also uses the genetic algorithm machinery to update, within each subproblem, the design changes of all other subproblems. The design representation scheme is therefore common to both the design optimization step and the procedure that accounts for interaction among the subproblems. The decomposition-based solution of a dual structural-control design problem is used as a test problem for the proposed approach. The convergence characteristics of the proposed approach are compared against those of a nondecomposition-based method.
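For orientation, the core GA loop that such a decomposition builds on can be sketched as follows. This is a generic binary GA (tournament selection, one-point crossover, bit-flip mutation) with assumed toy parameters, not the paper's immune-system coordination procedure.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.02):
    """Minimal binary GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():  # tournament of size 2: keep the fitter individual
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if random.random() < crossover_rate:  # one-point crossover
                cut = random.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):  # independent bit-flip mutation
                children.append([b ^ (random.random() < mutation_rate) for b in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best

# Toy fitness: maximize the number of 1-bits ("one-max").
print(genetic_algorithm(fitness=sum))
```

In the decomposed setting described above, each subproblem would run such a loop over its own shorter string.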

3.
Several decomposition methods have been proposed for the distributed optimal design of quasi-separable problems encountered in Multidisciplinary Design Optimization (MDO). Some of these methods are known to have numerical convergence difficulties that can be explained theoretically. We propose a new decomposition algorithm for quasi-separable MDO problems. In particular, we propose a decomposed problem formulation based on the augmented Lagrangian penalty function and the block coordinate descent algorithm. The proposed solution algorithm consists of inner and outer loops. In the outer loop, the augmented Lagrangian penalty parameters are updated. In the inner loop, our method alternates between solving an optimization master problem and solving disciplinary optimization subproblems. The coordinating master problem can be solved analytically; the disciplinary subproblems can be solved using commonly available gradient-based optimization algorithms. The augmented Lagrangian decomposition method is derived such that existing proofs can be used to show convergence of the decomposition algorithm to Karush–Kuhn–Tucker points of the original problem under mild assumptions. We investigate the numerical performance of the proposed method on two example problems.
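To make the inner/outer-loop structure concrete, here is a minimal numerical sketch on a toy quasi-separable problem: two quadratic "disciplines" coupled through one shared variable. The data, closed-form updates, and penalty schedule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Toy quasi-separable problem: min (x1-1)^2 + (x2-3)^2  s.t.  x1 = z, x2 = z.
a = np.array([1.0, 3.0])      # subproblem targets (illustrative data)
lam = np.zeros(2)             # Lagrange multiplier estimates
rho, z = 1.0, 0.0             # penalty parameter and shared master variable

for outer in range(20):       # outer loop: update multipliers and penalty
    for inner in range(10):   # inner loop: block coordinate descent
        # Disciplinary subproblems: argmin_x (x-a_i)^2 + lam_i*(x-z) + rho/2*(x-z)^2
        x = (2 * a - lam + rho * z) / (2 + rho)
        # Master problem, solved analytically: stationarity over z
        z = np.mean(x + lam / rho)
    lam += rho * (x - z)      # method-of-multipliers update
    rho *= 1.5                # mildly increase the penalty parameter

print("x =", x, "z =", z)     # converges near x1 = x2 = z = 2
```

The quadratic objectives make both the subproblems and the master problem solvable in closed form, mirroring the analytic master problem mentioned in the abstract.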

4.
Stasys Jukna, Algorithmica, 2014, 69(2): 461–492
We consider so-called “incremental” dynamic programming algorithms and are interested in the number of subproblems they produce. The classical dynamic programming algorithm for the Knapsack problem is incremental and produces nK subproblems and nK^2 relations (wires) between the subproblems, where n is the number of items and K is the knapsack capacity. We show that any incremental algorithm for this problem must produce about nK subproblems, and that about nK log K wires (relations between subproblems) are necessary. This holds even for the Subset-Sum problem. We also give upper and lower bounds on the number of subproblems needed to approximate the Knapsack problem. Finally, we show that the Maximum Bipartite Matching problem and the Traveling Salesman problem require an exponential number of subproblems. The goal of this paper is to leverage ideas and results from boolean circuit complexity to prove lower bounds on dynamic programming.
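For reference, the classical incremental Knapsack algorithm discussed above fills exactly this n-by-K table of subproblems, each wired to at most two earlier ones; a standard sketch:

```python
def knapsack(weights, values, K):
    """Classical DP: subproblem f[i][k] = best value using items 1..i with capacity k.
    It creates n*K subproblems, each depending on at most two earlier ones."""
    n = len(weights)
    f = [[0] * (K + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for k in range(K + 1):
            f[i][k] = f[i - 1][k]                      # skip item i
            if weights[i - 1] <= k:                    # or take item i
                f[i][k] = max(f[i][k], f[i - 1][k - weights[i - 1]] + values[i - 1])
    return f[n][K]

print(knapsack([3, 4, 2], [30, 50, 15], K=6))  # -> 65
```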

5.
Dynamic programming is a method for computing optimal solutions recursively: it solves a problem by solving its subproblems and combining their solutions. Because the subproblems involve many dependencies and constraints, the verification process is tedious; in particular, proving the correctness of imperative dynamic-programming programs is difficult. Building on functional modeling and verification of dynamic-programming algorithms in Isabelle/HOL, and by proving that an imperative dynamic-programming program is equivalent to its functional model, the complex dependencies and constraints can be avoided when proving correctness; on this basis, a design framework for imperative dynamic-programming programs and its mechanized verification are proposed. First, based on the optimization technique of dynamic programming (memoization) and its characteristic properties (optimal substructure and overlapping subproblems), the problem specification and the inductive recurrence relations are described, the loop invariants are formally constructed, and IMP (Minimalistic Imperative Programming Language) code is generated from the recurrence relations. Next, the problem specification, the loop invariants, and the generated IMP code are fed into a VCG (Verification Condition Generator), which automatically generates the verification conditions for correctness; these verification conditions are then mechanically verified in the Isabelle/HOL theorem prover. The algorithm is first designed as a general form of imperative dynamic programming and then instantiated into concrete algorithms. Finally, examples demonstrate the effectiveness of the proposed framework, providing a reference for the automated derivation and verification of dynamic-programming algorithms.
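As a minimal illustration of the kind of equivalence being verified (our example, not the paper's), consider a memoized functional recurrence and an imperative loop that compute the same function; the loop invariant is what a VCG would ask us to prove:

```python
from functools import lru_cache

# Functional model: a memoized recurrence (optimal substructure + overlapping
# subproblems), playing the role of the Isabelle/HOL functional specification.
@lru_cache(maxsize=None)
def fib_spec(n):
    return n if n < 2 else fib_spec(n - 1) + fib_spec(n - 2)

# Imperative program: the loop invariant "a = fib(i) and b = fib(i+1)" is the
# kind of fact to be discharged; equivalence with fib_spec avoids reasoning
# about the recurrence's dependency structure directly.
def fib_imp(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib_spec(n) == fib_imp(n) for n in range(20))
```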

6.
We investigate the problem of scheduling n jobs in s-stage hybrid flowshops with parallel identical machines at each stage. The objective is to find a schedule that minimizes the sum of weighted completion times of the jobs. This problem has been proven to be NP-hard. In this paper, an integer programming formulation is constructed for the problem. A new Lagrangian relaxation algorithm is presented in which precedence constraints are relaxed into the objective function by introducing Lagrangian multipliers, unlike the commonly used method of relaxing capacity constraints. In this way the relaxed problem can be decomposed into machine-type subproblems, each of which corresponds to a specific stage. A dynamic programming algorithm is designed for solving the parallel identical machine subproblems, where jobs may have negative weights. The multipliers are then iteratively updated along a subgradient direction. The new algorithm is computationally compared with the commonly used Lagrangian relaxation algorithms which, after capacity constraints are relaxed, decompose the relaxed problem into job-level subproblems and solve the subproblems using the regular and speed-up dynamic programming algorithms, respectively. Numerical results show that the new Lagrangian relaxation method produces better schedules in much shorter computation time, especially for large-scale problems.
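The multiplier/subgradient mechanics can be shown on a much smaller example. The sketch below relaxes the capacity constraint of a toy 0/1 knapsack (the conventional relaxation the paper contrasts itself with, not its precedence relaxation); the relaxed problem separates item by item and the multiplier follows a subgradient step:

```python
import numpy as np

# Toy 0/1 knapsack: max v.x  s.t.  w.x <= K.  Relax the capacity constraint
# with a multiplier lam >= 0 and minimize the resulting dual bound.
v = np.array([30.0, 50.0, 15.0, 40.0])
w = np.array([3.0, 4.0, 2.0, 5.0])
K = 6.0
lam, best_bound = 0.0, float("inf")

for t in range(1, 101):
    x = (v - lam * w > 0).astype(float)          # relaxed subproblem: per item
    bound = float((v - lam * w) @ x + lam * K)   # dual (upper) bound
    best_bound = min(best_bound, bound)
    g = K - w @ x                                # subgradient of the dual
    lam = max(0.0, lam - (1.0 / t) * g)          # projected subgradient step
print("best dual bound:", round(best_bound, 2), "final lambda:", round(lam, 2))
```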

7.
The p-hub center problem is useful for the delivery of perishable and time-sensitive services such as express mail and emergency services. In this paper, we propose a new fuzzy p-hub center problem in which the travel times are uncertain and characterized by normal fuzzy vectors. The objective of our model is to maximize the credibility that the fuzzy travel times do not exceed a predetermined acceptable efficient time point along all paths on a network. Since the proposed hub location problem is too complex for conventional optimization algorithms, we adapt an approximation approach (AA) to discretize fuzzy travel times and reformulate the original problem as a mixed-integer programming problem subject to logic constraints. After that, we take advantage of the structural characteristics to develop a parametric decomposition method that divides the approximate p-hub center problem into two mixed-integer programming subproblems. Finally, we design an improved hybrid particle swarm optimization (PSO) algorithm by combining PSO with genetic operators and local search (LS) to update and improve particles for the subproblems. We also evaluate the improved hybrid PSO algorithm against two other solution methods: a genetic algorithm (GA) and PSO without LS components. Using a simulated data set of 10 nodes, the computational results show that the improved hybrid PSO algorithm achieves better performance than GA and PSO without LS in terms of runtime and solution quality.
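A bare-bones global-best PSO loop, without the genetic operators and local search of the paper's hybrid, might look like the following sketch; the box bounds and coefficients are conventional defaults assumed for illustration:

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f on the box [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                         # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_val)]              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, -5, 5)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val              # update personal bests
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, f(g)

print(pso(lambda p: float(np.sum(p ** 2))))      # sphere function, minimum at 0
```

The paper's hybrid would additionally recombine particles with genetic operators and refine promising ones with local search.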

8.
In this paper, we propose a tabu search (TS) algorithm for the global planning problem of third-generation (3G) universal mobile telecommunications system (UMTS) networks. This problem is composed of three NP-hard subproblems: the cell, the access network and the core network planning subproblems. The global planning problem therefore consists in selecting the number, the location and the type of network nodes (including the base stations, the radio network controllers, the mobile switching centers and the serving GPRS (General Packet Radio Service) support nodes) as well as the interconnections between them. After describing our metaheuristic, we design a systematic set of experiments to assess its performance. The results show that quasi-optimal solutions can be obtained with the proposed approach.
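A generic tabu search skeleton, here on a toy bit-string objective rather than the UMTS planning neighbourhood, shows the two ingredients the method relies on: a short-term memory that forbids recently made moves, and an aspiration rule that overrides the tabu status when a move beats the best known solution.

```python
import random

def tabu_search(cost, n_bits=12, iters=200, tenure=7, seed=1):
    """Generic tabu search over bit strings: flip one bit per move; recently
    flipped positions are tabu unless the move improves the best known cost."""
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_cost = x[:], cost(x)
    tabu = {}                                  # position -> iteration until tabu
    for it in range(iters):
        candidates = []
        for i in range(n_bits):
            y = x[:]
            y[i] ^= 1
            c = cost(y)
            # Aspiration: accept a tabu move if it beats the best solution.
            if tabu.get(i, -1) < it or c < best_cost:
                candidates.append((c, i, y))
        c, i, x = min(candidates)              # best admissible neighbour
        tabu[i] = it + tenure                  # forbid flipping bit i for a while
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# Toy objective: prefer exactly 4 ones, penalize adjacent ones (illustrative).
toy = lambda b: abs(sum(b) - 4) + sum(b[i] & b[i + 1] for i in range(len(b) - 1))
print(tabu_search(toy))
```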

9.
This paper describes in full detail a model of a hierarchical classifier (HC). The original classification problem is broken down into several subproblems, and a weak classifier is built for each of them. Subproblems consist of examples from a subset of the whole set of output classes. It is essential to this classification framework that the generated subproblems overlap, i.e. some individual classes may belong to more than one subproblem. This approach makes it possible to reduce the overall risk. The individual classifiers built for the subproblems are weak, i.e. their accuracy is only a little better than that of a random classifier. The notion of weakness for a multiclass model is extended in this paper, in a way that is more intuitive than the approaches proposed so far. In the HC model described, after a single node is trained, its problem is split into several subproblems using a clustering algorithm, which is responsible for grouping classes that are classified similarly. The main focus of this paper is finding the most appropriate clustering method. Several algorithms are defined and compared. Finally, we compare the whole HC with other machine learning approaches.
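One simple way to obtain overlapping subproblems, sketched below under our own assumptions rather than the paper's exact procedure, is to cluster class centroids and let each class join its few nearest clusters, so some classes appear in more than one subproblem:

```python
import numpy as np
from sklearn.cluster import KMeans

def overlapping_class_subproblems(X, y, n_subproblems=3, overlap=2, seed=0):
    """Group classes into overlapping subproblems by clustering class centroids:
    each class joins its `overlap` nearest cluster centres, so some classes
    belong to more than one subproblem (a simplified take on the HC idea)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    km = KMeans(n_clusters=n_subproblems, n_init=10, random_state=seed).fit(centroids)
    dists = np.linalg.norm(centroids[:, None, :] - km.cluster_centers_[None], axis=2)
    subproblems = [[] for _ in range(n_subproblems)]
    for ci, c in enumerate(classes):
        for s in np.argsort(dists[ci])[:overlap]:   # the `overlap` nearest clusters
            subproblems[s].append(c)
    return subproblems

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)) + np.repeat(np.arange(6), 50)[:, None]  # 6 classes
y = np.repeat(np.arange(6), 50)
print(overlapping_class_subproblems(X, y))
```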

10.
Branch-and-bound algorithms in a system with a two-level memory hierarchy are evaluated. An efficient implementation depends on the disparity between the numbers of subproblems expanded by depth-first and best-first searches, as well as the relative speeds of the main and secondary memories. A best-first search should be used when it expands a much smaller number of subproblems than a depth-first search does and the secondary memory is relatively slow. In contrast, a depth-first search should be used when its number of expanded subproblems is close to that of a best-first search. The choice is less clear for cases in between, and these cases are studied. Two strategies are proposed and analyzed: a specialized virtual-memory system that matches the architectural design with the characteristics of the existing algorithm, and a modified branch-and-bound algorithm that can be tuned to the characteristics of the problem and the architecture. The latter strategy illustrates that designing a better algorithm is sometimes more effective than tuning the architecture alone.
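The depth-first versus best-first tradeoff can be isolated in a few lines: the same branch-and-bound code with only the frontier policy changed (a toy 0/1 knapsack; the paper's memory-hierarchy modelling is not reproduced here).

```python
import heapq

def knapsack_bb(values, weights, K, best_first=True):
    """0/1 knapsack branch-and-bound where only the frontier policy differs:
    a heap ordered by upper bound (best-first) versus a LIFO stack (depth-first)."""
    order = sorted(range(len(values)), key=lambda i: -values[i] / weights[i])
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    n, best, expanded = len(v), 0, 0

    def bound(i, val, cap):            # fractional (LP) relaxation bound
        while i < n and w[i] <= cap:
            val, cap, i = val + v[i], cap - w[i], i + 1
        return val + (v[i] / w[i] * cap if i < n else 0.0)

    frontier = [(-bound(0, 0, K), 0, 0, K)]   # (-upper bound, item, value, capacity)
    while frontier:
        nb, i, val, cap = heapq.heappop(frontier) if best_first else frontier.pop()
        expanded += 1
        best = max(best, val)          # the fixed choices alone are feasible
        if -nb <= best or i == n:      # prune: bound cannot beat the incumbent
            continue
        for child in ((i + 1, val, cap), (i + 1, val + v[i], cap - w[i])):
            if child[2] >= 0:          # capacity stays non-negative
                node = (-bound(*child), *child)
                if best_first:
                    heapq.heappush(frontier, node)
                else:
                    frontier.append(node)
    return best, expanded

vals, wts = [30, 50, 15, 40, 10], [3, 4, 2, 5, 1]
for bf in (True, False):
    print("best-first" if bf else "depth-first", knapsack_bb(vals, wts, 8, bf))
```

Best-first typically expands fewer nodes but keeps a larger frontier resident, which is exactly the memory-versus-work tension the paper studies.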

11.
For multi-class imbalanced problems, a new method based on the one-versus-one (OVO) decomposition strategy is proposed. First, the multi-class imbalanced problem is decomposed into multiple binary classification problems with the OVO strategy; binary classifiers are then built using algorithms designed for imbalanced binary classification. The original data set is processed with the SMOTE oversampling technique, redundant classifiers are handled with a distance-based relative competence weighting method, and the final output is obtained by weighted voting. Extensive experimental results on the KEEL imbalanced data sets show that the proposed algorithm has significant advantages over other classical methods.
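A stripped-down sketch of this pipeline follows, with plain random oversampling standing in for SMOTE and unweighted voting standing in for the distance-based competence weighting; the classifier choice is likewise our assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ovo_imbalanced(X, y, seed=0):
    """Simplified OVO sketch: one classifier per class pair, with naive random
    oversampling of the minority class (a stand-in for SMOTE) and plain
    majority voting (a stand-in for the competence-weighted vote)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    models = {}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            mask = (y == a) | (y == b)
            Xp, yp = X[mask], y[mask]
            # Oversample the minority class to balance the pair.
            minority = a if (yp == a).sum() < (yp == b).sum() else b
            need = abs(int((yp == a).sum()) - int((yp == b).sum()))
            idx = rng.choice(np.where(yp == minority)[0], size=need, replace=True)
            Xp, yp = np.vstack([Xp, Xp[idx]]), np.concatenate([yp, yp[idx]])
            models[(a, b)] = LogisticRegression(max_iter=1000).fit(Xp, yp)
    def predict(Xt):
        votes = np.zeros((len(Xt), len(classes)))
        for (a, b), m in models.items():
            pred = m.predict(Xt)
            for c in (a, b):
                votes[:, np.searchsorted(classes, c)] += (pred == c)
        return classes[np.argmax(votes, axis=1)]
    return predict

# Tiny imbalanced toy set: three Gaussian blobs with sizes 100 / 20 / 10.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.5, (n, 2)) for m, n in [(0, 100), (2, 20), (4, 10)]])
y = np.repeat([0, 1, 2], [100, 20, 10])
print((ovo_imbalanced(X, y)(X) == y).mean())
```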

12.
13.
Branch-and-reduce is one of the techniques widely used for solving hard problems in combinatorial optimization; its core idea is to branch the original problem into several subproblems and solve them recursively. Measure-and-conquer (weighted divide-and-conquer) is a new technique for algorithm design and time-complexity analysis. A recursive branch-and-reduce algorithm is designed for the maximum clique problem. A conventional analysis of this algorithm yields a running time of O(1.380^n p(n)), where p(n) is a polynomial function of the problem size n. Re-analyzing the same algorithm with the measure-and-conquer technique lowers the bound from O(1.380^n p(n)) to O(1.325^n p(n)). The results show that measure-and-conquer yields tighter time-complexity bounds.
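The shape of recursion that such analyses measure can be seen in a minimal branch-and-reduce for maximum clique: pick a vertex and branch on "in the clique" versus "not in the clique". This is an educational version without the reductions and bounds a competitive solver (or the analyzed algorithm) would add.

```python
def max_clique(adj, candidates=None):
    """Minimal branch-and-reduce for maximum clique: choose a vertex v and branch
    on 'v in the clique' (recurse on N(v)) versus 'v not in the clique'."""
    if candidates is None:
        candidates = set(adj)
    if not candidates:
        return set()
    v = max(candidates, key=lambda u: len(adj[u] & candidates))  # high-degree first
    with_v = {v} | max_clique(adj, candidates & adj[v])           # take v
    without_v = max_clique(adj, candidates - {v})                 # discard v
    return max(with_v, without_v, key=len)

# 5-cycle plus a chord (0-2): the maximum clique is the triangle {0, 1, 2}.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
adj = {u: set() for u in range(5)}
for u, w in edges:
    adj[u].add(w)
    adj[w].add(u)
print(max_clique(adj))
```

Measure-and-conquer tightens the analysis of such recursions by assigning non-uniform weights to vertices when measuring how much each branch shrinks the instance.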

14.
This paper addresses the problem of moving object reconstruction. Several methods have been published in the past 20 years, including stereo reconstruction as well as multi-view factorization methods. In general, reconstruction algorithms compute the 3D structure of the object and the camera parameters in a non-optimal way, and then a nonlinear, numerical optimization algorithm refines the reconstructed camera parameters and 3D coordinates. In this paper, we propose an adjustment method that is an improved version of the well-known Tomasi–Kanade factorization method. The novelty, which accounts for the high speed of the algorithm, is that the core of the proposed method is an alternation scheme in which each subproblem is solved optimally. The improved method is discussed here and compared to the widely used bundle adjustment algorithm.
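The alternation idea can be shown in the simplest rank-r factorization setting: each half-step is a linear least-squares subproblem with a closed-form optimum. This toy sketch on synthetic data illustrates the flavour, not the paper's full metric reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
F, P, r = 10, 40, 3                      # image rows (2 per frame), points, rank
M_true = rng.normal(size=(2 * F, r))     # "motion" factor
S_true = rng.normal(size=(r, P))         # "structure" factor
W = M_true @ S_true + 0.01 * rng.normal(size=(2 * F, P))  # noisy measurements

# Alternation: each half-step solves its least-squares subproblem optimally.
M = rng.normal(size=(2 * F, r))
for _ in range(50):
    S = np.linalg.lstsq(M, W, rcond=None)[0]        # optimal S given M
    M = np.linalg.lstsq(S.T, W.T, rcond=None)[0].T  # optimal M given S
print("relative residual:", np.linalg.norm(W - M @ S) / np.linalg.norm(W))
```

Because every half-step is optimal for its subproblem, the residual is monotonically non-increasing, which is what makes such alternation fast compared with a general nonlinear refinement.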

15.
We propose an empirical analysis approach for characterizing tradeoffs between different methods for comparing a set of competing algorithm designs. Our approach can provide insight into performance variation both across candidate algorithms and across instances. It can also identify the best tradeoff between evaluating a larger number of candidate algorithm designs, performing these evaluations on a larger number of problem instances, and allocating more time to each algorithm run. We applied our approach to a study of the rich algorithm design spaces offered by three highly parameterized, state-of-the-art algorithms for satisfiability and mixed integer programming, considering six different distributions of problem instances. We demonstrate that the resulting algorithm design scenarios differ in many ways, with important consequences for both automatic and manual algorithm design. We expect that both our methods and our findings will lead to tangible improvements in algorithm design methods.

16.
This paper develops an integrated model of production capacity planning and operational scheduling decisions, in which a no-wait job shop (NWJS) scheduling problem with controllable processing times is considered. The durations of the operations are assumed to be controllable variables that depend on the amount of capacity allocated to them, whereas in the classical NWJS the machine capacity, and hence the processing times, are fixed and known in advance. The resulting problem, called the no-wait job shop crashing (NWJSC) problem, is decomposed into crashing, sequencing and timetabling subproblems. To tackle the NWJSC problem, an improved hybrid timetabling procedure is suggested that combines the concepts of the non-delay and enhanced algorithms, providing better solutions than either one separately. Furthermore, an effective two-phase genetic algorithm approach, integrated with the hybrid timetabling procedure, is devised to deal with the crashing and sequencing components. The results of experimental evaluations support the outstanding performance of the proposed approach.

17.
This paper presents a new class of heuristics which embed an exact algorithm within the framework of a local search heuristic. This approach was inspired by related heuristics that we developed for a practical problem arising in electronics manufacture. The basic idea of the heuristic is to break the original problem into small subproblems having similar properties to the original problem. These subproblems are then solved using time-intensive heuristic approaches or exact algorithms, and the solution is re-embedded into the original problem. The electronics manufacturing problem where we originally used the embedded local search approach contains the Travelling Salesman Problem (TSP) as a major subproblem. In this paper we further develop our embedded search heuristic, HyperOpt, and investigate its performance for the TSP in comparison to other local search based approaches. We introduce an interesting hybrid of HyperOpt and 3-opt for asymmetric TSPs which proves more efficient than HyperOpt or 3-opt alone. Since pure local search seldom yields solutions of high quality, we also investigate the performance of the approaches in an iterated local search framework. We examine iterated approaches of the Large-Step Markov Chain and Variable Neighbourhood Search type and investigate their performance when used in combination with HyperOpt. We report extensive computational results on the performance of our heuristic approaches for asymmetric and Euclidean Travelling Salesman Problems. While for the symmetric TSP our approaches yield solutions of quality comparable to the 2-opt heuristic, the hybrid methods proposed for asymmetric problems seem capable of compensating for the time-intensive embedded heuristic by finding tours of better average quality than iterated 3-opt in many fewer iterations, providing the best known heuristic solutions for some instance classes.
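For context, the 2-opt baseline referred to above is just the following local search (a standard educational version that recomputes tour lengths rather than incremental deltas):

```python
import math
import random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, seed=0):
    """Plain 2-opt: keep reversing a segment of the tour while it shortens it."""
    rng = random.Random(seed)
    tour = list(range(len(pts)))
    rng.shuffle(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse segment
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-12:
                    tour, improved = cand, True
    return tour, tour_length(tour, pts)

rng = random.Random(42)
pts = [(rng.random(), rng.random()) for _ in range(30)]
print(two_opt(pts)[1])
```

HyperOpt replaces the simple segment reversal with an exactly solved subproblem over a small neighbourhood, and iterated variants restart such a search from perturbed tours.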

18.
Solving chemical batch process scheduling problems with Lagrangian relaxation
This paper studies a Lagrangian-relaxation-based scheduling method for chemical batch processes. A mixed-integer programming (MILP) model of the chemical batch process scheduling problem is established. By relaxing the constraints in which discrete and continuous variables coexist, the problem is decomposed into a two-level optimization problem, in which the upper level is the dual of the original problem and the lower level consists of two subproblems: one concerns the product batch sizes and the other determines the operation timetable; these two subproblems are solved by linear programming and dynamic programming, respectively. A feasible solution to the original problem is then constructed from the solution of the dual problem. Numerical experiments demonstrate the effectiveness of the method.

19.
The paper presents a multiobjective optimization problem that considers distributing multiple kinds of products from multiple sources to multiple targets. The problem is of high complexity and is difficult to solve using classical heuristics. We propose for this problem a hierarchical cooperative optimization approach that decomposes the problem into low-dimensional subcomponents and applies a Pareto-based particle swarm optimization (PSO) method to the main problem and the subproblems alternately. In particular, our approach uses multiple sub-swarms to evolve the sub-solutions concurrently, controls the detrimental effect of variable correlation by reducing the subproblem objectives, and brings together the results of the sub-swarms to construct effective solutions of the original problem. Computational experiments demonstrate that the proposed algorithm is robust and scalable, and outperforms some state-of-the-art constrained multiobjective optimization algorithms on a set of test problems.

20.
In this paper we present results that extend the sequential quadratic programming (SQP) algorithm with an additional feasibility refinement based on parametric sensitivity derivatives. The refinement is applicable in sparse SQP solvers without restriction on the problem dimensions. Parametric sensitivity analysis is a tool for post-optimality analysis of the solution of a nonlinear optimization problem. For the refinement approach we apply this technique to the quadratic subproblems in order to improve the overall algorithm. The sensitivity derivatives required for this approach can be computed without noticeable computational effort, as the system of linear equations to be solved coincides with the system already solved for the search direction computation. For similar algorithms in the context of post-optimality analysis a linear rate of convergence has been proven, and therefore an extrapolation method is applied to speed up the process. The presented algorithm has been integrated into the nonlinear programming (NLP) solver WORHP, and we perform a numerical study to evaluate different termination criteria for the proposed algorithm. Furthermore, numerical results on the CUTEst test set are shown.
