Similar Literature
20 similar documents found.
1.
In tree (or graph) search, most algorithms mainly use the local heuristic information of each individual node. In statistical heuristic search algorithms, however, the global information about subtrees is used effectively, so that the computational complexity is greatly reduced. In this paper the problem of how the global information can be extracted from the local information is discussed. Some features of the SA algorithm are also considered.

2.
This paper presents a new class of heuristics which embed an exact algorithm within the framework of a local search heuristic. This approach was inspired by related heuristics which we developed for a practical problem arising in electronics manufacture. The basic idea of this heuristic is to break the original problem into small subproblems having similar properties to the original problem. These subproblems are then solved using time-intensive heuristic approaches or exact algorithms, and the solution is re-embedded into the original problem. The electronics manufacturing problem where we originally used the embedded local search approach contains the Travelling Salesman Problem (TSP) as a major subproblem. In this paper we further develop our embedded search heuristic, HyperOpt, and investigate its performance for the TSP in comparison to other local search based approaches. We introduce an interesting hybrid of HyperOpt and 3-opt for asymmetric TSPs which proves more efficient than HyperOpt or 3-opt alone. Since pure local search seldom yields solutions of high quality, we also investigate the performance of the approaches in an iterated local search framework. We examine iterated approaches of the Large-Step Markov Chain and Variable Neighbourhood Search type and investigate their performance when used in combination with HyperOpt. We report extensive computational results to investigate the performance of our heuristic approaches for asymmetric and Euclidean Travelling Salesman Problems. While for the symmetric TSP our approaches yield solutions of quality comparable to the 2-opt heuristic, the hybrid methods proposed for asymmetric problems seem capable of compensating for the time-intensive embedded heuristic by finding tours of better average quality than iterated 3-opt in many fewer iterations, and provide the best heuristic solutions known for some instance classes.
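The 2-opt and 3-opt operators mentioned above are standard tour-improvement moves for TSP local search. As a point of reference only, here is a minimal 2-opt sketch on a random Euclidean instance; it is not the authors' HyperOpt, and the helper names are illustrative.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Plain 2-opt local search: reverse tour segments while an improving move exists."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Gain of replacing edges (a,b) and (c,d) by (a,c) and (b,d).
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % len(tour)]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(30)]
    dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in pts]
            for (x1, y1) in pts]
    tour = list(range(len(pts)))
    print("before:", round(tour_length(tour, dist), 3))
    best = two_opt(tour, dist)
    print("after :", round(tour_length(best, dist), 3))
```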

3.
Most feature selection algorithms based on information-theoretic learning (ITL) adopt a ranking process or greedy search as their searching strategy. The former selects features individually, so it ignores feature interaction and dependencies. The latter relies heavily on the search path, as only one path is explored with no possibility of backtracking. In addition, both strategies typically lead to heuristic algorithms. To cope with these problems, this article proposes a novel feature selection framework based on correntropy in ITL, namely correntropy-based feature selection using binary projection (BPFS). Our framework selects features by projecting the original high-dimensional data to a low-dimensional space through a special binary projection matrix. The formulated objective function aims at maximizing the correntropy between the selected features and the class labels, and it can be efficiently optimized via standard mathematical tools. We apply the half-quadratic method to optimize the objective function in an iterative manner, where each iteration reduces to an assignment subproblem that can be solved highly efficiently with off-the-shelf toolboxes. Comparative experiments on six real-world datasets indicate that our framework is effective and efficient.

4.
Intelligent algorithm recommendation is an important branch of hyper-heuristic research. Its goal is to automatically select, from many "online" algorithms, the one best suited to the problem at hand, thereby greatly improving problem-solving efficiency. On this basis, an intelligent recommendation system for optimization algorithms is proposed and validated. Its theoretical foundations are the No Free Lunch theorem and Rice's algorithm selection framework, together with the assumption that a latent relationship exists between problem features and algorithm performance, so that the algorithm recommendation problem can be converted into a multi-class classification problem. To verify this assumption, the multi-mode resource-constrained project scheduling problem is used as the test dataset, meta-heuristic optimization algorithms such as particle swarm optimization, simulated annealing, tabu search and artificial bee colony are used as the recommendation candidates, and a multi-class support vector machine strategy is used to perform the classification-based recommendation. Cross-validation results show recommendation accuracies above 90% and excellent performance on all evaluation metrics.
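Cast this way, algorithm recommendation becomes ordinary multi-class classification. The sketch below illustrates that idea with scikit-learn's SVC (one-vs-one multi-class by default); the features and labels are synthetic placeholders, not the paper's MRCPSP instance features or performance data.

```python
# Rice-style algorithm recommendation sketch: instance features -> best-performing
# metaheuristic, treated as a multi-class classification problem.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                                  # synthetic per-instance features
y = rng.choice(["PSO", "SA", "TabuSearch", "ABC"], size=200)   # synthetic "best algorithm" labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)                      # cross-validated recommendation accuracy
print("mean accuracy:", scores.mean())
```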

5.
We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an adversarial relationship between two players, each trying to win the game. In a constraint-satisfaction problem, such as the 8-Queens problem, the task is to find a state that satisfies a set of constraints. All of these problems are computationally intensive, and heuristic evaluation functions are used to reduce the amount of computation required to solve them. In each case we explain the nature of the evaluation functions used, how they are used in search algorithms, and how they can be automatically learned or acquired.
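A classic example of such an evaluation function for single-agent path-finding is the Manhattan-distance heuristic for the Fifteen Puzzle mentioned above. A minimal sketch, with an illustrative state encoding (flat tuples, 0 denoting the blank):

```python
def manhattan_distance(state, goal, width=4):
    """Admissible Fifteen Puzzle heuristic: sum over all tiles of the horizontal plus
    vertical distance from each tile's current cell to its goal cell."""
    goal_pos = {tile: divmod(i, width) for i, tile in enumerate(goal)}
    h = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue                      # the blank does not contribute
        r, c = divmod(i, width)
        gr, gc = goal_pos[tile]
        h += abs(r - gr) + abs(c - gc)
    return h

goal = tuple(range(1, 16)) + (0,)
start = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 13, 14, 11, 0)
print(manhattan_distance(start, goal))    # a lower bound on the number of moves to solve
```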

6.
Real-valued heuristic functions have been extensively used as a means of constraining search in combinatorially large problem spaces. In this paper we look at an alternative approach, called strategic search, in which heuristic information is expressed in the form of problem-specific strategies. A strategy is considered to be a (possibly partial) function which, for a given state in some problem domain, returns sequences of states over the problem domain. The strategy is intended to guide one towards a goal state, but there is no guarantee of success. Strategic search generates a search graph by following some strategy or set of strategies, backtracking to previous choice points when the current strategy fails. We first examine algorithms for performing strategic search using both deterministic and non-deterministic (multiple) strategies. Some examples are given which indicate that strategic search can outperform standard heuristic search methods. We then describe some algorithms which are shown to be admissible when the strategy satisfies certain conditions. The construction of strategies is also considered, and means for acquiring strategic knowledge both from analogous problems and from example execution traces are described. Finally, we indicate how these techniques can be extended to hierarchically organized problem spaces and show how meta-level strategies can be used to guide the application of object-level strategies.

7.
Clustering is a powerful machine learning technique that groups “similar” data points based on their characteristics. Many clustering algorithms work by approximating the minimization of an objective function, namely the sum of within-cluster distances between points. The straightforward approach involves examining all possible assignments of points to each of the clusters. This approach guarantees that the solution will be a global minimum; however, the number of possible assignments scales quickly with the number of data points and becomes computationally intractable even for very small datasets. In order to circumvent this issue, cost function minima are found using popular local-search-based heuristic approaches such as k-means and hierarchical clustering. Due to their greedy nature, such techniques do not guarantee that a global minimum will be found and can lead to sub-optimal clustering assignments. Other classes of global-search-based techniques, such as simulated annealing, tabu search, and genetic algorithms, may offer better quality results but can be too time-consuming to implement. In this work, we describe how quantum annealing can be used to carry out clustering. We map the clustering objective to a quadratic binary optimization problem and discuss two clustering algorithms which are then implemented on commercially available quantum annealing hardware, as well as on the purely classical solver “qbsolv.” The first algorithm assigns N data points to K clusters, and the second one can be used to perform binary clustering in a hierarchical manner. We present our results in the form of benchmarks against well-known k-means clustering and discuss the advantages and disadvantages of the proposed techniques.
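The intractability of exhaustive assignment enumeration is easy to see in code. The sketch below evaluates a within-cluster objective and enumerates all k^n assignments of a toy instance; it illustrates the classical objective only, not the QUBO mapping or the quantum annealing formulation used in the paper.

```python
from itertools import product

def within_cluster_cost(points, assignment, k):
    """Sum of pairwise squared distances inside each cluster (one common clustering objective)."""
    cost = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assignment) if a == c]
        for i in range(len(members)):
            for j in range(i + 1, len(members)):
                cost += sum((x - y) ** 2 for x, y in zip(members[i], members[j]))
    return cost

points = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.1), (0.9, 1.0), (2.0, 0.1), (2.1, 0.0)]
k = 3
# Exhaustive search over all k**n assignments: guaranteed global optimum, but k**n explodes.
best = min(product(range(k), repeat=len(points)),
           key=lambda a: within_cluster_cost(points, a, k))
print("optimal assignment:", best, "cost:", round(within_cluster_cost(points, best, k), 3))
print("assignments examined:", k ** len(points))   # already 729 for 6 points; 3**100 is astronomical
```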

8.
In this paper we apply the kernel search framework to the solution of the strongly NP-hard multi-dimensional knapsack problem (MKP). Kernel search is a heuristic framework based on the identification of a restricted set of promising items (the kernel) and on the exact solution of ILP sub-problems. Initially, the continuous relaxation of the MKP, solved on the complete set of available items, is used to identify the initial kernel. Then a sequence of ILP sub-problems is solved, where each sub-problem is restricted to the current kernel and to a subset of the other items. Each ILP sub-problem may find better solutions with respect to the previous one and identify further items to insert into the kernel. Kernel search was initially proposed to solve a complex portfolio optimization problem. In this paper we show that the method has general key features that make it appropriate for other combinatorial problems that use binary variables to model the decision of whether or not to select items. We adapt kernel search to the solution of the MKP and show that the method is very effective and efficient with respect to known problem-specific approaches. Moreover, the best known values of some MKP benchmark problems from the MIPLIB library have been improved.

9.
A Memetic Approach to the Nurse Rostering Problem
Constructing timetables of work for personnel in healthcare institutions is known to be a highly constrained and difficult problem to solve. In this paper, we discuss a commercial system, together with the model it uses, for this rostering problem. We show that tabu search heuristics can be made effective, particularly for obtaining reasonably good solutions quickly for smaller rostering problems. We discuss the robustness issues that arise in practice for tabu search heuristics. This paper introduces a range of new memetic approaches for the problem, which use a steepest descent improvement heuristic within a genetic algorithm framework. We provide empirical evidence to demonstrate the best features of a memetic algorithm for the rostering problem, particularly the nature of an effective recombination operator, and show that these memetic approaches can handle initialisation parameters and a range of instances more robustly than tabu search algorithms, at the expense of longer solution times. Having presented tabu search and memetic approaches (both with benefits and drawbacks), we finally present an algorithm that is a hybrid of both approaches. This technique produces better solutions than either of the earlier approaches and is relatively unaffected by initialisation and parameter changes, combining some of the best features of each approach to create a hybrid which is greater than the sum of its component algorithms.
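A memetic algorithm of the kind described combines genetic-algorithm operators with a steepest-descent improvement step applied to each offspring. The sketch below is a generic skeleton on a toy bit-string objective; it is not the nurse rostering model or the commercial system's operators.

```python
import random

def steepest_descent(sol, fitness):
    """Improvement heuristic: keep applying the single best bit-flip while it helps."""
    while True:
        best_i, best_f = None, fitness(sol)
        for i in range(len(sol)):
            sol[i] ^= 1
            f = fitness(sol)
            sol[i] ^= 1
            if f > best_f:
                best_i, best_f = i, f
        if best_i is None:
            return sol
        sol[best_i] ^= 1

def memetic_search(fitness, n_bits=30, pop_size=20, generations=40, seed=0):
    """Skeleton of a memetic algorithm: GA operators plus local search on every offspring."""
    rnd = random.Random(seed)
    pop = [steepest_descent([rnd.randint(0, 1) for _ in range(n_bits)], fitness)
           for _ in range(pop_size)]
    for _ in range(generations):
        p1, p2 = rnd.sample(pop, 2)
        child = [rnd.choice(pair) for pair in zip(p1, p2)]        # uniform recombination
        if rnd.random() < 0.2:
            child[rnd.randrange(n_bits)] ^= 1                     # mutation
        child = steepest_descent(child, fitness)                  # memetic step
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child                                    # steady-state replacement
    return max(pop, key=fitness)

# Toy objective standing in for a rostering cost: reward alternating patterns.
best = memetic_search(lambda s: sum(s[i] != s[i + 1] for i in range(len(s) - 1)))
print(sum(best[i] != best[i + 1] for i in range(len(best) - 1)), "out of", len(best) - 1)
```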

10.
As is well known, the greedy algorithm is commonly used as a local optimization method in many heuristic algorithms such as ant colony optimization, tabu search, and genetic algorithms, and it is important to increase the convergence speed and learning accuracy of greedy search in the space of equivalence classes of Bayesian network structures. An improved algorithm, I-GREEDY-E, is presented: it uses mutual information and conditional independence tests to first construct a draft of the real network, and then greedily explores for the optimal structure in the space of equivalence classes starting from the draft. Numerical experiments show that both the BIC score and the structural error improve, while the number of iterations and running time are greatly reduced. Therefore, the structure that best matches the data can be determined relatively quickly by the improved algorithm.

11.
In this paper we propose a feature selection method for symbolic interval data based on a similarity margin. In this method, classes are parameterized by an interval prototype obtained through an appropriate learning process. A similarity measure is defined in order to estimate the similarity between an interval feature value and each class prototype. A similarity margin concept is then introduced. Heuristic search is avoided by optimizing an objective function that evaluates the importance (weight) of each interval feature within the similarity margin framework. The experimental results show that the proposed method selects meaningful features for interval data. In particular, the method we propose yields a significant improvement on the classification task for three real-world datasets.

12.
The cumulative capacitated vehicle routing problem (CCVRP) is a relatively new variant of the classical capacitated vehicle routing problem; it is equivalent to a traveling repairman problem with capacity constraints and a homogeneous vehicle fleet, and it aims to minimize the total arrival time at customers. Many real-world applications can be modeled by this problem, such as the important application of humanitarian aid delivery following a natural disaster. In this paper, two heuristics are proposed. The first is a constructive heuristic used to generate an initial solution, and the second is a skewed variable neighborhood search (SVNS) heuristic. The SVNS algorithm starts with the initial solution. At each iteration, a perturbation phase and a local search phase are used to improve the solution of the CCVRP, and a distance function in the acceptance-criterion phase is used to improve the exploration of faraway valleys. The algorithm is applied to a set of benchmarks, and the comparison results show that the proposed algorithms provide better solutions than those reported in the previous literature on memetic algorithms and adaptive large neighborhood search heuristics.
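The distinguishing ingredient of skewed VNS is its acceptance test, commonly written as f(x') − α·ρ(x, x') < f(x): a slightly worse solution is accepted if it lies far from the incumbent. A minimal sketch of that rule, assuming a problem-specific distance function ρ (the numbers below are illustrative):

```python
def svns_accept(f_new, f_best, distance, alpha):
    """Skewed VNS acceptance: tolerate a worse solution when it is sufficiently far
    from the incumbent, so the search can move to distant valleys.
    'distance' is a problem-specific dissimilarity between the two solutions."""
    return f_new - alpha * distance < f_best

# A solution 4 units worse than the incumbent is accepted only if it differs a lot:
print(svns_accept(f_new=104.0, f_best=100.0, distance=10, alpha=0.5))   # True
print(svns_accept(f_new=104.0, f_best=100.0, distance=2,  alpha=0.5))   # False
```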

13.
The relaxed plan heuristic is a domain-independent heuristic for automated planning that computes an estimate of the cost of achieving the goals from a given state. This heuristic is based on the idea of solving a relaxed version of the planning task. Due to the great size of the state space, most heuristic search algorithms in planning suffer from scalability problems. These algorithms have to evaluate a great number of states, and the time devoted to heuristic evaluations is one of the causes of the scalability problems. We argue that one way to alleviate this problem is to break ties in the heuristic value using additional information computed during the relaxed plan construction. We add a complementary value to the heuristic, allowing algorithms to discriminate between states with relaxed plans of the same length but of different difficulty. The experimental evaluation on some planning benchmarks shows that the modification to the original heuristic can reduce the number of evaluated nodes for the most common algorithms used in heuristic planning.
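One simple way to realize such tie-breaking is to order the open list by a tuple (heuristic value, complementary value). The sketch below uses a hypothetical secondary difficulty score; it is not the authors' exact relaxed-plan computation.

```python
import heapq
from itertools import count

def push(frontier, state, h_value, tie_breaker, counter):
    """Order the open list by the primary heuristic, then by the complementary value
    (e.g. estimated difficulty of the relaxed plan), then by insertion order."""
    heapq.heappush(frontier, (h_value, tie_breaker, next(counter), state))

if __name__ == "__main__":
    c = count()
    frontier = []
    # Two states with the same relaxed-plan length but different secondary difficulty:
    push(frontier, "state_a", 7, 3.2, c)
    push(frontier, "state_b", 7, 1.5, c)
    push(frontier, "state_c", 5, 9.0, c)
    while frontier:
        h, tie, _, s = heapq.heappop(frontier)
        print(s, h, tie)   # state_c first, then state_b (better tie-break), then state_a
```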

14.
Local search is an emerging paradigm for combinatorial search which has recently been shown to be very effective for a large number of combinatorial problems. It is based on the idea of navigating the search space by iteratively stepping from one solution to one of its neighbors, which are obtained by applying a simple local change to it. In this paper we present LOCAL++, an object-oriented framework to be used as a general tool for the development and implementation of local search algorithms in C++. The framework comprises a hierarchy of abstract template classes, one for each local search technique taken into account (i.e. hill-climbing, simulated annealing and tabu search). Each class specifies and implements the invariant part of the algorithm built according to the technique, and is supposed to be specialized by a concrete class once a given search problem is considered, so as to implement the problem-dependent part of the algorithm. LOCAL++ also comprises a set of abstract classes for creating new techniques by combining different search techniques and different neighborhood relations. The architecture of LOCAL++ provides a principled modularization for the solution of combinatorial search problems, and helps the designer derive a neat conceptual scheme of the application, thus facilitating the development and debugging phases. LOCAL++ proved to be flexible enough for the implementation of algorithms solving various scheduling problems.
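LOCAL++ itself is a C++ template-class hierarchy; as a hedged illustration of the same invariant/problem-dependent split, here is a Python sketch (matching the language of the other sketches on this page) with an abstract hill-climbing runner and a toy concrete problem. It is not the framework's actual API.

```python
from abc import ABC, abstractmethod

class HillClimbing(ABC):
    """Invariant part of a hill-climbing runner: the control loop is fixed, and
    subclasses plug in the problem-dependent pieces."""

    @abstractmethod
    def initial_solution(self): ...
    @abstractmethod
    def neighbors(self, solution): ...
    @abstractmethod
    def cost(self, solution): ...

    def run(self):
        current = self.initial_solution()
        while True:
            best = min(self.neighbors(current), key=self.cost, default=None)
            if best is None or self.cost(best) >= self.cost(current):
                return current                 # local optimum reached
            current = best

class OneMax(HillClimbing):
    """Concrete class supplying the problem-dependent part (a toy bit-string problem)."""
    def initial_solution(self):
        return [0] * 10
    def neighbors(self, s):
        return [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]
    def cost(self, s):
        return -sum(s)                         # minimise the negated number of ones

print(OneMax().run())
```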

15.
Local search is a paradigm for search and optimization problems which has recently been shown to be very effective for a large number of combinatorial problems. Despite the increasing interest of the research community in this subject, there is still a lack of widely accepted software tools for local search. We propose EasyLocal, an object-oriented framework for the design and analysis of local-search algorithms. The abstract classes that compose the framework specify and implement the invariant part of the algorithm and are meant to be specialized by concrete classes that supply the problem-dependent part. The framework provides the full control structures of the algorithms, and the user only has to write the problem-specific code. Furthermore, the framework comes with some tools that simplify the analysis of the algorithms. The architecture of EasyLocal provides a principled modularization for the solution of combinatorial problems by local search and helps the user derive a neat conceptual scheme of the application. It also supports the design of combinations of basic techniques and/or neighborhood structures. The framework has been tested in several application domains and has proved to be flexible enough for the implementation of algorithms for the solution of various scheduling problems.

16.
In the information retrieval framework, there are problems where the goal is to recover objects of a particular class from large sets of unlabelled objects. In some of these problems, only examples from the class we want to recover are available. For such problems, the machine learning community has developed algorithms that are able to learn binary classifiers in the absence of negative examples. Among them, we find the positive Bayesian network classifiers, algorithms that induce Bayesian network classifiers from positive and unlabelled examples. The main drawback of these algorithms is that they require some prior knowledge about the a priori probability distribution of the class. In this paper, we propose a wrapper approach to tackle learning when no such information is available, setting this probability at the optimal value in terms of the recovery of positive examples. The evaluation of classifiers in positive-unlabelled learning problems is a non-trivial question. We have also worked on this problem, and we propose a new guiding metric, which we call the pseudo-F, to be used in the search for the optimal a priori probability of the positive class. We have empirically tested the proposed metric and the wrapper classifiers on both synthetic and real-life datasets. The results obtained in this empirical comparison show that the wrapper Bayesian network classifiers provide competitive results, particularly when the actual a priori probability of the positive class is high.

17.
Artificial Intelligence (AI) techniques are widely used in the field of Expert Systems (ES), as applied to robotics, video games, self-driving vehicles and so on. Pathfinding algorithms are a class of heuristic algorithms based on AI techniques which are used in ES as decision-making functions for the purpose of solving problems that would otherwise require human competence or expertise. ES fields that use pathfinding algorithms and operate in real time face many challenges: for example, time constraints, optimality, and the memory overhead of storing the paths that are found. For these algorithms to work, appropriate problem-specific maps must be constructed. In this respect, the uniform-cost grid set-up is the most appropriate for ES applications. In this method, each node in a graph is represented as a tile, and the weight “between” tiles is set to a constant value, usually 1. In the state-of-the-art heuristic algorithms used with this data structure, multiplying the heuristic function by a weight greater than one is a well-known technique. In this paper, we present three new techniques using various weights to accelerate heuristic search of grid maps. The first technique is based on the iteration of a heuristic search algorithm associated with a weight-set w. The second technique is based on the distance between the start node and the goal node, which is then associated with w. The last technique is based on the travel cost and is associated with a weight-set α. These techniques are applicable to a wide class of heuristic search algorithms. We therefore implement them within the A*, Bidirectional A* (Bi-A*) and Jump Point Search (JPS) algorithms, thus obtaining a family of new algorithms. The use of these new algorithms results in significant improvements over current search algorithms. We evaluate them on path-planning benchmarks and show the amended JPS technique's greater stability across weight values compared with the other two techniques. However, it is also shown that this technique yields poorer results in terms of solution cost.
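The weighted-heuristic technique the paper builds on is simply f(n) = g(n) + w·h(n) with w > 1. A minimal weighted A* sketch on a 4-connected uniform-cost grid follows; the grid instance and helper names are illustrative, and this is not the authors' amended algorithms.

```python
import heapq

def weighted_astar(grid, start, goal, w=1.5):
    """Weighted A* on a 4-connected uniform-cost grid (unit move cost, 0 = free, 1 = wall).
    f(n) = g(n) + w * h(n); w > 1 trades optimality for speed."""
    def h(p):  # Manhattan distance, the usual grid heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_list = [(w * h(start), 0, start)]
    g = {start: 0}
    while open_list:
        _, g_cur, cur = heapq.heappop(open_list)
        if cur == goal:
            return g_cur                      # path cost found (not guaranteed optimal if w > 1)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                if g_cur + 1 < g.get(nxt, float("inf")):
                    g[nxt] = g_cur + 1
                    heapq.heappush(open_list, (g[nxt] + w * h(nxt), g[nxt], nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(weighted_astar(grid, (0, 0), (3, 3), w=1.5))
```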

18.
This paper proposes a novel criterion for estimating the redundancy information of selected feature sets in multi-dimensional pattern classification. An appropriate feature selection process typically maximizes the relevancy of features to each class and minimizes the redundancy between selected features. Unlike the relevancy information, which can be measured by mutual information, the redundancy information is difficult to estimate because its dynamic range varies with the characteristics of the features and classes. By utilizing a conceptual diagram of the relationship between candidate features, selected features, and class variables, this paper proposes a new criterion to accurately compute the amount of redundancy. Specifically, the redundancy term is estimated by the conditional mutual information between selected and candidate features with respect to each class variable, which does not need the cumbersome normalization process required by conventional algorithms. The proposed algorithm is implemented in a speech/music discrimination system to evaluate classification performance. Experimental results obtained by varying the number of selected features verify that the proposed method achieves higher classification accuracy than conventional algorithms.
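Assuming the conditioning variable is the class (one common reading of the criterion described above), the redundancy-style term reduces to a conditional mutual information over discrete features. A minimal sketch with toy data, not the paper's speech/music features:

```python
from collections import Counter
from math import log2

def conditional_mutual_information(xs, ys, zs):
    """I(X; Y | Z) for discrete sequences:
    sum over (x, y, z) of p(x,y,z) * log2( p(z) p(x,y,z) / (p(x,z) p(y,z)) )."""
    n = len(xs)
    pxyz = Counter(zip(xs, ys, zs))
    pxz = Counter(zip(xs, zs))
    pyz = Counter(zip(ys, zs))
    pz = Counter(zs)
    cmi = 0.0
    for (x, y, z), c in pxyz.items():
        cmi += (c / n) * log2((pz[z] * c) / (pxz[(x, z)] * pyz[(y, z)]))
    return cmi

# Candidate feature X, already-selected feature Y, class label Z (toy discrete data):
X = [0, 0, 1, 1, 0, 1, 1, 0]
Y = [0, 0, 1, 1, 0, 1, 0, 1]
Z = [0, 0, 0, 0, 1, 1, 1, 1]
print(round(conditional_mutual_information(X, Y, Z), 4))   # higher means more redundancy given the class
```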

19.
Stochastic local search (SLS) algorithms have been increasingly applied to approximate solutions of the weighted maximum satisfiability problem (MAXSAT), a model for many major problems in AI and combinatorial optimization. While MAXSAT instances generally have strong intrinsic dependencies between their variables, most SLS algorithms start the search with a random initial solution in which the value of each variable is generated independently from the same uniform distribution. In this paper, we propose a new SLS algorithm for MAXSAT based on an unconventional distribution known in quantum physics as the Bose-Einstein distribution. It provides a stochastic initialization scheme for an efficient and very simple heuristic inspired by the co-evolution of natural species, called Extremal Optimization (EO). This heuristic was introduced for finding high-quality solutions to hard optimization problems such as colouring and partitioning. We examine the effectiveness of the resulting algorithm through computational experiments on a large set of test instances and compare it with some of the most powerful existing algorithms. Our results are remarkable and show that this approach is appropriate for this class of problems.
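Extremal Optimization repeatedly ranks variables by a per-variable fitness and flips one chosen from a power-law over the ranks (the τ-EO scheme). The sketch below uses a uniform random start and an arbitrarily chosen τ; it does not reproduce the paper's Bose-Einstein initialization or its weighted-MAXSAT details.

```python
import random

def unsat_count(var, assignment, clauses):
    """Per-variable 'fitness': number of currently unsatisfied clauses the variable appears in."""
    bad = 0
    for clause in clauses:
        if var in (abs(l) for l in clause) and not any(
                (l > 0) == assignment[abs(l)] for l in clause):
            bad += 1
    return bad

def eo_maxsat(clauses, n_vars, steps=2000, tau=1.4, seed=0):
    """Tau-EO sketch for MAX-SAT: rank variables from worst to best fitness and flip the
    k-th ranked one with probability proportional to k**(-tau)."""
    rnd = random.Random(seed)
    assign = {v: rnd.random() < 0.5 for v in range(1, n_vars + 1)}
    best_sat = sum(any((l > 0) == assign[abs(l)] for l in c) for c in clauses)
    best = dict(assign)
    ranks = list(range(1, n_vars + 1))
    weights = [k ** (-tau) for k in ranks]
    for _ in range(steps):
        order = sorted(assign, key=lambda v: -unsat_count(v, assign, clauses))  # worst first
        k = rnd.choices(ranks, weights=weights)[0]
        assign[order[k - 1]] = not assign[order[k - 1]]                          # flip chosen variable
        sat = sum(any((l > 0) == assign[abs(l)] for l in c) for c in clauses)
        if sat > best_sat:
            best_sat, best = sat, dict(assign)
    return best_sat, best

# Tiny CNF in DIMACS-style literals (positive = variable true, negative = negated):
clauses = [(1, -2, 3), (-1, 2), (2, -3), (-2, 3), (1, 3)]
print(eo_maxsat(clauses, n_vars=3)[0], "of", len(clauses), "clauses satisfied")
```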

20.
The problem of assigning gates to arriving and departing flights is one of the most important problems in airport operations. We take into account the real multi-criteria nature of the problem by optimizing a total of nine gate allocation objectives, oriented toward both convenience for airport/airline services and passenger comfort. As far as we are aware, this is the largest number of objectives jointly optimized in the GAP literature. Given the complexity of the considered problem, we propose a heuristic approach based on the Breakout Local Search (BLS) framework. BLS is a recent variant of Iterated Local Search (ILS) with a particular focus on the perturbation strategy. Based on relevant information from the search history, it tries to introduce an appropriate degree of diversification by adaptively determining the number and type of moves for the next perturbation phase. Moreover, we use a new memory-based greedy constructive heuristic to generate a starting point for BLS. The benchmark instances used for our experiments and comparisons are based on information provided by Manchester Airport.
