Similar Documents
1.
We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an adversarial relationship between two players, each trying to win the game. In a constraint-satisfaction problem, such as the 8-Queens problem, the task is to find a state that satisfies a set of constraints. All of these problems are computationally intensive, and heuristic evaluation functions are used to reduce the amount of computation required to solve them. In each case we explain the nature of the evaluation functions used, how they are used in search algorithms, and how they can be automatically learned or acquired.
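As a concrete illustration of how a numeric heuristic evaluation function guides a single-agent path-finding search, here is a minimal A* sketch on the Eight Puzzle (a smaller cousin of the Fifteen Puzzle) using the Manhattan-distance heuristic; the state encoding and helper names are illustrative choices of mine, not taken from the paper.

```python
# Minimal sketch: A* on the Eight Puzzle with the Manhattan-distance heuristic.
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 marks the blank

def manhattan(state):
    """Sum of each tile's horizontal + vertical distance from its goal square."""
    h = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        h += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return h

def neighbors(state):
    """Generate states reachable by sliding one tile into the blank."""
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            swap = nr * 3 + nc
            s = list(state)
            s[blank], s[swap] = s[swap], s[blank]
            yield tuple(s)

def astar(start):
    """Return the length of a shortest solution, guided by the heuristic."""
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        if g > best_g.get(state, float("inf")):
            continue  # stale queue entry
        for nxt in neighbors(state):
            ng = g + 1
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + manhattan(nxt), ng, nxt))
    return None

print(astar((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # a two-move instance
```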

2.
Gordon, Franz, Paul. Journal of Systems and Software, 2009, 82(9): 1403-1418
The use of model checkers for automated software testing has received some attention in the literature: it is convenient because it allows fully automated generation of test suites for many different test objectives. On the other hand, model checkers were not originally meant to be used this way but for formal verification, so using model checkers for testing is sometimes perceived as a “hack”. Indeed, several drawbacks result from the use of model checkers for test case generation. If model checkers were designed or adapted to take into account the needs that arise from the application to software testing, this could lead to significant improvements in test suite quality and performance. In this paper we identify the drawbacks of current model checkers when used for testing. We illustrate techniques to overcome these problems, and show how they could be integrated into the model checking process. In essence, the described techniques can be seen as a general road map for turning model checkers into general-purpose testing tools.
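One widely used way of pressing a model checker into service as a test generator is via "trap properties": for every coverage goal, a property is stated claiming the goal can never be reached, and the counterexample trace the model checker returns is a test reaching it. The sketch below only illustrates this idea; the toy transition system and the LTL-like property syntax are assumptions of mine and are not tied to any particular model checker.

```python
# Sketch of trap-property generation: one never-claim per coverage goal.
# A counterexample returned by the model checker doubles as a covering test.
transitions = [
    ("idle", "start", "running"),
    ("running", "pause", "paused"),
    ("paused", "resume", "running"),
    ("running", "stop", "idle"),
]

def trap_properties(transitions):
    """Emit an LTL-like never-claim for each transition to be covered."""
    props = []
    for src, event, dst in transitions:
        props.append(f"G !(state = {src} & input = {event} & X state = {dst})")
    return props

for p in trap_properties(transitions):
    print(p)   # each property would be handed to the model checker
```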

3.
In computer game playing, self-play learning refers to learning that relies only on the sequence of moves played and the final win/loss outcome. Apart from the rules of the game, no domain knowledge is preset and no expert guidance is provided. Although self-play learning based on the minimax algorithm, alpha-beta pruning, and Monte Carlo search has achieved outstanding results, there is still a lack of targeted research on evaluating the quality of the learning examples. This paper therefore proposes, for the first time, a method for evaluating the quality of learning examples drawn from self-play games. The method…

4.
One of the advantages of immune-based approaches is their use of permanent memory cells. These memory cells make it unnecessary to repeat the learning process for previously played strategies and consequently speed up the decision-making process. In the method proposed in this article, memory cells represent the actions with the best local payoff for the current state of the game and are generated during the learning process. These cells help the decision-making system decide better, taking both the previous and the future state of the game into account. The decision-making system used in this method is based on a Mamdani fuzzy inference system (FIS). The FIS proposes the best action for the current state of the board by extracting the memory cells’ data. Experiments show that the immune-based fuzzy agent introduced here outperforms previous methods. The new method shows notable resistance when confronting a player that uses the complete game tree, and it is able to suggest an action for each state of the game while generating fewer generations than other evolutionary methods.

5.
Effect of Look-Ahead Depth in Evolutionary Checkers
It is intuitive that allowing a deeper search into a game tree will result in a player superior to one whose search depth is restricted. Of course, searching deeper into the tree comes at increased computational cost, and this is one of the trade-offs that has to be considered in developing a tree-based search algorithm. There has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in the performance of an evolved checkers player. Previous research has investigated this question (on chess and Othello), with differing conclusions, suggesting that different games place different emphases on these two factors. This paper provides the evidence for evolutionary checkers and shows that the look-ahead depth (as in chess, perhaps unsurprisingly) is important. This is the first time that such an intensive study has been carried out for evolutionary checkers, and given the evidence already available for chess and Othello it extends the picture to another game. We arrived at our conclusion by evolving various checkers players at different ply depths and playing them against one another, again at different ply depths. This was combined with the two-move ballot (enabling more games against the evolved players to take place), which provides strong evidence that the depth of the look-ahead is important for evolved checkers players.
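The ply-depth knob studied here corresponds to the depth cut-off of a fixed-depth minimax search. The checkers engine itself is not reproduced below; a toy take-away game stands in so the sketch runs end to end, with `evaluate` playing the role of the (evolved) evaluation function applied at the cut-off.

```python
# Fixed-depth minimax sketch on a toy game: a pile of stones, remove 1-3 per turn,
# the player who takes the last stone wins. Deeper look-ahead can change the
# backed-up value the root player sees.
def legal_moves(state):
    return [m for m in (1, 2, 3) if m <= state]

def apply_move(state, move):
    return state - move

def evaluate(state, maximizing):
    # Placeholder heuristic: score terminal positions exactly, guess 0 elsewhere.
    if state == 0:
        return -1 if maximizing else 1   # the player to move has just lost
    return 0

def minimax(state, depth, maximizing):
    """Search `depth` plies ahead and back up the heuristic values (no pruning)."""
    if depth == 0 or state == 0:
        return evaluate(state, maximizing)
    vals = [minimax(apply_move(state, m), depth - 1, not maximizing)
            for m in legal_moves(state)]
    return max(vals) if maximizing else min(vals)

for depth in (1, 2, 4, 8):
    print(depth, minimax(10, depth, True))
```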

6.
Knowledge-based search in competitive domains
Artificial intelligence programs operating in competitive domains typically use brute-force search if the domain can be modeled as a search tree, or alternatively use non-search heuristics, as in production-rule-based expert systems. While brute-force techniques have recently proven to be a viable method for modeling domains with smaller search spaces, such as checkers and chess, the same techniques cannot succeed in more complex domains, such as shogi or Go. This research uses a cognitive modeling strategy to develop a heuristic search technique based on cognitive thought processes with minimal domain-specific knowledge. The cognitive-based search technique provides a significant reduction in search-space complexity and, furthermore, enables the search paradigm to be extended to domains that are not typically thought of as search domains, such as aerial combat or corporate takeovers.

7.
The evolutionary approach to game playing differs from the traditional one, which exploits knowledge of the opening, middle, and endgame stages. Because it relies purely on a bottom-up style of construction, it is sometimes inefficient at evolving even simple heuristics that humans could create easily. Incorporating domain knowledge into evolutionary computation can improve the performance of evolved strategies and accelerate evolution by reducing the search space. In this paper, we propose the systematic insertion of opening knowledge and an endgame database into the framework of evolutionary checkers. In addition, the common knowledge that a combination of diverse strategies is better than a single best one is applied to the middle game, implemented using a crowding algorithm and a strategy-combination scheme. Experimental results show that the proposed method is promising for generating better strategies.

8.
Cut selection based on heuristic information is one of the most fundamental issues in the induction of decision trees with continuous-valued attributes. This paper connects the selection of optimal cuts with a class of heuristic information functions. It shows statistically that both training and testing accuracy in decision tree learning depend strongly on the choice of heuristic. A clear relationship between the second-order derivative of the heuristic information function and the locations of optimal cuts is derived mathematically and confirmed experimentally. Incorporating this relationship into the process of building decision trees significantly reduces the number of detected cuts and improves the generalization of the decision tree.
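To make the setting concrete, the sketch below scores every candidate cut of a continuous attribute with an entropy-based information function and keeps the best one; the data are invented, and the paper's second-derivative analysis is not reproduced here.

```python
# Heuristic cut selection for one continuous attribute: try every midpoint
# between consecutive sorted values and keep the cut with the best information gain.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values()) if n else 0.0

def best_cut(values, labels):
    """Return (threshold, information gain) of the best binary cut."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, -1.0)
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                      # no threshold between equal values
        cut = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= cut]
        right = [l for v, l in pairs if v > cut]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if gain > best[1]:
            best = (cut, gain)
    return best

print(best_cut([1.0, 2.0, 3.0, 8.0, 9.0, 10.0], ["a", "a", "a", "b", "b", "b"]))
```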

9.
Hyper-heuristic methodologies have been extensively and successfully used to generate combinatorial optimization heuristics. On the other hand, there have been almost no attempts to build a hyper-heuristic that evolves an algorithm for solving real-valued optimization problems. In our previous research, we succeeded in evolving a Nelder–Mead-like real function minimization heuristic using genetic programming and primitives extracted from the original Nelder–Mead algorithm. The resulting heuristic was better than the original Nelder–Mead method in the number of solved test problems, but it was slower in that it needed considerably more cost function evaluations on the problems also solved by the original method. In this paper we exploit grammatical evolution as a hyper-heuristic to evolve heuristics that outperform the original Nelder–Mead method in all respects. However, the main goal of the paper is not to build yet another real function optimization algorithm but to shed some light on the influence of different factors on the behavior of the evolution process and on the quality of the obtained heuristics. In particular, we investigate through extensive evolution runs the influence of the shape and dimensionality of the training function and the impact of the size limit imposed on the evolving algorithms. In the end we succeeded in evolving a number of heuristics that solved more test problems, and in fewer cost function evaluations, than the original Nelder–Mead method. Our solvers are also highly competitive with improvements to the original method based on rigorous mathematical convergence proofs found in the literature. Even more importantly, we identified directions in which to continue the work in order to construct a productive hyper-heuristic capable of evolving real function optimization heuristics that would outperform a human designer in all respects.

10.
Functional decomposition is a process of splitting a complex circuit into smaller sub-circuits. There are two major strategies, namely serial and parallel decomposition. In serial decomposition, the complex function, represented as a truth table over its support-set variables, is partitioned into free-set and bound-set variables. The minterms corresponding to the bound-set variables are represented as an equivalent function called the predecessor function. Equivalent minterms of the bound-set variables are assigned an output code. The assigned output codes together with the free-set variable minterms form the successor function. Serial decomposition is further categorized into disjoint and non-disjoint decomposition, according to whether the free and bound sets are disjoint or overlap. This paper deals with the problem of determining the best free and bound variables (the variable partitioning problem) for disjoint serial decomposition. Variable partitioning is the first step in the decomposition process. An efficient variable partitioning algorithm is one that determines the set of all free- and bound-set variables satisfying the decomposition theorem in minimal time while exploring the search space effectively. This allows the decomposition algorithm to find the best variable partition of a function, resulting in smaller decomposed functions with a maximum number of don't-cares. Classical approaches to determining the best free and bound sets use exhaustive search, whose time and memory requirements are exponential or super-exponential. A novel heuristic search approach is proposed to determine the set of good variable partitions in minimal time while exploring only a small part of the search space. Two heuristics are employed: (1) an r-admissibility-based heuristic, the pruned breadth-first search (PBFS) approach, and (2) an information-relation-based heuristic, the improved pruned breadth-first search (IPBFS) approach. The r-admissibility-based heuristic is based on r-partition characteristics of the free- and bound-set variables. The information-relation-and-measure-based heuristic is based on information relationships of the free- and bound-set variables expressed as r-partition heuristics. The proposed variable partition search approach has been implemented and tested with MCNC and Espresso benchmarks; the results indicate that its time complexity is comparable to the r-admissible heuristic algorithm and that the quality of its solutions is comparable to an exact variable partitioning algorithm. A comparison of the PBFS and IPBFS heuristics on selected benchmarks is also discussed in this paper.
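To make the disjoint serial decomposition setting concrete, the sketch below computes the column multiplicity of a decomposition chart for one candidate free/bound split: the number of distinct functions of the free variables induced by the bound-set assignments, which determines how many output code bits the predecessor function needs. The example function and split are invented, and the paper's PBFS/IPBFS heuristics are not reproduced.

```python
# Column-multiplicity check for one candidate free/bound variable partition.
from itertools import product
from math import ceil, log2

def column_multiplicity(f, n_vars, bound):
    """f maps a tuple of 0/1 inputs to 0/1; `bound` lists the bound-set indices."""
    free = [i for i in range(n_vars) if i not in bound]
    columns = set()
    for b_assign in product((0, 1), repeat=len(bound)):
        # The column is f restricted to this bound-set assignment,
        # viewed as a function of the free-set variables.
        col = []
        for a_assign in product((0, 1), repeat=len(free)):
            x = [0] * n_vars
            for i, v in zip(bound, b_assign):
                x[i] = v
            for i, v in zip(free, a_assign):
                x[i] = v
            col.append(f(tuple(x)))
        columns.add(tuple(col))
    return len(columns)

# Example: f = (x0 AND x1) XOR x2, with bound set {x0, x1}.
f = lambda x: (x[0] & x[1]) ^ x[2]
mu = column_multiplicity(f, 3, bound=[0, 1])
print(mu, "columns ->", ceil(log2(mu)), "code bit(s) for the predecessor function")
```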

11.
Since Samuel's work on checkers over thirty years ago, much effort has been devoted to learning evaluation functions. However, all such methods are sensitive to the feature set chosen to represent the examples. If the features do not capture aspects of the examples significant for problem solving, the learned evaluation function may be inaccurate or inconsistent. Typically, good feature sets are carefully handcrafted and a great deal of time and effort goes into refining and tuning them. This paper presents an automatic knowledge-based method for generating features for evaluation functions. The feature set is developed iteratively: features are generated, then evaluated, and this information is used to develop new features in turn. Both the contribution of a feature and its computational expense are considered in determining whether and how to develop it further.
This method has been applied to two problem-solving domains: the Othello board game and the domain of telecommunications network management. Empirical results show that the method is able to generate many known features and several novel features and to improve concept accuracy in both domains.

12.
In a totally self-checking (TSC) design, the circuit detects errors by monitoring redundantly coded data/control paths through a TSC checker. A problem arises when not all these code words are on the monitored lines during normal operation. A method of designing checkers that solves this difficulty is proposed. The method uses TSC checkers based on flip-flops instead of using the mostly combinational checkers now available. Two design applications are presented: TSC checkers for arithmetic AN codes, and a TSC iterative logic array.

13.
The objective of this study is to develop playability heuristics based on a lexical analysis of nouns and adjectives frequently used in online game reviews. Building on a previous lexical analysis of adjectives in online game reviews, it is argued that nouns together with adjectives are likely to provide more contextual information than adjectives alone, and that the patterns among these words can therefore be used to develop playability heuristics. A revised lexical approach is adopted to analyze nouns and adjectives from 821,122 online reviews. Ninety-seven factors emerge from this analysis. Based on the nouns and adjectives loading highly on these factors, a new process is introduced and 90 playability heuristics are derived. This study significantly expands the current pool of playability heuristics that facilitate the computer game design process. The lexical method adopted in this study demonstrates its effectiveness in developing interface design guidelines from a large number of online reviews of a system or product, and it can be extended to other human-behavior-centered fields.

14.
As search spaces become larger and problems scale up, an efficient way to speed up the search is to use a more accurate heuristic function. A better heuristic function can be obtained from the following general idea: many problems can be divided into a set of subproblems and subgoals that have to be achieved, and interactions and conflicts between unsolved subgoals may provide useful knowledge from which an informed heuristic function can be constructed. In this paper we demonstrate this idea on the graph partitioning problem (GPP). We first show how to formulate GPP as a search problem and then introduce a sequence of admissible heuristic functions that estimate the size of the optimal partition by looking at different interactions between vertices of the graph. We then solve GPP optimally with these heuristics. Experimental results show that our advanced heuristics achieve speedups of up to several orders of magnitude. Finally, we experimentally compare our approach with other state-of-the-art optimal graph partitioning solvers on a number of classes of graphs. The results show that our algorithm outperforms them in many cases.
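A minimal sketch of the "GPP as search" formulation described above: vertices are assigned one at a time to one of two equal-sized parts, the path cost g counts edges already cut, and h would be a lower bound on the edges still to be cut. Here h is simply 0 (trivially admissible), standing in for the paper's interaction-based heuristics; the example graph is invented.

```python
# Optimal graph bisection as best-first search over partial vertex assignments.
import heapq

def optimal_bisection(n, edges):
    """Return the minimum number of cut edges over all balanced bisections."""
    half = n // 2
    frontier = [(0, ())]                             # (cut edges so far, assignment of 0..k-1)
    while frontier:
        g, assign = heapq.heappop(frontier)
        v = len(assign)                              # next vertex to place
        if v == n:
            return g
        for part in (0, 1):
            if sum(1 for p in assign if p == part) + 1 > half:
                continue                             # keep the partition balanced
            cut = g + sum(1 for (a, b) in edges
                          if b == v and a < v and assign[a] != part)
            heapq.heappush(frontier, (cut, assign + (part,)))
    return None

edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]  # two triangles + a bridge
print(optimal_bisection(6, edges))                   # expected minimum cut: 1
```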

15.
Artificial Intelligence, 2006, 170(6-7): 620-642
Deeper searches in game-playing programs relying on the minimax principle generally produce better results. Theoretical analyses, however, suggest that in many cases minimaxing amplifies the noise introduced by the heuristic function used to evaluate the leaves of the game tree, leading to what is known as pathological behavior, where deeper searches produce worse results. In most minimax models analyzed in previous research, positions' true values and sometimes also heuristic values were only losses and wins. In contrast to this, a model is proposed in this paper that uses real numbers for both true and heuristic values. This model did not behave pathologically in the experiments performed. The mechanism that causes deeper searches to produce better evaluations is explained. A comparison with chess is made, indicating that the model realistically reflects position evaluations in chess-playing programs. Conditions under which the pathology might appear in a real-value model are also examined. The essential difference between our real-value model and the common two-value model, which causes the pathology in the two-value model, is identified. Most previous research reports that the pathology tends to disappear when there are dependences between the values of sibling nodes in a game tree. In this paper, another explanation is presented which indicates that in the two-value models the error of the heuristic evaluation was not modeled realistically.
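In the spirit of the real-valued model described above, here is a small simulation sketch: leaves carry real true values, internal true values are their minimax, and a depth-d search evaluates the frontier with a noise-corrupted copy of the true value, so the backed-up root value can be compared against the true one at several depths. The branching factor, tree height, noise level, and error measure are arbitrary choices of mine, not the paper's experimental setup.

```python
# Toy study of how heuristic noise propagates through minimax at different depths.
import random

def build_tree(height, branching):
    if height == 0:
        return random.uniform(-1.0, 1.0)              # true leaf value
    return [build_tree(height - 1, branching) for _ in range(branching)]

def true_value(node, maximizing):
    if not isinstance(node, list):
        return node
    vals = [true_value(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def search(node, depth, maximizing, noise):
    """Depth-limited minimax with a noisy heuristic at the frontier."""
    if not isinstance(node, list) or depth == 0:
        return true_value(node, maximizing) + random.gauss(0.0, noise)
    vals = [search(c, depth - 1, not maximizing, noise) for c in node]
    return max(vals) if maximizing else min(vals)

random.seed(0)
trees = [build_tree(8, 2) for _ in range(200)]
roots = [true_value(t, True) for t in trees]
for depth in (1, 3, 5, 7):
    err = sum(abs(search(t, depth, True, 0.3) - r) for t, r in zip(trees, roots)) / len(trees)
    print(f"depth {depth}: mean absolute error {err:.3f}")
```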

16.
A heuristic evaluation method allows the usability of applications in a given domain to be assessed. To evaluate applications that have specific domain features, researchers can use sets of domain-specific usability heuristics in addition to the well-known (usually Nielsen's) heuristics. Heuristics can also address aspects of the User eXperience (UX) other than usability. In a previous work, we proposed a formal methodology for establishing usability/UX heuristics. The methodology has eight stages, including activities to formulate, specify, validate and refine a new set of heuristics for a specific application domain, and it was validated through expert opinion and several case studies. Although each stage was explained in detail when the methodology was specified, some activities are difficult to perform without a guide that helps the researcher determine how the stages should be carried out. This article presents a detailed explanation of how to apply each stage of the methodology to create a new set of heuristics for a specific domain. Additionally, it explains how to iterate the methodology's stages and when to stop the process of developing new heuristics.

17.
A case-based approach to heuristic planning
Much of the success of heuristic search as an approach to AI planning is due to the careful design of domain-independent heuristics. Although many heuristic planners perform reasonably well, the computational cost of computing the heuristic function in every search node is very high, causing the planner to scale poorly as the size of the planning task increases. To tackle this problem, planners can incorporate additional domain-dependent heuristics to improve their performance. Learning-based planners try to acquire these domain-dependent heuristics automatically from previously solved problems. In this work, we present a case-based reasoning approach that learns abstracted state transitions which serve as domain control knowledge for improving the planning process. The recommendations from the retrieved cases are used as guidance for pruning or ordering nodes in different heuristic search algorithms applied to planning tasks. We show that the CBR guidance is appropriate for a considerable number of planning benchmarks.

18.
Many of today’s most successful planners perform a forward heuristic search. The accuracy of the heuristic estimates and the cost of their computation determine the performance of the planner. Thanks to the efforts of researchers in the area of heuristic search planning, modern algorithms are able to generate high-quality estimates. In this paper we propose learning heuristic functions using artificial neural networks and support vector machines. This approach can be used to learn standalone heuristic functions but also to improve standard planning heuristics. One of the most famous and successful heuristics for heuristic search planning is the one used by the Fast-Forward (FF) planner. We analyze the performance of standalone learned heuristics based on these nature-inspired machine learning techniques and compare them with the standard FF heuristic and other heuristic learning approaches. In the conducted experiments, artificial neural networks and support vector machines were able to produce standalone heuristics of superior accuracy, and the resulting heuristics are also much cheaper to compute than related ones.
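A minimal sketch of the learning setup the abstract describes: a regressor (here scikit-learn's MLPRegressor and SVR) is fit on feature vectors describing search states, with the observed cost-to-go as the target, and its predictions are then used as heuristic estimates. The features and training data below are synthetic placeholders, not the planning features used in the paper.

```python
# Learning a heuristic function as a regressor over state features.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 4))        # e.g. counts of unsatisfied goals, etc. (made up)
y = X @ np.array([1.0, 0.5, 2.0, 0.0]) + rng.normal(0, 0.5, 500)   # toy cost-to-go target

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
svm = SVR(kernel="rbf", C=10.0).fit(X, y)

state = np.array([[3.0, 1.0, 4.0, 2.0]])     # a single hypothetical state description
print("NN heuristic estimate: ", nn.predict(state)[0])
print("SVM heuristic estimate:", svm.predict(state)[0])
```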

19.
Artificial Intelligence, 2002, 134(1-2): 9-22
We describe a new technique for designing more accurate admissible heuristic evaluation functions, based on pattern databases [J. Culberson, J. Schaeffer, Comput. Intelligence 14 (3) (1998) 318–334]. While many heuristics, such as Manhattan distance, compute the cost of solving individual subgoals independently, pattern databases consider the cost of solving multiple subgoals simultaneously. Existing work on pattern databases allows combining values from different pattern databases by taking their maximum. If the subgoals can be divided into disjoint subsets so that each operator only affects subgoals in one subset, then we can add the pattern-database values for each subset, resulting in a more accurate admissible heuristic function. We used this technique to improve performance on the Fifteen Puzzle by a factor of over 2000, and to find optimal solutions to 50 random instances of the Twenty-Four Puzzle.
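A sketch of the additive combination this abstract describes: the tiles are split into disjoint groups, each group has a pattern database storing the cost of placing just those tiles (counting only their own moves), and the per-group lookups are summed into one admissible estimate. The database entries below are arbitrary stand-in numbers, not true distances; real tables would be precomputed by a backward search over the pattern space.

```python
# Additive lookup over two disjoint pattern databases for the Eight Puzzle.
GROUP_A = (1, 2, 3, 4)          # disjoint tile groups
GROUP_B = (5, 6, 7, 8)

def project(state, group):
    """Keep only the positions of the group's tiles (the database key)."""
    return tuple(state.index(t) for t in group)

def additive_pdb_heuristic(state, pdb_a, pdb_b):
    # Disjoint groups + counting only each group's own moves make the sum admissible.
    return pdb_a[project(state, GROUP_A)] + pdb_b[project(state, GROUP_B)]

# Arbitrary stand-in database entries for the one projected position each group hits:
pdb_a = {(1, 2, 0, 3): 4}
pdb_b = {(5, 6, 7, 4): 3}
state = (3, 1, 2, 4, 8, 5, 6, 7, 0)
print(additive_pdb_heuristic(state, pdb_a, pdb_b))   # 4 + 3 = 7
```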

20.
LEARNING OF RESOURCE ALLOCATION STRATEGIES FOR GAME PLAYING
Human chess players exhibit a large variation in the amount of time they allocate for each move. Yet, the problem of devising resource allocation strategies for game playing has not received enough attention. In this paper we present a framework for studying resource allocation strategies. We define allocation strategy and identify three major types of strategies: static, semi-dynamic, and dynamic. We then describe a method for learning semi-dynamic strategies from self-generated examples. We present an algorithm for assigning classes to the examples based on the utility of investing extra resources. The method was implemented in the domain of checkers, and experimental results show that it is able to learn strategies that improve game-playing performance.
