Similar Documents
20 similar documents found (search time: 218 ms)
1.
Effect of Look-Ahead Depth in Evolutionary Checkers (total citations: 1, self-citations: 0, other citations: 1)
It is intuitive that allowing a deeper search into a game tree will produce a superior player compared with one whose search depth is restricted. Of course, searching deeper into the tree comes at increased computational cost, and this is one of the trade-offs that has to be considered when developing a tree-based search algorithm. There has been some discussion as to whether the evaluation function or the depth of the search is the main contributory factor in the performance of an evolved checkers player. Some previous research has investigated this question (on Chess and Othello), with differing conclusions, which suggests that different games place different emphases on these two factors. This paper provides the evidence for evolutionary checkers and shows that, as with Chess and perhaps unsurprisingly, the look-ahead depth is important. This is the first time that such an intensive study has been carried out for evolutionary checkers, and given the evidence already available for Chess and Othello it supplies the corresponding evidence for another game. We arrived at our conclusion by evolving various checkers players at different ply depths and playing them against one another, again at different ply depths. This was combined with the two-move ballot (enabling more games against the evolved players to take place), which provides strong evidence that look-ahead depth is important for evolved checkers players.
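The abstract itself contains no implementation, but the mechanism under study, searching a game tree to a fixed ply depth and scoring the leaves with an (evolved) evaluation function, can be illustrated with a minimal depth-limited negamax sketch. The toy subtraction game below is a hypothetical stand-in so the code runs; it is not a checkers engine and the search is not the authors' evolved player.

```python
# Minimal sketch of depth-limited game-tree search with a pluggable
# evaluation function, illustrating the "look-ahead depth" knob studied in
# the paper. eval_fn returns a score from the viewpoint of the player to move.
def negamax(state, depth, moves_fn, apply_fn, eval_fn):
    moves = moves_fn(state)
    if depth == 0 or not moves:
        return eval_fn(state)
    return max(-negamax(apply_fn(state, m), depth - 1, moves_fn, apply_fn, eval_fn)
               for m in moves)

def best_move(state, depth, moves_fn, apply_fn, eval_fn):
    return max(moves_fn(state),
               key=lambda m: -negamax(apply_fn(state, m), depth - 1,
                                      moves_fn, apply_fn, eval_fn))

# Toy stand-in game: players alternately remove 1-3 tokens; whoever takes the
# last token wins. A deeper look-ahead resolves the position more reliably.
moves_fn = lambda n: [k for k in (1, 2, 3) if k <= n]
apply_fn = lambda n, k: n - k
eval_fn = lambda n: -1 if n == 0 else 0   # player to move has lost if no tokens remain

print(best_move(10, 3, moves_fn, apply_fn, eval_fn))
print(best_move(10, 9, moves_fn, apply_fn, eval_fn))
```

In this toy game the shallow search cannot distinguish the candidate moves and may pick a losing one, while the deeper search resolves the position exactly; the paper quantifies the analogous effect for evolved checkers players.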

2.
The Gabor transform has long been recognized as a very useful tool for joint time and frequency analysis in signal processing. Its real-time applications, however, were limited due to the high computational complexity of Gabor transform algorithms. In this paper, some novel and fast parallel algorithms for the finite discrete Gabor expansion and transform are presented based on multirate filtering. An analysis filter bank is designed for the finite discrete Gabor transform (DGT) and a synthesis filter bank is designed for the finite discrete Gabor expansion (DGE). Each of the parallel channels in the two filter banks has a unified structure and can apply the FFT and the IFFT to reduce its computational load. The computational complexity of each parallel channel does not change as the oversampling rate increases. In fact, it is very low and depends only on the length of the input discrete signal and the number of the Gabor frequency sampling points. The computational complexity of the proposed parallel algorithms is analyzed and compared with that of the major existing parallel algorithms for the finite DGT and DGE. The results indicate that the proposed parallel algorithms for the finite DGT and DGE based on multirate filtering are very attractive for real-time signal processing.
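As a point of reference for the quantities mentioned above (signal length, number of frequency sampling points, oversampling rate), a direct-definition finite DGT can be sketched with NumPy: each time shift folds the windowed signal to length M and applies one FFT. This is only a minimal sketch of the transform itself, not the filter-bank parallel algorithms proposed in the paper; the window and parameters below are arbitrary.

```python
import numpy as np

def dgt(x, g, a, M):
    """Finite discrete Gabor transform of the length-L periodic signal x.

    a : time-shift step (L must be divisible by a), M : number of frequency
    bins (L must be divisible by M). Returns an (M, N) array with N = L // a.
    Direct-definition sketch; the paper's filter-bank parallelisation is not shown.
    """
    L = len(x)
    assert L % a == 0 and L % M == 0
    N = L // a
    coeffs = np.empty((M, N), dtype=complex)
    for n in range(N):
        # Windowed segment: x[k] * conj(g[(k - n*a) mod L])
        y = x * np.conj(np.roll(g, n * a))
        # Fold to length M and take one FFT: sum_k y[k] * exp(-2*pi*i*m*k / M)
        coeffs[:, n] = np.fft.fft(y.reshape(L // M, M).sum(axis=0))
    return coeffs

L, a, M = 64, 8, 16                                        # oversampling rate M / a = 2
x = np.random.randn(L)
g = np.exp(-0.5 * ((np.arange(L) - L // 2) / 6.0) ** 2)    # Gaussian-like window
C = dgt(x, g, a, M)                                        # shape (16, 8)
```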

3.
Reinforcement learning (RL) in real-world problems requires function approximations that depend on selecting the appropriate feature representations. Representational expansion techniques can make linear approximators represent value functions more effectively; however, most of these techniques function well only for low dimensional problems. In this paper, we present the greedy feature replacement (GFR), a novel online expansion technique, for value-based RL algorithms that use binary features. Given a simple initial representation, the feature representation is expanded incrementally. New feature dependencies are added automatically to the current representation and conjunctive features are used to replace current features greedily. The virtual temporal difference (TD) error is recorded for each conjunctive feature to judge whether the replacement can improve the approximation. Correctness guarantees and computational complexity analysis are provided for GFR. Experimental results in two domains show that GFR achieves much faster learning and has the capability to handle large-scale problems.
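GFR itself is not specified in the abstract, but the representation it manipulates, a linear value function over binary features in which existing features can be combined into conjunctions, can be sketched generically. The TD(0) update and the conjunction helper below are standard illustrations under that assumption, not the authors' algorithm; the replacement test based on the recorded virtual TD error is not reproduced.

```python
# Generic sketch of the representation GFR operates on: a linear value
# function over binary features, plus a helper that forms the conjunction of
# two existing features. Not the GFR algorithm itself.
import numpy as np

def td0_update(theta, phi_s, r, phi_next, alpha=0.1, gamma=0.99):
    """One TD(0) step for a linear value function V(s) = theta . phi(s)."""
    td_error = r + gamma * theta @ phi_next - theta @ phi_s
    return theta + alpha * td_error * phi_s, td_error

def add_conjunction(phi, i, j):
    """Expand a binary feature vector with the conjunction of features i and j."""
    return np.append(phi, phi[i] * phi[j])

phi_s    = np.array([1, 0, 1, 1], dtype=float)    # binary features of state s
phi_next = np.array([0, 1, 1, 0], dtype=float)
theta = np.zeros(4)
theta, delta = td0_update(theta, phi_s, r=1.0, phi_next=phi_next)

# Representation expansion: append the conjunction of features 0 and 2.
phi_s_expanded = add_conjunction(phi_s, 0, 2)
```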

4.
Kernel Projection Algorithm for Large-Scale SVM Problems (total citations: 5, self-citations: 0, other citations: 5)
Support Vector Machine (SVM) has become a very effective method in statistical machine learning, and it has been proved that training an SVM amounts to solving the Nearest Point pair Problem (NPP) between two disjoint closed convex sets. Keerthi later pointed out that it is difficult to apply classical geometric algorithms directly to SVM, and so designed a new geometric algorithm for SVM. In this article, a new algorithm for geometrically solving SVM, the Kernel Projection Algorithm, is presented based on the theorem on fixed points of projection mappings. This new algorithm makes it easy to apply classical geometric algorithms to solving SVM and is more understandable than Keerthi's. Experiments show that the new algorithm can also handle large-scale SVM problems. Geometric algorithms for SVM, such as Keerthi's algorithm, require that the two closed convex sets be disjoint; otherwise the algorithms are meaningless. In this article, this requirement is guaranteed in theory by using the theoretical result on universal kernel functions.

5.
Data cube pre-computation is an important concept for supporting OLAP (Online Analytical Processing) and has been studied extensively. It is often not feasible to compute a complete data cube due to the huge storage requirement. The recently proposed quotient cube addresses this issue through a partitioning method that groups cube cells into equivalence partitions. Such an approach not only is useful for distributive aggregate functions such as SUM but also can be applied to the maintenance of holistic aggregate functions like MEDIAN, which require the storage of a set of tuples for each equivalence class. Unfortunately, as changes are made to the data sources, maintaining the quotient cube is non-trivial since the partitioning of the cube cells must also be updated. In this paper, the authors design incremental algorithms to update a quotient cube efficiently for both the SUM and MEDIAN aggregate functions. For the aggregate function SUM, concepts are borrowed from the principle of the Galois Lattice to develop CPU-efficient algorithms to update a quotient cube. For the aggregate function MEDIAN, the concept of a pseudo class is introduced to further reduce the size of the quotient cube. Coupled with a novel sliding window technique, an efficient algorithm is developed for maintaining a MEDIAN quotient cube that takes up reasonably small storage space. A performance study shows that the proposed algorithms are efficient and scalable over large databases.

6.
Most of the earlier work on clustering is focused on numerical data, whose inherent geometric properties can be exploited to naturally define distance functions between data points. However, the computational cost makes most of the previous algorithms unacceptable for clustering very large databases. The k-means algorithm is well known for its efficiency in this respect. At the same time, working only on numerical data prohibits these algorithms from being used for clustering categorical data. This paper shows how to apply the notion of "cluster centers" to a dataset of categorical objects, and a k-means-like algorithm for clustering categorical data is introduced.
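A common reading of a "k-means-like algorithm for categorical data" is the k-modes scheme: cluster centers are per-attribute modes and distance is simple matching dissimilarity. The sketch below follows that reading as an assumption; it is not taken from the paper, and the toy records are invented.

```python
# Minimal k-modes-style sketch: categorical "cluster centers" are modes
# (most frequent category per attribute) and distance counts mismatching
# attributes. A compact illustration of the idea, not the paper's algorithm.
from collections import Counter
import random

def matching_dissimilarity(a, b):
    return sum(x != y for x, y in zip(a, b))

def mode_of(cluster):
    """Per-attribute mode of a list of categorical records."""
    return tuple(Counter(col).most_common(1)[0][0] for col in zip(*cluster))

def k_modes(data, k, iters=10, seed=0):
    random.seed(seed)
    centers = random.sample(data, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for record in data:
            idx = min(range(k), key=lambda c: matching_dissimilarity(record, centers[c]))
            clusters[idx].append(record)
        centers = [mode_of(c) if c else centers[i] for i, c in enumerate(clusters)]
    return centers, clusters

data = [("red", "small", "round"), ("red", "small", "oval"),
        ("blue", "large", "round"), ("blue", "large", "square")]
centers, clusters = k_modes(data, k=2)
```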

7.
Spiking neural P systems with weights (WSN P systems, for short) are a new variant of spiking neural P systems, where the rules of a neuron are enabled when the potential of that neuron equals a given value. It is known that WSN P systems are universal by simulating register machines. However, in these universal systems, no bound is considered on the number of neurons and rules. In this work, a restricted variant of WSN P systems is considered, called simple WSN P systems, where each neuron has only one rule. The complexity parameter, the number of neurons, needed to construct a universal simple WSN P system is investigated. It is proved that there is a universal simple WSN P system with 48 neurons for computing functions; as a generator of sets of numbers, there is an almost simple (that is, each neuron has only one rule except that one neuron has two rules) and universal WSN P system with 45 neurons.

8.
In this paper, the relationship between argumentation and closed world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning for disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. We also compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.

9.
A Study of Two-Dimensional Principal Component Analysis (total citations: 1, self-citations: 1, other citations: 0)
The principal component analysis (PCA), or the eigenfaces method, is a de facto standard in human face recognition. Numerous algorithms have tried to generalize PCA in different aspects. More recently, a technique called two-dimensional PCA (2DPCA) was proposed to cut the computational cost of the standard PCA. Unlike PCA, which treats images as vectors, 2DPCA views an image as a matrix. With a properly defined criterion, 2DPCA results in an eigenvalue problem which has a much lower dimensionality than that of PCA. In this paper, we show that 2DPCA is equivalent to a special case of an existing feature extraction method, i.e., the block-based PCA. Using the FERET database, extensive experimental results demonstrate that block-based PCA outperforms PCA on datasets that consist of relatively simple images for recognition, while PCA is more robust than 2DPCA in harder situations.
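The 2DPCA step described above, an eigenvalue problem on an image scatter matrix that is only as large as the image width, can be sketched directly. The random array below is a stand-in for a face database such as FERET, and the number of retained axes is arbitrary.

```python
# Sketch of 2DPCA feature extraction: build the image scatter matrix from
# mean-removed images, keep the top eigenvectors, and project each image
# matrix onto them.
import numpy as np

def two_dpca(images, d):
    """images: array of shape (N, h, w); returns projections of shape (N, h, d)."""
    mean_image = images.mean(axis=0)
    centered = images - mean_image
    # Image scatter matrix: (1/N) * sum_i (A_i - mean)^T (A_i - mean), size w x w
    G = sum(A.T @ A for A in centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)            # eigenvalues in ascending order
    X = eigvecs[:, ::-1][:, :d]                     # top-d projection axes (w x d)
    return np.array([A @ X for A in images]), X

images = np.random.rand(20, 32, 32)                 # stand-in for face images
features, axes = two_dpca(images, d=5)              # features: shape (20, 32, 5)
```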

10.
Producing sentences from a grammar, according to various criteria, is required in many applications. It is also a basic building block for grammar engineering. This paper presents a toolkit for context-free grammars, which mainly consists of several algorithms for sentence generation or enumeration and for coverage analysis of context-free grammars. The toolkit deals with general context-free grammars. Besides providing implementations of the algorithms, the toolkit also provides a simple graphical user interface, through which the user can use the toolkit directly. The toolkit is implemented in Java and is available at http://lcs.ios.ac.cn/~zhiwu/toolkit.php. The paper presents an overview of the toolkit and the major algorithms implemented in it, together with experimental results and preliminary applications of the toolkit.
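One of the basic services such a toolkit offers, generating sentences from a context-free grammar, can be sketched with a few lines of recursive expansion. The dictionary grammar format and the depth guard below are illustrative choices, not the toolkit's actual interface (the toolkit itself is implemented in Java).

```python
# Random sentence generation from a context-free grammar: expand the start
# symbol by repeatedly choosing productions until only terminals remain.
import random

grammar = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "N":  [["parser"], ["grammar"]],
    "V":  [["accepts"], ["rejects"]],
}

def generate(symbol, grammar, depth=0, max_depth=10):
    if symbol not in grammar:                 # terminal symbol
        return [symbol]
    if depth > max_depth:                     # crude guard against deep recursion
        production = min(grammar[symbol], key=len)
    else:
        production = random.choice(grammar[symbol])
    return [w for sym in production for w in generate(sym, grammar, depth + 1, max_depth)]

print(" ".join(generate("S", grammar)))
```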

11.
Linear-time computational techniques based on the structure of an evidence space have been developed for combining, with Dempster's rule (the orthogonal sum), multiple pieces of evidence that bear on a number of contending hypotheses. They offer a means of making the computation-intensive calculations involved more efficient in certain circumstances. Unfortunately, they restrict the orthogonal sum of evidential functions to the dichotomous structure that applies only to elements and their complements. In this paper, we present a novel evidence structure in terms of a triplet and a set of algorithms for evidential reasoning. The merit of this structure is that it divides a set of evidence into three subsets, distinguishing the trivial evidential elements from the important ones and focusing particularly on some elements of an evidence space. It avoids the deficits of the dichotomous structure in representing the preference of evidence and estimating the basic probability assignment of evidence. We have established a formalism for this structure and general formulae for combining pieces of evidence in the form of the triplet, which have been theoretically and empirically justified.
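For readers unfamiliar with the orthogonal sum referred to above, a plain (non-triplet) implementation of Dempster's rule over a small frame of discernment looks as follows. The triplet structure and the linear-time algorithms of the paper are not reproduced, and the example mass values are invented.

```python
# Dempster's rule of combination (orthogonal sum) for two basic probability
# assignments. Focal elements are frozensets of hypotheses; conflicting mass
# is removed and the rest renormalised.
from itertools import product

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, va), (b, vb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + va * vb
        else:
            conflict += va * vb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset({"a"}), frozenset({"b"})
theta = frozenset({"a", "b"})          # the whole frame of discernment
m1 = {A: 0.6, theta: 0.4}
m2 = {B: 0.3, theta: 0.7}
print(dempster_combine(m1, m2))
```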

12.
In many domains, when several competing classifiers are available, we want to synthesize them, or some of them, into a more accurate classifier by means of a combination function. In this paper we propose a ‘class-indifferent’ method for combining classifier decisions represented by evidential structures called triplet and quartet, using Dempster's rule of combination. This method is unique in that it distinguishes important elements from the trivial ones in representing classifier decisions, makes use of more information than others in calculating the support for class labels, and provides a practical way to apply the theoretically appealing Dempster-Shafer theory of evidence to the problem of ensemble learning. We present a formalism for modelling classifier decisions as triplet mass functions and we establish a range of formulae for combining these mass functions in order to arrive at a consensus decision. In addition we carry out a comparative study with the alternative simplet and dichotomous structures, and also compare two combination methods, Dempster's rule and majority voting, over the UCI benchmark data, to demonstrate the advantage our approach offers.

13.
In recent years, great attention has been paid to skyline computation over uncertain data. In this paper, we study how to conduct advanced skyline analysis over uncertain databases where uncertainty is modeled using the evidence theory (a.k.a. belief function theory). We particularly tackle an important issue, namely the skyline stars (denoted by SKY2), over evidential data. This kind of skyline aims at retrieving the best evidential skyline objects (the stars). Efficient algorithms have been developed to compute the SKY2. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed approaches, which considerably refine the huge skyline. In addition, the conducted experiments have shown that our algorithms significantly outperform the basic skyline algorithms in terms of CPU and memory costs.
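For context, the crisp skyline operator that SKY2 builds on can be sketched in a few lines: a point belongs to the skyline if no other point dominates it. The evidential modelling with belief functions and the SKY2 refinement are not reproduced here, and the hotel data are invented.

```python
# Basic (non-evidential) skyline sketch. A point dominates another if it is
# at least as good in every dimension and strictly better in one; here
# "smaller is better" in both dimensions.
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

hotels = [(50, 2.0), (80, 0.5), (60, 1.5), (90, 2.5)]   # (price, distance)
print(skyline(hotels))    # (90, 2.5) is dominated and dropped
```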

14.
This paper presents two algorithms combining GRASP and Tabu Search for solving the Unconstrained Binary Quadratic Programming (UBQP) problem. We first propose a simple GRASP-Tabu Search algorithm working with a single solution and then reinforce it by introducing a population management strategy. Both algorithms are based on a dedicated randomized greedy construction heuristic and a tabu search procedure. We report extensive computational results on two sets of 31 large random UBQP instances and one set of 54 structured instances derived from the MaxCut problem. Comparisons with state-of-the-art algorithms demonstrate the efficacy of our proposed algorithms in terms of both solution quality and computational efficiency. It is noteworthy that the reinforced GRASP-Tabu Search algorithm is able to improve the previous best known results for 19 MaxCut instances.
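A minimal sketch of the UBQP ingredients such local-search algorithms rely on is given below: the binary quadratic objective and the O(n) change caused by flipping a single variable, which is what makes tabu-search moves cheap to evaluate. The GRASP construction, tabu list, and population management of the paper are not shown; the random instance is illustrative.

```python
# UBQP objective x'Qx over binary x and the incremental "delta" of flipping
# one bit (Q assumed symmetric). The assert checks the delta against a full
# re-evaluation on a small random instance.
import numpy as np

def ubqp_value(Q, x):
    return float(x @ Q @ x)

def flip_delta(Q, x, i):
    """Change in x'Qx when bit i is flipped."""
    xi_new = 1 - x[i]
    return (xi_new - x[i]) * (Q[i, i] * (xi_new + x[i]) + 2 * (Q[i] @ x - Q[i, i] * x[i]))

rng = np.random.default_rng(0)
n = 6
Q = rng.integers(-5, 6, size=(n, n)).astype(float)
Q = (Q + Q.T) / 2                       # symmetrise
x = rng.integers(0, 2, size=n).astype(float)
i = 2
assert np.isclose(ubqp_value(Q, np.where(np.arange(n) == i, 1 - x, x)),
                  ubqp_value(Q, x) + flip_delta(Q, x, i))
```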

15.
Research on Hash Algorithms for Distributed Storage Systems (total citations: 1, self-citations: 0, other citations: 1)
To address the problem of how to achieve a uniform distribution of data across physical storage and efficient data location in distributed storage systems, a range of hash algorithms is studied and criteria for judging the quality of hash algorithms for distributed storage systems are proposed. These hash algorithms are analysed and compared along several dimensions, including hash distribution uniformity, hash collisions, and computational efficiency, and the application scenarios suited to each algorithm are identified. Combined with the requirements of distributed storage applications, the optimal choice of hash algorithm is given. Experimental results show that the Davies-Meyer algorithm offers very good distribution uniformity and very high computational efficiency and is well suited to distributed storage systems.
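The kind of comparison described above can be sketched by hashing a batch of object keys, mapping each digest to a storage node, and measuring how evenly different hash functions spread the keys. hashlib's md5 and sha1 are used as stand-ins because the Davies-Meyer construction favoured in the abstract is not part of the standard library; the key set and node count are arbitrary.

```python
# Compare how evenly different hash functions distribute keys over storage nodes.
import hashlib
from collections import Counter

def bucket_counts(keys, n_nodes, algorithm):
    counts = Counter()
    for key in keys:
        digest = hashlib.new(algorithm, key.encode()).digest()
        node = int.from_bytes(digest[:8], "big") % n_nodes
        counts[node] += 1
    return counts

keys = [f"object-{i}" for i in range(100_000)]
for algo in ("md5", "sha1"):
    counts = bucket_counts(keys, n_nodes=16, algorithm=algo)
    spread = max(counts.values()) - min(counts.values())
    print(algo, "max-min spread across 16 nodes:", spread)
```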

16.
A matrix method is proposed for constructing a simple order-32 integer cosine transform. Based on this method, a one-norm simple order-32 integer transform is constructed, and fast algorithms of low computational complexity are developed for it that require only integer operations; their computational complexity is 4.9 times lower than that of well-known algorithms and 21.6 times lower than that of the H.265 standard. The transform is close to the discrete cosine transform and has good coding performance.

17.
Recently, the residue number system (RNS) has been intensively studied. The Chinese remainder theorem (CRT) is a solution to the problem of converting a number to RNS with a general moduli set. This paper introduces the generalized CRT (GCRT) together with parallel algorithms used for the conversion. The GCRT differs from the CRT in that it admits more applications than the CRT does. The GCRT, however, has a disadvantage in computational performance. To remedy this shortcoming, this paper proposes algorithms that execute the unrelated program fragments of the GCRT computation concurrently. These proposed algorithms also allow the GCRT to compute more efficiently.
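For reference, the standard CRT reconstruction that the GCRT generalizes can be sketched in a few lines; the parallel GCRT algorithms of the paper are not reproduced, and the moduli below are an arbitrary example.

```python
# Chinese remainder theorem: recover an integer from its residues modulo a
# set of pairwise coprime moduli.
from math import prod

def crt(residues, moduli):
    """Return x with x = residues[i] (mod moduli[i]) for pairwise coprime moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # modular inverse of Mi modulo m
    return x % M

moduli = (3, 5, 7)
x = 23
residues = tuple(x % m for m in moduli)
assert crt(residues, moduli) == x        # 23 corresponds to (2, 3, 2) mod (3, 5, 7)
```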

18.
Max-min surrogate-assisted evolutionary algorithm for robust design (total citations: 2, self-citations: 0, other citations: 2)
Solving design optimization problems using evolutionary algorithms has always been perceived as finding the optimal solution over the entire search space. However, the global optimum may not always be the most desirable solution in many real-world engineering design problems. In practice, if the global optimal solution is very sensitive to uncertainties, for example, small changes in design variables or operating conditions, then it may not be appropriate to use this highly sensitive solution. In this paper, we focus on combining evolutionary algorithms with function approximation techniques for robust design. In particular, we investigate the application of robust genetic algorithms to problems with high dimensions. Subsequently, we present a novel evolutionary algorithm based on the combination of a max-min optimization strategy with a Baldwinian trust-region framework employing local surrogate models for reducing the computational cost associated with robust design problems. Empirical results are presented for synthetic test functions and aerodynamic shape design problems to demonstrate that the proposed algorithm converges to robust optimum designs on a limited computational budget.
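The max-min idea at the core of the approach, scoring a design by its worst objective value over perturbations of the design variables, can be sketched independently of the surrogate and trust-region machinery. The one-dimensional test function, perturbation radius, and sampling below are illustrative assumptions, not the paper's benchmark problems.

```python
# Max-min robustness sketch: the robust fitness of a design x is the worst
# (largest, when minimising) objective value over sampled perturbations of x.
import numpy as np

def worst_case(f, x, radius=0.2, n_samples=200, rng=None):
    rng = rng or np.random.default_rng(0)
    return max(f(x + d) for d in rng.uniform(-radius, radius, size=n_samples))

def f(x):
    # Deep but narrow valley at 0 versus a shallower, wide valley at 3.
    return -2.0 * np.exp(-(x / 0.05) ** 2) - 1.0 * np.exp(-((x - 3.0) / 1.0) ** 2)

print("nominal:", f(0.0), f(3.0))                              # narrow valley looks best ...
print("worst-case:", worst_case(f, 0.0), worst_case(f, 3.0))   # ... but is not robust
```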

19.
Large-scale optimal control problems arise in several different contexts from a variety of applications. Many general computational algorithms have been proposed for the solution of these problems and a large literature on the subject abounds. To assimilate this material, it is necessary to overcome several difficulties. For example, the large-scale nature of the problems induces a complicated notation and specialized jargon that sometimes confuses the basic issues. Further, the ad hoc nature of the development of many algorithms tends to add an air of mystery that is difficult to dispel. Consequently, it is not always apparent that there are simple concepts and principles that underlie most, if not all, of these algorithms. In this paper, the general taxonomic scheme proposed by Geoffrion is exploited to provide a systematic framework for viewing computational algorithms for solving large-scale, optimal control problems. To demonstrate the utility of the viewpoint, a new algorithm for solving large-scale, discrete-time nonlinear optimal control problems is presented. The development is directed by the simple concepts and principles that underlie the proposed taxonomy.

20.
It has been observed that mathematical programming algorithms will often require considerably less time for solution when started at a near-optimal solution vector. This paper investigates the improvements in computational efficiency when two set-covering algorithms are initialized via solutions provided by a simple heuristic program. It is observed that reduced computation time (where this time includes that used to obtain the initial heuristic solution), reduced variance in computation time, and increased problem-size capability are to be expected for the combination of set-covering algorithms and heuristic program investigated.
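A typical "simple heuristic program" for warm-starting set covering is the classic greedy rule: repeatedly pick the set that covers the most uncovered elements. The sketch below shows that heuristic under this assumption; the exact set-covering algorithms being initialized, and the paper's particular heuristic, are not reproduced. The resulting cover would be handed to an exact solver as its initial incumbent.

```python
# Greedy set-cover heuristic: at each step choose the set covering the most
# still-uncovered elements, until the universe is covered.
def greedy_set_cover(universe, sets):
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("instance is infeasible: some element is never covered")
        cover.append(best)
        uncovered -= sets[best]
    return cover

universe = range(1, 8)
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 5, 7}]
print(greedy_set_cover(universe, sets))     # e.g. [0, 2, 3] covers 1..7
```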

