Similar documents: 20 results found.
1.
An analysis of the alpha-beta pruning algorithm is presented which takes into account both shallow and deep cut-offs. A formula is first developed to measure the average number of terminal nodes examined by the algorithm in a uniform tree of degree n and depth d when ties are allowed among the bottom positions: specifically, all bottom values are assumed to be independent identically distributed random variables drawn from a discrete probability distribution. A worst-case analysis over all possible probability distributions is then presented by considering the limiting case in which the discrete probability distribution tends to a continuous one. The branching factor of the alpha-beta pruning algorithm is shown to grow with n as Θ(n/ln n), confirming a claim by Knuth and Moore that deep cut-offs have only a second-order effect on the behavior of the algorithm.
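For readers who want to see the cut-offs in action, the following is a minimal sketch (not from the paper) of alpha-beta search on a uniform tree of degree n with i.i.d. random leaf values; it counts the terminal nodes actually examined, the quantity the analysis averages.

```python
import random

def alpha_beta(depth, alpha, beta, degree, rng):
    """Negamax alpha-beta on a uniform tree; returns (value, leaves_examined)."""
    if depth == 0:
        return rng.random(), 1            # i.i.d. leaf value, one terminal node
    leaves, value = 0, float("-inf")
    for _ in range(degree):
        child_value, n = alpha_beta(depth - 1, -beta, -alpha, degree, rng)
        leaves += n
        value = max(value, -child_value)
        alpha = max(alpha, value)
        if alpha >= beta:                 # cut-off: remaining siblings are pruned
            break
    return value, leaves

rng = random.Random(0)
value, leaves = alpha_beta(6, float("-inf"), float("inf"), 4, rng)
print(f"root value {value:.3f}, terminal nodes examined: {leaves} of {4**6}")
```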

2.
We present a distributed algorithm for implementing α-β search on a tree of processors. Each processor is an independent computer with its own memory, connected by communication lines to each of its nearest neighbors. Measurements of the algorithm's performance on the Arachne distributed operating system are presented. A theoretical model is developed that predicts a speedup of at least order k^{1/2} with k processors.

3.
An iterative pruning algorithm for feedforward neural networks
The problem of determining the proper size of an artificial neural network is recognized to be crucial, especially for its practical implications in such important issues as learning and generalization. One popular approach to this problem, commonly known as pruning, consists of training a larger-than-necessary network and then removing unnecessary weights/nodes. In this paper, a new pruning method is developed, based on the idea of iteratively eliminating units and adjusting the remaining weights in such a way that the network performance does not worsen over the entire training set. The pruning problem is formulated in terms of solving a system of linear equations, and a very efficient conjugate gradient algorithm is used to solve it in the least-squares sense. The algorithm also provides a simple criterion for choosing the units to be removed, which has proved to work well in practice. The results obtained over various test problems demonstrate the effectiveness of the proposed approach.
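A rough sketch of the core step, under simplifying assumptions (single hidden layer, linear output, and numpy's least-squares solver standing in for the paper's conjugate gradient method): remove the hidden unit whose contribution the remaining units can best reproduce over the training set.

```python
import numpy as np

def prune_one_unit(H, W):
    """H: (N, h) hidden activations over the training set; W: (h, o) output weights.
    Removes the unit whose loss is easiest to compensate; returns (j, W_new, err)."""
    target = H @ W                          # outputs we want to preserve
    best = None
    for j in range(H.shape[1]):
        H_minus = np.delete(H, j, axis=1)
        W_new = np.linalg.lstsq(H_minus, target, rcond=None)[0]
        err = np.linalg.norm(H_minus @ W_new - target)
        if best is None or err < best[2]:
            best = (j, W_new, err)
    return best

rng = np.random.default_rng(0)
H = rng.normal(size=(200, 10))
H[:, 3] = H[:, 1] * 0.5                     # units 1 and 3 are collinear by construction
W = rng.normal(size=(10, 3))
j, W_new, err = prune_one_unit(H, W)
print(f"removed unit {j}, residual {err:.2e}")  # one of the collinear units, err ~ 0
```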

4.
Identifying an appropriate architecture for an artificial neural network (ANN) for a given task is important because the learning and generalisation of an ANN are affected by its structure. In this paper, an online pruning strategy is proposed to participate in the learning process of two constructive networks, i.e. fuzzy ARTMAP (FAM) and fuzzy ARTMAP with dynamic decay adjustment (FAMDDA); the resulting hybrid networks are called FAM/FAMDDA with temporary nodes (FAM-T and FAMDDA-T, respectively). FAM-T and FAMDDA-T are able to reduce network complexity online by removing unrepresentative neurons. The performances of FAM-T and FAMDDA-T are evaluated and compared with those of FAM and FAMDDA using a total of 13 benchmark data sets. To demonstrate the applicability of FAM-T and FAMDDA-T, they are further tested on a real fault detection and diagnosis task in a power plant. The results from both the benchmark studies and the real-world application show that FAMDDA-T and FAM-T yield satisfactory classification performance, with the advantage of parsimonious network structures.

5.
Although ordering-based pruning algorithms are relatively efficient, there remains room for further improvement. To this end, this paper combines a dynamic programming technique with the ensemble-pruning problem. We incorporate dynamic programming into the classical ordering-based ensemble-pruning algorithm with the complementariness measure (ComEP) and, with the help of two auxiliary tables, propose a considerably more efficient dynamic form, which we refer to as ComDPEP. To examine the performance of the proposed algorithm, we conduct a series of simulations on four benchmark classification datasets. The experimental results demonstrate the significantly higher efficiency of ComDPEP over the classic ComEP algorithm. The proposed ComDPEP algorithm also outperforms two other state-of-the-art ordering-based ensemble-pruning algorithms, which use uncertainty weighted accuracy and reduce-error pruning, respectively, as their measures. Notably, the effectiveness of ComDPEP is identical to that of the classical ComEP algorithm; only the efficiency differs.
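A brief sketch of the underlying ordering-based step with the complementariness measure (ComEP-style; the dynamic programming speed-up with its two auxiliary tables is omitted here): at each step, add the classifier that is correct most often on the examples the current subensemble misclassifies.

```python
import numpy as np

def com_ep_order(preds, y):
    """preds: (T, N) member predictions; y: (N,) true labels.
    Returns member indices in order of incorporation."""
    chosen, remaining = [], list(range(len(preds)))
    votes = np.zeros((len(y), int(y.max()) + 1))
    while remaining:
        wrong = votes.argmax(axis=1) != y            # where the subensemble errs
        comp = [np.sum((preds[t] == y) & wrong) for t in remaining]
        best = remaining[int(np.argmax(comp))]
        chosen.append(best); remaining.remove(best)
        votes[np.arange(len(y)), preds[best]] += 1   # add its votes to the pool
    return chosen

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
preds = np.array([np.where(rng.random(200) < p, y, 1 - y)
                  for p in (0.8, 0.7, 0.75, 0.65)])
print(com_ep_order(preds, y))            # accurate, complementary members come first
```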

6.
A comparative analysis of methods for pruning decision trees
In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a top-down approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in the literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, wide experimentation performed on several data sets leads us to conclusions on the predictive accuracy of simplified trees that are opposite to some drawn in the literature. We attribute this divergence to differences in experimental design. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of each method's tendency to overprune/underprune.
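As a concrete reference point, here is a compact sketch of reduced error pruning, the method whose property the authors exploit (the tree representation below is hypothetical): working bottom-up, a subtree is replaced by a leaf whenever that does not increase the error on a separate pruning set.

```python
class Node:
    def __init__(self, feature=None, thresh=None, left=None, right=None, label=None):
        self.feature, self.thresh = feature, thresh
        self.left, self.right, self.label = left, right, label

    def predict(self, x):
        if self.label is not None:
            return self.label
        child = self.left if x[self.feature] <= self.thresh else self.right
        return child.predict(x)

def errors(node, X, y):
    return sum(node.predict(x) != t for x, t in zip(X, y))

def majority(y):
    return max(set(y), key=list(y).count) if len(y) else 0

def reduced_error_prune(node, X, y):
    """Prune `node` against the pruning set (X, y); returns the (new) node."""
    if node.label is not None or not len(y):
        return node
    mask = [x[node.feature] <= node.thresh for x in X]
    Xl = [x for x, m in zip(X, mask) if m]; yl = [t for t, m in zip(y, mask) if m]
    Xr = [x for x, m in zip(X, mask) if not m]; yr = [t for t, m in zip(y, mask) if not m]
    node.left = reduced_error_prune(node.left, Xl, yl)
    node.right = reduced_error_prune(node.right, Xr, yr)
    leaf = Node(label=majority(y))
    if errors(leaf, X, y) <= errors(node, X, y):   # prune if the leaf is no worse
        return leaf
    return node

# Toy tree: x[0] <= 0 -> class 0, else class 1, with a spurious deeper split.
tree = Node(0, 0.0, left=Node(label=0),
            right=Node(1, 0.5, left=Node(label=1), right=Node(label=0)))
Xp = [(1.0, 0.2), (1.0, 0.8), (-1.0, 0.3), (1.0, 0.9)]
yp = [1, 1, 0, 1]
pruned = reduced_error_prune(tree, Xp, yp)
print(pruned.right.label)                          # spurious split collapsed to leaf 1
```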

7.
A critical issue in classification tree design is addressed: obtaining right-sized trees, i.e. trees which neither underfit nor overfit the data. Instead of using stopping rules to halt partitioning, the approach of growing a large tree with pure terminal nodes and selectively pruning it back is adopted. A new, efficient iterative method is proposed to grow and prune classification trees. This method divides the data sample into two subsets and iteratively grows a tree with one subset and prunes it with the other, successively interchanging the roles of the two subsets. The convergence and other properties of the algorithm are established. Theoretical and practical considerations suggest that the iterative tree growing and pruning algorithm should perform better and require less computation than other widely used tree growing and pruning algorithms. Numerical results on a waveform recognition problem are presented to support this view.

8.
Association pattern mining is one of the important branches of data mining research; it aims to discover association or correlation relationships among itemsets. However, traditional mining methods based on the support-confidence framework have some shortcomings: first, they produce too many patterns (both frequent itemsets and rules); second, some of the mined rules are uninteresting to the user, useless, or even misleading. It is therefore necessary to prune useless patterns effectively during the mining process. Introducing chi-square analysis into the correlation measurement of patterns, i.e. using the chi-square test to measure the correlation between itemsets and between a rule's antecedent and consequent, is an effective pruning method. Experimental results show that adding the chi-square test on top of the support measure can effectively prune uncorrelated patterns, thereby reducing the number of frequent itemsets and rules.
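A hedged sketch of the chi-square pruning step described above, using scipy's standard contingency-table test: build a 2x2 table for a rule A -> B from transaction counts and drop the rule when antecedent and consequent are not significantly correlated.

```python
from scipy.stats import chi2_contingency

def keep_rule(n_ab, n_a_notb, n_nota_b, n_nota_notb, alpha=0.05):
    """Counts of transactions containing: A and B, A only, B only, neither."""
    table = [[n_ab, n_a_notb], [n_nota_b, n_nota_notb]]
    chi2, p, dof, expected = chi2_contingency(table)
    return p < alpha                     # keep only significantly correlated rules

print(keep_rule(60, 10, 15, 115))        # antecedent and consequent correlated: True
print(keep_rule(30, 30, 30, 30))         # independent: False, rule is pruned
```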

9.
Starting from the sensitivity matrix, a simple sensitivity definition is proposed that reflects the influence of an individual input node on the performance of the whole network. Based on this definition, an input-layer pruning algorithm for neural networks is then proposed. Finally, the effectiveness of the method is verified on two pattern classification examples from the UCI machine learning repository.
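A minimal sketch of the general idea (this perturbation-based estimate is an assumption, not the paper's exact definition): score each input by the average change it induces in the network output, and prune the weakest.

```python
import numpy as np

def input_sensitivities(predict, X, eps=1e-3):
    """Mean absolute change in the output per unit perturbation of each input."""
    base = predict(X)
    sens = []
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] += eps
        sens.append(np.mean(np.abs(predict(Xp) - base)) / eps)
    return np.array(sens)

# Toy "network": the output depends on inputs 0 and 2 only.
W = np.array([1.0, 0.0, -2.0, 0.0])
predict = lambda X: np.tanh(X @ W)
X = np.random.default_rng(0).normal(size=(100, 4))
s = input_sensitivities(predict, X)
print("sensitivities:", np.round(s, 3), "-> prune input", int(np.argmin(s)))
```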

10.
Architecture selection is a very important aspect of the design of neural networks (NNs), needed to optimally tune performance and computational complexity. Sensitivity analysis has been used successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic is proposed, based on variance analysis, to decide which units to prune. The basic idea is that a parameter whose sensitivity variance is not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units. The new pruning algorithm is also compared with standard pruning algorithms.
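A loose sketch of the variance-analysis heuristic (the thresholds below are made up; the paper uses a proper statistical test): a unit whose per-pattern sensitivities have a mean and a variance both near zero is flagged as irrelevant.

```python
import numpy as np

def irrelevant_units(S, tol=1e-3):
    """S: (N, u) per-pattern sensitivities of the output w.r.t. each unit.
    Flags units whose sensitivity mean and variance are both near zero."""
    return [j for j in range(S.shape[1])
            if np.var(S[:, j]) < tol and abs(np.mean(S[:, j])) < tol]

rng = np.random.default_rng(0)
S = np.column_stack([rng.normal(1.0, 0.5, 300),     # relevant unit
                     rng.normal(0.0, 1e-4, 300)])   # irrelevant unit
print(irrelevant_units(S))                          # -> [1]
```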

11.
A parallel alpha-beta search algorithm called unsynchronized iteratively deepening parallel alpha-beta search is described. The algorithm's simple control strategy and strong performance in complicated positions make it a viable alternative to the principal variation splitting algorithm (PVSA). Processors independently carry out iteratively deepening searches on separate subsets of moves. The iterative deepening is unsynchronized: one processor may be in the middle of the fifth iteration while another is in the middle of the sixth. Narrow windows diminish the importance of backing up a score to the root of the tree as quickly as possible (one of the principal objectives of the PVSA). Speedups measured on one, two, four, and eight chess-playing computers are reported.

12.
Empirical studies on ensemble learning that combines multiple classifiers have shown that it is an effective technique for improving the accuracy and stability of a single classifier. In this paper, we propose a novel method for dynamically building diversified sparse ensembles. We first apply a technique known as canonical correlation analysis (CCA) to model the relationship between the input data variables and the output base classifiers. The canonical (projected) output classifiers and input training data variables are encoded globally through a multi-linear projection of CCA, so as to minimize the impact of noisy input data and incorrect classifiers in this global view. Secondly, based on the projection, a sparse regression method is used to prune representative classifiers in combination with a classifier diversity measure. We evaluate the proposed approach on several datasets, including UCI data and handwritten digit recognition. Experimental results show that the proposed approach achieves better accuracy than other ensemble methods such as QFWEC, Simple Vote Rule, Random Forest, DREP and Adaboost.
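A very loose sketch of the second stage only (the CCA projection is skipped and the data is synthetic; the scikit-learn calls are standard): sparse regression of the true labels on the members' outputs, keeping the members that receive non-zero weight.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300).astype(float)
# Five base classifiers of varying accuracy (p = probability of being correct).
outputs = np.column_stack([np.where(rng.random(300) < p, y, 1 - y)
                           for p in (0.9, 0.55, 0.85, 0.5, 0.8)])
lasso = Lasso(alpha=0.05).fit(outputs, y)
kept = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("pruned ensemble keeps members:", kept)   # members with little signal tend to drop out
```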

13.
Pruning a neural network to a reasonably smaller size, and if possible obtaining better generalization, has long been investigated. Conventionally, common pruning techniques are based on an error-sensitivity measure, and the nature of the problem being solved is usually stationary. In this article, we present an adaptive pruning algorithm for use in a nonstationary environment. The idea relies on the use of the extended Kalman filter (EKF) training method. Since the EKF is a recursive Bayesian algorithm, we define a weight-importance measure in terms of the sensitivity of the a posteriori probability. Making use of this new measure and the adaptive nature of the EKF, we devise an adaptive pruning algorithm called adaptive Bayesian pruning. Simulation results indicate that in a noisy nonstationary environment, the proposed pruning algorithm is able to remove network redundancy adaptively while preserving the same generalization ability.

14.
In this paper forward-pruning methods, such as multi-cut and null move, are tested at so-called ALL nodes. We improved principal variation search with four small but essential additions; the new PVS algorithm guarantees that forward pruning is safe at ALL nodes. Experiments show that multi-cut at ALL nodes (MC-A), when combined with other forward-pruning mechanisms, gives a significant reduction in the number of nodes searched. In comparison, a more aggressive version of the null move (with a variable null-move bound) gives less reduction at expected ALL nodes. Finally, it is demonstrated that the playing strength of the Lines of Action program MIA is significantly increased by MC-A, scoring 21% more winning points than the opponent.
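A hedged sketch of the basic multi-cut idea on a toy tree (the nested-list tree, the crude stand-in evaluator, and the parameter values C, M, R are all made up for illustration; the paper's ALL-node safeguards are not modelled): before searching a node to full depth, search its first M children to a reduced depth, and prune the whole subtree if at least C of them would fail high.

```python
def static_eval(tree):
    while isinstance(tree, list):
        tree = tree[0]                    # crude stand-in evaluator
    return tree

def negamax(tree, depth, alpha, beta, C=2, M=3, R=2):
    if not isinstance(tree, list) or depth == 0:
        return static_eval(tree)
    if depth > R:                         # multi-cut attempt before the full search
        cuts = 0
        for child in tree[:M]:
            if -negamax(child, depth - 1 - R, -beta, -alpha) >= beta:
                cuts += 1
                if cuts >= C:             # enough shallow fail-highs:
                    return beta           # prune the whole subtree
    value = float("-inf")
    for child in tree:
        value = max(value, -negamax(child, depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break
    return value

toy = [[3, [5, 2], 4], [6, 1], [[2, 8], 7]]   # nested lists as a toy game tree
print(negamax(toy, 4, float("-inf"), float("inf")))
```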

15.
16.
Identifying the optimal subset of regressors in a regression bagging ensemble is a difficult task that has exponential cost in the size of the ensemble. In this article we analyze two approximate techniques especially devised to address this problem. The first strategy constructs a relaxed version of the problem that can be solved using semidefinite programming (SDP). The second one is based on modifying the order of aggregation of the regressors. Ordered aggregation is a simple forward selection algorithm that incorporates at each step the regressor that reduces the training error of the current subensemble the most. Both techniques can be used to identify subensembles that are close to the optimal ones, which can be obtained by exhaustive search at a larger computational cost. Experiments on a wide variety of synthetic and real-world regression problems show that pruned ensembles composed of only 20% of the initial regressors often have better generalization performance than the original bagging ensembles. These improvements are due to a reduction in the bias and the covariance components of the generalization error. Subensembles obtained using either SDP or ordered aggregation generally outperform subensembles obtained by other ensemble pruning methods and ensembles generated by the Adaboost.R2 algorithm, negative correlation learning or regularized linear stacked generalization. Ordered aggregation has a slightly better overall performance than SDP in the problems investigated; however, the difference is not statistically significant. Ordered aggregation has the further advantage that it produces a nested sequence of near-optimal subensembles of increasing size with no additional computational cost.
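A short sketch of ordered aggregation (variable names and the toy data are illustrative): greedily incorporate, at each step, the regressor that most reduces the training error of the averaged subensemble.

```python
import numpy as np

def ordered_aggregation(preds, y):
    """preds: (T, N) matrix of each regressor's predictions; y: (N,) targets.
    Returns regressor indices in order of incorporation."""
    T = preds.shape[0]
    chosen, remaining = [], list(range(T))
    running = np.zeros_like(y, dtype=float)
    while remaining:
        errs = [np.mean(((running + preds[t]) / (len(chosen) + 1) - y) ** 2)
                for t in remaining]
        best = remaining[int(np.argmin(errs))]
        chosen.append(best); remaining.remove(best)
        running += preds[best]
    return chosen

rng = np.random.default_rng(0)
y = rng.normal(size=50)
preds = y + rng.normal(scale=[[0.1], [1.0], [0.3], [2.0]], size=(4, 50))
print("aggregation order:", ordered_aggregation(preds, y))  # low-noise regressors first
```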

17.
Many enhancements to the alpha-beta algorithm have been proposed to help reduce the size of minimax trees. A recent enhancement, the history heuristic, which improves the order in which branches are considered at interior nodes, is described. A comprehensive set of experiments is reported that tries all combinations of enhancements to determine which yields the best performance; in contrast, previous work on assessing their performance has concentrated on the benefits of individual enhancements or a few combinations. The aim is to find the combination that provides the greatest reduction in tree size. Results indicate that the history heuristic combined with transposition tables significantly outperforms other alpha-beta enhancements in application-generated game trees. For trees up to depth 8, this combination accounts for 99% of the possible reductions in tree size, with the other enhancements yielding insignificant gains.
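A minimal sketch of the two winning enhancements together, on a made-up toy game tree (nested tuples; moves are child indices): the history table ranks moves by how often they caused cut-offs, and the transposition table caches values by position. A real implementation would also record whether a stored value is exact or only a bound.

```python
history, ttable = {}, {}

def alpha_beta(pos, depth, alpha, beta):
    if not isinstance(pos, tuple):
        return pos                                    # leaf: static value
    cached = ttable.get(pos)
    if cached and cached[0] >= depth:
        return cached[1]                              # transposition-table hit
    moves = sorted(range(len(pos)), key=lambda m: -history.get(m, 0))
    value = float("-inf")
    for m in moves:
        value = max(value, -alpha_beta(pos[m], depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:                             # cut-off: credit this move
            history[m] = history.get(m, 0) + 2 ** depth
            break
    ttable[pos] = (depth, value)
    return value

tree = ((3, (5, 2)), (6, 1), ((2, 8), 7))
print(alpha_beta(tree, 4, float("-inf"), float("inf")))
```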

18.
An algorithm based on state space search is introduced for computing the minimax value of game trees. The new algorithm SSS* is shown to be more efficient than α-β in the sense that SSS* never evaluates a node that α-β can ignore. Moreover, for practical distributions of tip node values, SSS* can expect to do strictly better than α-β in terms of the average number of nodes explored. In order to be more informed than α-β, SSS* sinks paths in parallel across the full breadth of the game tree. The penalty for maintaining these alternate search paths is a large increase in storage requirements relative to α-β. Some execution time data is given which indicates that in some cases the tradeoff of storage for execution time may be favorable to SSS*.

19.
Collective-agreement-based pruning of ensembles
Ensemble methods combine several individual pattern classifiers in order to achieve better classification. The challenge is to choose the minimal number of classifiers that achieves the best performance. An ensemble that contains too many members might incur large storage requirements and even reduce classification performance. The goal of ensemble pruning is to identify a subset of ensemble members that performs at least as well as the original ensemble and to discard all other members. In this paper, we introduce the Collective-Agreement-based Pruning (CAP) method. Rather than ranking individual members, CAP ranks subsets by considering the individual predictive ability of each member along with the degree of redundancy among them. Subsets whose members highly agree with the class while having low inter-agreement are preferred.
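A hedged sketch of the collective-agreement idea (the greedy scoring below follows the abstract's description, not necessarily the paper's exact formula): grow a subset whose members agree strongly with the true labels and weakly with each other.

```python
import numpy as np

def agreement(a, b):
    return np.mean(a == b)

def cap_select(preds, y, k):
    """preds: (T, N) member predictions; y: true labels; k: subset size."""
    chosen, remaining = [], list(range(len(preds)))
    while remaining and len(chosen) < k:
        def score(t):
            rel = agreement(preds[t], y)             # individual predictive ability
            red = (np.mean([agreement(preds[t], preds[c]) for c in chosen])
                   if chosen else 0.0)               # redundancy with current subset
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best); remaining.remove(best)
    return chosen

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 100)
preds = np.array([np.where(rng.random(100) < p, y, 1 - y)
                  for p in (0.9, 0.9, 0.6, 0.85)])
print(cap_select(preds, y, 2))           # picks accurate but non-redundant members
```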

20.
Pattern Analysis and Applications - One of the crucial problems of designing a classifier ensemble is the proper choice of the base classifier line-up. Basically, such an ensemble is formed on the...
