Similar Articles
20 similar articles found.
1.
Not all instances in a data set are equally beneficial for inferring a model of the data, and some instances (such as outliers) can be detrimental. Several machine learning techniques, such as curriculum learning, filtering, and boosting, treat the instances in a data set differently during training. However, it is difficult to determine how beneficial an instance is for inferring a model of the data. In this article, we present an automated method for supervised classification problems that orders the instances in a data set by complexity based on their likelihood of being misclassified (instance hardness), producing a hardness ordering. The underlying assumption of this method is that instances with a high likelihood of being misclassified represent more complex concepts in a data set. Using a hardness ordering allows a learning algorithm to focus on the most beneficial instances. We integrate a hardness ordering into the learning process using curriculum learning, filtering, and boosting. We find that focusing on the simpler instances during training significantly increases generalization accuracy. Also, the effects of curriculum learning depend on the learning algorithm that is used. In general, filtering and boosting outperform curriculum learning, and filtering has the most significant effect on accuracy. © 2014 Wiley Periodicals, Inc.
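The order-then-filter idea can be illustrated with a small sketch. This is not the authors' exact procedure: the decision-tree classifier, the number of cross-validation repeats, and the 0.5 hardness threshold are illustrative assumptions.

```python
# Sketch, not the paper's method: estimate instance hardness as the fraction of
# times an instance is misclassified under repeated cross-validation, then drop
# the hardest instances before training the final model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def instance_hardness(X, y, n_repeats=10):
    miss = np.zeros(len(y))
    for seed in range(n_repeats):
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
        pred = cross_val_predict(DecisionTreeClassifier(random_state=seed), X, y, cv=cv)
        miss += (pred != y)
    return miss / n_repeats              # hardness in [0, 1] per instance

X, y = load_iris(return_X_y=True)
hardness = instance_hardness(X, y)
keep = hardness < 0.5                    # illustrative threshold: filter likely outliers
model = DecisionTreeClassifier().fit(X[keep], y[keep])
```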

2.
Complexity of Hard-Core Set Proofs (total citations: 1; self-citations: 1; other citations: 0)
We study a fundamental result of Impagliazzo (FOCS’95) known as the hard-core set lemma. Consider any function $f:\{0,1\}^n\to\{0,1\}$ which is “mildly hard”, in the sense that any circuit of size $s$ must disagree with $f$ on at least a $\delta$ fraction of inputs. Then, the hard-core set lemma says that $f$ must have a hard-core set $H$ of density $\delta$ on which it is “extremely hard”, in the sense that any circuit of size $s'=O(s/(\frac{1}{\epsilon^2}\log(\frac{1}{\epsilon\delta})))$ must disagree with $f$ on at least a $(1-\epsilon)/2$ fraction of inputs from $H$.
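For readability, the lemma as stated in the abstract can be written as a single display. Reading "density $\delta$" as $|H|\ge\delta 2^n$ is the standard convention and is an assumption here.

```latex
% Hard-core set lemma, with the parameters used in the abstract above.
% Assumption: "density \delta" is read as |H| \ge \delta 2^n.
\[
\Bigl(\forall C,\ |C|\le s:\ \Pr_{x\in\{0,1\}^n}\bigl[C(x)\ne f(x)\bigr]\ge\delta\Bigr)
\;\Longrightarrow\;
\exists H,\ |H|\ge\delta 2^{n}\ \text{such that}\
\forall C',\ |C'|\le s'=O\!\Bigl(s\big/\bigl(\tfrac{1}{\epsilon^{2}}\log\tfrac{1}{\epsilon\delta}\bigr)\Bigr):\
\Pr_{x\in H}\bigl[C'(x)\ne f(x)\bigr]\ \ge\ \tfrac{1-\epsilon}{2}.
\]
```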

3.
Intersection-closed classes of concepts arise naturally in many contexts and have been intensively studied in computational learning theory. In this paper, we study intersection-closed classes that contain the concepts invariant under an operation satisfying a certain algebraic condition. We give a learning algorithm in the exact model with equivalence queries for such classes. This algorithm utilizes a novel encoding scheme, which we call a signature.
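The generic equivalence-query strategy for intersection-closed classes is easy to sketch: keep the hypothesis equal to the closure (smallest concept in the class) of the positive counterexamples seen so far. The sketch below instantiates this for axis-parallel boxes over a small grid; the grid size and the box class are illustrative assumptions, and the paper's signature-based encoding is not shown.

```python
# Closure-based equivalence-query learner for an intersection-closed class,
# sketched for axis-parallel boxes over a finite grid (illustrative example only).
from itertools import product

GRID, DIM = range(5), 2            # finite domain {0,...,4}^2 so queries can be simulated

def closure(points):
    """Smallest axis-parallel box containing the points (None = empty concept)."""
    if not points:
        return None
    lows = tuple(min(p[i] for p in points) for i in range(DIM))
    highs = tuple(max(p[i] for p in points) for i in range(DIM))
    return (lows, highs)

def member(box, x):
    return box is not None and all(lo <= xi <= hi for xi, lo, hi in zip(x, box[0], box[1]))

def equivalence_query(hypothesis, target):
    """Simulated oracle: return a point where hypothesis and target disagree, or None."""
    for x in product(GRID, repeat=DIM):
        if member(hypothesis, x) != member(target, x):
            return x
    return None

def learn(target):
    positives, hypothesis = [], None                    # start with the empty concept
    while (cex := equivalence_query(hypothesis, target)) is not None:
        # The hypothesis is always the closure of seen positives, hence a subset of
        # the target, so every counterexample is a positive example of the target.
        positives.append(cex)
        hypothesis = closure(positives)
    return hypothesis

print(learn(((1, 1), (3, 4))))                          # recovers ((1, 1), (3, 4))
```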

4.
In this paper, we describe how a basic strategy from computational learning theory can be used to attack a class of NP‐hard combinatorial optimization problems. It turns out that the learning strategy can be used as an iterative booster: given a solution to the combinatorial problem, we will start an efficient simulation of a learning algorithm which has a “good chance” to output an improved solution. This boosting technique is a new and surprisingly simple application of an existing learning strategy. It yields a novel heuristic approach to attack NP‐hard optimization problems. It does not apply to each combinatorial problem, but we are able to exactly formalize some sufficient conditions. The new technique applies, for instance, to the problems of minimizing a deterministic finite automaton relative to a given domain, the analogous problem for ordered binary decision diagrams, and to graph coloring. This revised version was published online in June 2006 with corrections to the Cover Date.

5.
This paper analyzes the theoretical shortcomings of the FOIL (first-order inductive) algorithm for learning recursive predicates and the resulting limits on its range of application, illustrating them in detail with two examples. To overcome this defect, the paper introduces the instance graph H(R,E), which captures the essential relationship between a recursive rule set R and the instance space E, together with the notion of an instance ordering, and thereby lays the theoretical foundation for the algorithm. On this basis, the instance-graph-based algorithm FOILPlus is presented. By operating on dangling instances and dangling arcs, the algorithm keeps track of the instance ordering and naturally prevents the generation of pathological recursive rules, which guarantees that FOILPlus can complete learning tasks without being restricted by an ordering over constants; at the same time, its time and space complexity does not increase compared with FOIL. FOILPlus has been implemented and was applied to two recursive learning tasks on which FOIL fails; both were learned successfully.

6.
Working in the framework of PAC-learning theory, we present special statistics for accomplishing in polynomial time proper learning of DNF boolean formulas having a fixed number of monomials. Our statistics turn out to be near sufficient for a large family of distribution laws – that we call butterfly distributions. We develop a theory of most powerful learning for analyzing the performance of learning algorithms, with particular reference to trade-offs between power and computational costs. Focusing attention on sample and time complexity, we prove that our algorithm works as efficiently as the best algorithms existing in the literature – while the latter only take care of subclasses of our family of distributions.

7.

In this paper, we propose a new unsupervised feature selection method based on kernel Fisher discriminant analysis and regression learning. Existing feature selection methods are based on either manifold learning or discriminative techniques, each of which has some shortcomings. Although some studies show the advantages of two-step methods benefiting from both manifold learning and discriminative techniques, a joint formulation has been shown to be more efficient. To do so, we construct a global discriminant objective term of a clustering framework based on the kernel method. We add another term of regression learning into the objective function, which drives the optimization to select a low-dimensional representation of the original dataset. We use the L2,1-norm of the features to impose a sparse structure upon the features, which can result in more discriminative features. We propose an algorithm to solve the optimization problem introduced in this paper. We further discuss convergence, parameter sensitivity, computational complexity, as well as the clustering and classification accuracy of the proposed algorithm. In order to demonstrate the effectiveness of the proposed algorithm, we perform a set of experiments on different available datasets. The results obtained by the proposed algorithm are compared against the state-of-the-art algorithms. These results show that our method outperforms the existing state-of-the-art methods in many cases on different datasets, but the improved performance comes at the cost of increased time complexity.
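The role of the L2,1-norm can be sketched on its own: penalizing the row-wise L2 norms of a projection matrix drives whole rows to zero, and the surviving rows identify selected features. The sketch below is not the paper's joint kernel-discriminant formulation; the target matrix Y merely stands in for the cluster-indicator matrix of their unsupervised framework, and the plain proximal-gradient solver and the lam/iters settings are illustrative assumptions.

```python
# Sketch of the L2,1-regularized regression ingredient only (assumptions noted above).
import numpy as np

def l21_feature_scores(X, Y, lam=0.5, iters=500):
    """Minimize ||XW - Y||_F^2 + lam * sum_i ||W_i.||_2, then score feature i by ||W_i.||_2."""
    d = X.shape[1]
    W = np.zeros((d, Y.shape[1]))
    lr = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 + 1e-12)      # step size <= 1/L of the smooth part
    for _ in range(iters):
        V = W - lr * (2 * X.T @ (X @ W - Y))                # gradient step on the squared loss
        norms = np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1e-12)
        W = np.maximum(0.0, 1.0 - lam * lr / norms) * V     # row-wise soft threshold (prox of L2,1)
    return np.linalg.norm(W, axis=1)                        # larger score = more relevant feature

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
W_true = np.zeros((20, 3)); W_true[:4] = rng.normal(size=(4, 3))   # only features 0-3 matter
Y = X @ W_true + 0.01 * rng.normal(size=(100, 3))
print(np.argsort(l21_feature_scores(X, Y))[::-1][:4])              # indices of the top 4 features
```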


8.
We give a generic construction of an optimal hitting set generator (HSG) from any good “reconstructive” disperser. Past constructions of optimal HSGs have been based on such disperser constructions, but have had to modify the construction in a complicated way to meet the stringent efficiency requirements of HSGs. The construction in this paper uses existing disperser constructions with the “easiest” parameter setting in a black-box fashion to give new constructions of optimal HSGs without any additional complications. Our results show that a straightforward composition of the Nisan-Wigderson pseudorandom generator that is similar to the composition in works by Impagliazzo, Shaltiel and Wigderson in fact yields optimal HSGs (in contrast to the “near-optimal” HSGs constructed in those works). Our results also give optimal HSGs that do not use any form of hardness amplification or implicit list-decoding—like Trevisan’s extractor, the only ingredients are combinatorial designs and any good list-decodable error-correcting code. A preliminary version of this paper appeared in RANDOM 2005. Supported by NSF CCF-0346991, BSF 2004329, a Sloan Research Fellowship, and an Okawa Foundation research grant.

9.
The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example generating distributions that underlie these tasks. We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach. Editors: Daniel Silver, Kristin Bennett, Richard Caruana. A preliminary version of this paper appears in the proceedings of COLT’03, (Ben-David and Schuller 2003).

10.
The theory of average case complexity studies the expected complexity of computational tasks under various specific distributions on the instances, rather than their worst case complexity. Thus, this theory deals with distributional problems, defined as pairs each consisting of a decision problem and a probability distribution over the instances. While for applications utilizing hardness, such as cryptography, one seeks an efficient algorithm that outputs random instances of some problem that are hard for any algorithm with high probability, the resulting hard distributions in these cases are typically highly artificial, and do not establish the hardness of the problem under “interesting” or “natural” distributions. This paper studies the possibility of proving generic hardness results (i.e., for a wide class of $\mathcal{NP}$-complete problems), under “natural” distributions. Since it is not clear how to define a class of “natural” distributions for general $\mathcal{NP}$-complete problems, one possibility is to impose some strong computational constraint on the distributions, with the intention of this constraint being to force the distributions to “look natural”. Levin, in his seminal paper on average case complexity from 1984, defined such a class of distributions, which he called P-computable distributions. He then showed that the $\mathcal{NP}$-complete Tiling problem, under some P-computable distribution, is hard for the complexity class of distributional $\mathcal{NP}$ problems (i.e. $\mathcal{NP}$ with P-computable distributions). However, since then very few $\mathcal{NP}$-complete problems (coupled with P-computable distributions), and in particular “natural” problems, were shown to be hard in this sense. In this paper we show that all natural $\mathcal{NP}$-complete problems can be coupled with P-computable distributions such that the resulting distributional problem is hard for distributional $\mathcal{NP}$.

11.
Schapire and Singer's improved version of AdaBoost for handling weak hypotheses with confidence rated predictions represents an important advance in the theory and practice of boosting. Its success results from a more efficient use of information in weak hypotheses during updating. Instead of simple binary voting a weak hypothesis is allowed to vote for or against a classification with a variable strength or confidence. The Pool Adjacent Violators (PAV) algorithm is a method for converting a score into a probability. We show how PAV may be applied to a weak hypothesis to yield a new weak hypothesis which is in a sense an ideal confidence rated prediction and that this leads to an optimal updating for AdaBoost. The result is a new algorithm which we term PAV-AdaBoost. We give several examples illustrating problems for which this new algorithm provides advantages in performance. Editor: Robert Schapire
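PAV itself is simple enough to sketch: sort examples by score and repeatedly merge adjacent blocks that violate monotonicity, using each block's mean label as its calibrated probability. The sketch below shows plain PAV calibration only, not the PAV-AdaBoost update; variable names and the toy input are illustrative.

```python
def pav_calibrate(scores, labels):
    """Map classifier scores to monotone probabilities via Pool Adjacent Violators."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks = []                                   # each block: [sum_of_labels, count]
    for i in order:
        blocks.append([float(labels[i]), 1])
        # Merge backwards while an adjacent pair violates monotonicity of block means.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] >= blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    calibrated, pos = [0.0] * len(scores), 0
    for s, c in blocks:                           # expand block means back to original positions
        for _ in range(c):
            calibrated[order[pos]] = s / c
            pos += 1
    return calibrated

print(pav_calibrate([0.1, 0.4, 0.35, 0.8], [0, 1, 0, 1]))   # [0.0, 1.0, 0.0, 1.0]
```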

12.
Multi-Class Learning by Smoothed Boosting (total citations: 1; self-citations: 0; other citations: 1)
AdaBoost.OC has been shown to be an effective method for boosting “weak” binary classifiers for multi-class learning. It employs the Error-Correcting Output Code (ECOC) method to convert a multi-class learning problem into a set of binary classification problems, and applies the AdaBoost algorithm to solve them efficiently. One of the main drawbacks of the AdaBoost.OC algorithm is that it is sensitive to noisy examples and tends to overfit training examples when they are noisy. In this paper, we propose a new boosting algorithm, named “MSmoothBoost”, which introduces a smoothing mechanism into the boosting procedure to explicitly address the overfitting problem of AdaBoost.OC. We prove bounds for both the empirical training error and the marginal training error of the proposed boosting algorithm. Empirical studies with seven UCI datasets and one real-world application indicate that the proposed boosting algorithm is more robust and effective than AdaBoost.OC for multi-class learning. Editor: Nicolo Cesa-Bianchi
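The ECOC reduction that AdaBoost.OC builds on can be sketched on its own: each class gets a binary codeword, one binary learner is trained per codeword bit, and a test point is assigned the class whose codeword is closest in Hamming distance to the vector of binary predictions. The random code, the code length, and the decision-tree base learner below are illustrative assumptions; the smoothing mechanism of MSmoothBoost is not shown.

```python
# Minimal ECOC reduction sketch (not AdaBoost.OC or MSmoothBoost themselves).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.tree import DecisionTreeClassifier

def train_ecoc(X, y, n_bits=15, seed=0):
    classes = np.unique(y)
    code = np.random.default_rng(seed).integers(0, 2, size=(len(classes), n_bits))
    learners = []
    for b in range(n_bits):
        relabeled = code[np.searchsorted(classes, y), b]     # binary problem for codeword bit b
        learners.append(DecisionTreeClassifier(max_depth=3, random_state=seed).fit(X, relabeled))
    return classes, code, learners

def predict_ecoc(model, X):
    classes, code, learners = model
    bits = np.column_stack([clf.predict(X) for clf in learners])   # predicted codeword per example
    dists = (bits[:, None, :] != code[None, :, :]).sum(axis=2)     # Hamming distance to each class code
    return classes[dists.argmin(axis=1)]

X, y = load_digits(return_X_y=True)
model = train_ecoc(X[:1000], y[:1000])
print((predict_ecoc(model, X[1000:]) == y[1000:]).mean())          # rough holdout accuracy
```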

13.
In the l-phylogeny problem, one wishes to construct an evolutionary tree for a set of species represented by characters, in which each state of each character induces no more than l connected components. We consider the fixed-topology version of this problem for fixed topologies of arbitrary degree. This version of the problem is known to be NP-complete for l ≥ 3 even for degree-3 trees in which no state labels more than l+1 leaves (and therefore there is a trivial (l+1)-phylogeny). We give a 2-approximation algorithm for all l ≥ 3 for arbitrary input topologies and we give an optimal approximation algorithm that constructs a 4-phylogeny when a 3-phylogeny exists. Dynamic programming techniques, which are typically used in fixed-topology problems, cannot be applied to l-phylogeny problems. Our 2-approximation algorithm is the first application of linear programming to approximation algorithms for phylogeny problems. We extend our results to a related problem in which characters are polymorphic. Received June 9, 1997; revised March 13, 1998.

14.
Accurate prediction of the generalization ability of a learning algorithm is an important problem in computational learning theory. The classical Vapnik-Chervonenkis (VC) generalization bounds are too general and therefore overestimate the expected error. Recently obtained data-dependent bounds are still overestimated. To find out why the bounds are loose, we reject the uniform convergence principle and apply a purely combinatorial approach that is free of any probabilistic assumptions, makes no approximations, and provides an empirical control of looseness. We introduce new data-dependent complexity measures: a local shatter coefficient and a nonscalar local shatter profile, which can give much tighter bounds than the classical VC shatter coefficient. An experiment on real datasets shows that the effective local measures may take very small values; thus, the effective local VC dimension takes values in [0, 1] and therefore is not related to the dimension of the space. Konstantin Vorontsov. Born 1971. Graduated from the Faculty of Control and Applied Mathematics, Moscow Institute of Physics and Technology, in 1994. Received candidate’s degree in 1999. Currently is with the Dorodnicyn Computing Centre, Russian Academy of Sciences. Deputy director for research of Forecsys company (). Scientific interests: computational learning theory, machine learning, data mining, probability theory, and combinatorics. Author of 40 papers. Homepage:

15.
We give the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution on the Boolean hypercube to within any constant error parameter. We also give the first quasipolynomial time algorithm for learning any Boolean function of a polylog number of polynomial-weight halfspaces under any distribution on the Boolean hypercube. As special cases of these results we obtain algorithms for learning intersections and thresholds of halfspaces. Our uniform distribution learning algorithms involve a novel non-geometric approach to learning halfspaces; we use Fourier techniques together with a careful analysis of the noise sensitivity of functions of halfspaces. Our algorithms for learning under any distribution use techniques from real approximation theory to construct low-degree polynomial threshold functions. Finally, we also observe that any function of a constant number of polynomial-weight halfspaces can be learned in polynomial time in the model of exact learning from membership and equivalence queries.

16.
In previous work [V. Biazzo, A. Gilio, T. Lukasiewicz and G. Sanfilippo, Probabilistic logic under coherence, model-theoretic probabilistic logic, and default reasoning in System P, Journal of Applied Non-Classical Logics 12(2) (2002) 189–213.], we have explored the relationship between probabilistic reasoning under coherence and model-theoretic probabilistic reasoning. In particular, we have shown that the notions of g-coherence and of g-coherent entailment in probabilistic reasoning under coherence can be expressed by combining notions in model-theoretic probabilistic reasoning with concepts from default reasoning. In this paper, we continue this line of research. Based on the above semantic results, we draw a precise picture of the computational complexity of probabilistic reasoning under coherence. Moreover, we introduce transformations for probabilistic reasoning under coherence, which reduce an instance of deciding g-coherence or of computing tight intervals under g-coherent entailment to a smaller problem instance, and which can be done very efficiently. Furthermore, we present new algorithms for deciding g-coherence and for computing tight intervals under g-coherent entailment, which reformulate previous algorithms using terminology from default reasoning. They are based on reductions to standard problems in model-theoretic probabilistic reasoning, which in turn can be reduced to linear optimization problems. Hence, efficient techniques for model-theoretic probabilistic reasoning can immediately be applied for probabilistic reasoning under coherence (for example, column generation techniques). We describe several such techniques, which transform problem instances in model-theoretic probabilistic reasoning into smaller problem instances. We also describe a technique for obtaining a reduced set of variables for the associated linear optimization problems in the conjunctive case, and give new characterizations of this reduced set as a set of non-decomposable variables, and using the concept of random gain. This paper is a substantially extended and revised version of a preliminary paper that appeared in: Proceedings of the Second International Symposium on Imprecise Probabilities and Their Applications (ISIPTA '01), pp. 51–61, 2001.

17.
Two planar figures are similar if a scaled version of one of them can be moved so that it coincides with the second figure. The problem of checking whether two planar figures are similar is relevant to both computational geometry and pattern recognition. An efficient algorithm is known for checking whether two polygons P and Q are similar [1]. The purpose of this note is to give an efficient algorithm for checking whether two planar figures P and Q are similar when the figures are no longer constrained to be polygons. We give an O(n log n) time algorithm for solving this problem when each figure consists of a collection of (possibly intersecting) straight line segments, circles, and ellipses. Our algorithm can easily be modified for figures which include other geometric patterns as well. We also prove that our algorithm is optimal. This work was partially supported by the Office of Naval Research under Contract N00014-84-K-0502.
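For the polygonal special case mentioned above, a standard approach reduces similarity testing to cyclic sequence matching: encode each polygon by its sequence of (edge length, turn angle) pairs, normalize lengths by the perimeter, and check whether one sequence is a cyclic rotation of the other. The sketch below is only that textbook idea, not the paper's O(n log n) algorithm for general figures; it assumes simple polygons given with the same orientation and uses a quadratic rotation check for clarity.

```python
import math

def signature(poly):
    """Cyclic sequence of (normalized edge length, signed turn angle) for a polygon
    given as a list of (x, y) vertices in order (no repeated or collinear vertices assumed)."""
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0], poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    perim = sum(math.hypot(dx, dy) for dx, dy in edges)
    sig = []
    for i in range(n):
        dx1, dy1 = edges[i]
        dx2, dy2 = edges[(i + 1) % n]
        turn = math.atan2(dx1 * dy2 - dy1 * dx2, dx1 * dx2 + dy1 * dy2)   # signed turn angle
        sig.append((math.hypot(dx1, dy1) / perim, turn))
    return sig

def similar(p, q, tol=1e-9):
    """True iff q is a direct (orientation-preserving) similarity image of p."""
    a, b = signature(p), signature(q)
    if len(a) != len(b):
        return False
    n = len(a)
    for shift in range(n):                       # O(n^2); cyclic string matching would give O(n)
        if all(abs(a[i][0] - b[(i + shift) % n][0]) < tol and
               abs(a[i][1] - b[(i + shift) % n][1]) < tol for i in range(n)):
            return True
    return False

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
rotated = [(5, 5), (5, 8), (2, 8), (2, 5)]       # scaled by 1.5, rotated by 90 degrees, translated
print(similar(square, rotated))                  # True
```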

18.
We consider the following polynomial congruences problem: given a prime p and a set of m polynomials of total degree at most d, solve the system of congruences for solution(s) over $\mathbb{Z}_p$. We give a randomized algorithm for the decision version of this problem. For a fixed number of variables, the sequential version of the algorithm has expected time complexity polynomial in d, m and log p; the parallel version has expected time complexity polylogarithmic in d, m and p, using a number of processors polynomial in d, m and log p. The only point which prevents the algorithm from being deterministic is the lack of a deterministic polynomial time algorithm for factoring univariate polynomials over $\mathbb{Z}_p$. Received: April 9, 1997.

19.
Branch & Reduce and dynamic programming on graphs of bounded treewidth are among the most common and powerful techniques used in the design of moderately exponential time exact algorithms for NP-hard problems. In this paper we discuss the efficiency of simple algorithms based on combinations of these techniques. The idea behind these algorithms is very natural: if a parameter like the treewidth of a graph is small, algorithms based on dynamic programming perform well; on the other hand, if the treewidth is large, then there must be vertices of high degree in the graph, which is good for branching algorithms. We give several examples of possible combinations of branching and dynamic programming which provide the fastest known algorithms for a number of NP-hard problems. All our algorithms require non-trivial balancing of these two techniques. In the first approach the algorithm either performs fast branching, or, if there is an obstacle to fast branching, this obstacle is used for the construction of a path decomposition of small width for the original graph. Using this approach we give the fastest known algorithms for Minimum Maximal Matching and for counting all 3-colorings of a graph. In the second approach, branching occurs until the algorithm reaches a subproblem with a small number of edges (and here the right choice of the size of subproblems is crucial), and then dynamic programming is applied to these subproblems of small width. We exemplify this approach by giving the fastest known algorithm to count all minimum weighted dominating sets of a graph. We also discuss how similar techniques can be used to design faster parameterized algorithms. A preliminary version of this paper appeared as Branching and Treewidth Based Exact Algorithms in the Proceedings of the 17th International Symposium on Algorithms and Computation (ISAAC 2006) [15]. Additional support by the Research Council of Norway.
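The general pattern, branch while the graph has high-degree vertices and solve directly once the remaining structure is simple, can be illustrated on Minimum Vertex Cover: branch on a vertex of degree at least 3, and once the maximum degree drops to 2 the graph is a disjoint union of paths and cycles, which are solved in closed form. This is only an illustration of the balancing idea, not one of the paper's algorithms (which use path/tree decompositions and target other problems).

```python
def min_vertex_cover(adj):
    """Size of a minimum vertex cover; adj maps each vertex to its set of neighbours."""
    g = {v: set(ns) for v, ns in adj.items() if ns}          # drop isolated vertices
    if not g:
        return 0
    v = max(g, key=lambda u: len(g[u]))
    if len(g[v]) >= 3:
        # Branch: either v is in the cover, or all of N(v) is.
        return min(1 + min_vertex_cover(_remove(g, {v})),
                   len(g[v]) + min_vertex_cover(_remove(g, set(g[v]))))
    # Maximum degree <= 2: every component is a path or a cycle; solve directly.
    total, seen = 0, set()
    for start in g:
        if start in seen:
            continue
        comp = _component(g, start)
        seen |= comp
        n, edges = len(comp), sum(len(g[u]) for u in comp) // 2
        total += (n + 1) // 2 if edges == n else n // 2      # cycle: ceil(n/2); path: floor(n/2)
    return total

def _remove(g, removed):
    return {v: ns - removed for v, ns in g.items() if v not in removed}

def _component(g, start):
    comp, stack = {start}, [start]
    while stack:
        for w in g[stack.pop()]:
            if w not in comp:
                comp.add(w)
                stack.append(w)
    return comp

triangle_plus_path = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
print(min_vertex_cover(triangle_plus_path))                  # 3, e.g. {2, 3, 4}
```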

20.
Naor and Naor [11] implicitly isolate an odd number of elements of a nonempty set of n-bit vectors. We perform a tighter analysis of their construction and obtain better probability bounds. Using this construction, we improve bounds on several results in complexity theory that originally used a construction due to Valiant and Vazirani [18]. In particular, we obtain better bounds on polynomials which approximate boolean functions; improve bounds on the running time of the ⊕P machine in Toda's result [16]; and improve bounds on the size of threshold circuits accepting languages accepted by AC0 circuits. Received July 1993; in revised form January 1995; in final form February 1997.
