Similar Documents
Found 20 similar documents (search time: 453 ms)
1.
This paper investigates the optimization problem of executing a join in a distributed database environment, adopting minimization of the communication cost for sending data through links as the optimization criterion. We explore the approach of judiciously using join operations as reducers in distributed query processing. In general, this problem is computationally intractable; restricting the execution of a join to a pre-defined combinatorial order leads to a solution in polynomial time. An algorithm for chain query computation has been proposed in [21]. The time complexity of that algorithm is O(m²n² + m³n), where n is the number of sites in the network and m is the number of relations (fragments) involved in the join. In this paper, we first present a proof of the intuitively well understood fact that the eigenorder of a chain join is the best pre-defined combinatorial order for implementing the algorithm in [21]. Second, we give a sufficient and necessary condition for a chain query with the eigenordering to be a simple query. For processing the class of simple queries, we show a significant reduction of the time complexity, from O(m²n² + m³n) to O(mn + m²). It is encouraging that, in practice, the most frequent queries belong to the category of simple queries. Editor: Peter Apers
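As a toy illustration of the cost model in play here, the following sketch charges each join step the size of the intermediate result shipped to the next site, in a fixed left-to-right chain order. The relation sizes and join selectivities are hypothetical inputs; this is not the algorithm of [21], which optimizes reducer placement under this kind of cost function.

```python
# Sketch: communication cost of executing a chain join
# R1 x R2 x ... x Rm in a fixed left-to-right order, shipping the
# running intermediate result to the site of the next relation.
# Sizes and selectivities below are assumed toy values.

def chain_join_cost(sizes, selectivities):
    """sizes[i] = tuples in relation i; selectivities[i] = fraction of
    the cross product of neighbors i, i+1 that survives the join."""
    assert len(selectivities) == len(sizes) - 1
    total_shipped = 0
    intermediate = sizes[0]
    for nxt, sel in zip(sizes[1:], selectivities):
        total_shipped += intermediate            # ship result to next site
        intermediate = intermediate * nxt * sel  # estimated join size
    return total_shipped

print(chain_join_cost([1000, 500, 2000], [0.001, 0.002]))
```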

2.
Auer  Peter  Long  Philip M. 《Machine Learning》1999,36(3):147-181
We solve an open problem of Maass and Turán, showing that the optimal mistake bound when learning a given concept class without membership queries is within a constant factor of the optimal number of mistakes plus membership queries required by an algorithm that can ask membership queries. Previously known results imply that the constant factor in our bound is best possible. We then show that, in a natural generalization of the mistake-bound model, the usefulness to the learner of arbitrary yes-no questions between trials is very limited. We show that several natural structural questions about relatives of the mistake-bound model can be answered through the application of this general result. Most of these results can be interpreted as saying that learning in apparently less powerful (and more realistic) models is not much more difficult than learning in more powerful models.
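For intuition about query-free mistake bounds, here is a minimal sketch of the classic halving algorithm, whose log2|C| mistake bound is the kind of benchmark the comparison above is about. The singleton concept class and the presentation order are toy assumptions, not the paper's construction.

```python
from math import log2

# Halving algorithm sketch: predict the majority vote of all concepts
# still consistent with the trial history; each mistake halves the
# version space, giving at most log2|C| mistakes without any
# membership queries.  The class of singletons over 4 points is a toy.

domain = range(4)
concepts = [frozenset([x]) for x in domain]   # class C of singletons
target = concepts[2]

version_space = list(concepts)
mistakes = 0
for x in domain:                              # adversary presents each point
    votes = sum(1 for c in version_space if x in c)
    prediction = votes * 2 > len(version_space)
    truth = x in target
    if prediction != truth:
        mistakes += 1
    version_space = [c for c in version_space if (x in c) == truth]

print(mistakes, "<=", log2(len(concepts)))
```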

3.
We introduce a new model of a learner learning an unknown concept from examples with a teacher's help. In such models, outright coding refers to a situation in which the teacher sends the learner a representation of the concept, either directly or encoded via the examples. Previous models have used adversarial learners or adversarial teachers to try to prevent outright coding. Our model is an attempt to reflect more directly some of the reasons that outright coding is not a common mode of human learning. We model the learner as a Turing machine with oracle access to another programming system, called its function box. The programming system in its function box is initially unknown to the learner. The target concept is a partial recursive function, and the goal of the learner is to find in its function box a function that is equal to or extends the target concept. We exhibit a class of learner/teacher pairs in which the learner can learn any partial recursive function, provided that the learner's function box is not too much slower than the teacher's. This result is shown not to hold if the learner's function box can contain an arbitrary acceptable programming system.

4.
Ben-David  Shai  Eiron  Nadav 《Machine Learning》1998,33(1):87-104
We study the self-directed (SD) learning model. In this model a learner chooses examples, guesses their classification, and receives immediate feedback indicating the correctness of its guesses. We consider several fundamental questions concerning this model: the parameters of a task that determine the cost of learning, the computational complexity of a student, and the relationship between this model and the teacher-directed (TD) learning model. We answer the open problem of relating the cost of self-directed learning to the VC-dimension by showing that no such relation exists. Furthermore, we refute the conjecture that for the intersection-closed case, the cost of self-directed learning is bounded by the VC-dimension. We also show that the cost of SD learning may be arbitrarily higher than that of TD learning. Finally, we discuss the number of queries needed for learning in this model and its relationship to the number of mistakes the student incurs. We prove a trade-off formula showing that an algorithm that makes fewer queries throughout its learning process necessarily suffers a higher number of mistakes.
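A minimal sketch of the self-directed setting, under toy assumptions (threshold concepts over eight points, a greedy most-lopsided-vote selection rule): the learner itself picks each instance, guesses its label, and pays only for wrong guesses.

```python
# Self-directed (SD) learning sketch: the learner chooses the next
# instance, guesses, and receives immediate feedback.  Here it greedily
# picks the point on which the surviving hypotheses agree most, so a
# rare mistake eliminates many of them.  The threshold class is a toy.

domain = list(range(8))
concepts = [frozenset(x for x in domain if x >= t) for t in range(9)]
target = concepts[5]

version_space = list(concepts)
unseen, cost = set(domain), 0
while unseen:
    # pick the instance with the most one-sided vote among survivors
    x = max(unseen, key=lambda p: abs(sum(1 if p in c else -1
                                          for c in version_space)))
    guess = sum(1 for c in version_space if x in c) * 2 > len(version_space)
    truth = x in target
    cost += guess != truth
    version_space = [c for c in version_space if (x in c) == truth]
    unseen.discard(x)

print("SD mistakes:", cost)
```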

5.
Lower Bound Methods and Separation Results for On-Line Learning Models
Maass  Wolfgang  Turán  György 《Machine Learning》1992,9(2-3):107-145
We consider the complexity of concept learning in various common models for on-line learning, focusing on methods for proving lower bounds on the learning complexity of a concept class. Among others, we consider the model for learning with equivalence and membership queries. For this model we give lower bounds on the number of queries that are needed to learn a concept class in terms of its Vapnik-Chervonenkis dimension, and in terms of the complexity of learning with arbitrary equivalence queries. Furthermore, we survey other known lower bound methods and we exhibit all known relationships between learning complexities in the models considered and some relevant combinatorial parameters. As it turns out, the picture is almost complete. This paper has been written so that it can be read without previous knowledge of Computational Learning Theory.
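The adversary view behind such lower bounds can be made concrete: for a finite class, the optimal mistake bound is the value of a max-min game between adversary and learner, computable by the recursion below. The singleton class is an assumed toy example, not one of the paper's separations.

```python
from functools import lru_cache

# Max-min (adversary) characterization sketch: the adversary picks the
# instance and the consistent answer, the learner picks the prediction;
# the game value is the optimal mistake bound of the class.

domain = tuple(range(4))
concepts = frozenset(frozenset([x]) for x in domain)  # toy: singletons

@lru_cache(maxsize=None)
def opt(version_space):
    best = 0
    for x in domain:
        pos = frozenset(c for c in version_space if x in c)
        neg = version_space - pos
        if pos and neg:          # x genuinely splits the survivors
            # learner predicts b; adversary then answers to maximize cost
            value = min(max(opt(pos), 1 + opt(neg)),
                        max(opt(neg), 1 + opt(pos)))
            best = max(best, value)
    return best

print("optimal mistake bound:", opt(concepts))   # 1 for singletons
```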

6.
This paper investigates what happens when a learning algorithm for a class C attempts to learn target formulas from a different class. In many cases, the learning algorithm will find a bad attribute or a property of the target formula which precludes its membership in the class C. To continue the learning process, we proceed by building a decision tree according to the possible values of this attribute (divide) and recursively running the learning algorithm for each value (conquer). This paper shows how to recursively run the learning algorithm for each value using the oracles of the target. We demonstrate that applying this idea to some known learning algorithms can both simplify the algorithm and provide additional power to learn more classes. In particular, we give a simple exact learning algorithm, using membership and equivalence queries, for the class of DNF that is almost unate, that is, unate with the addition of O(log n) nonunate variables and a constant number of terms. We also give algorithms in different models for Boolean functions that depend on k terms.
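A sketch of the divide-and-conquer loop under toy assumptions: the base class C is taken to be the constant functions, the "bad attribute" is chosen naively as the first free variable, and the target oracle is a hypothetical formula. The recursion builds a decision tree with in-class leaves, mirroring the divide/conquer description above.

```python
from itertools import product

# Divide-and-conquer learning sketch: when the base learner rejects the
# target (here: it is not constant), branch on an attribute (divide) and
# recursively learn each restriction via the same oracle (conquer).

def learn(oracle, n, fixed=None):
    fixed = fixed or {}
    free = [i for i in range(n) if i not in fixed]
    points = [{**fixed, **dict(zip(free, bits))}
              for bits in product((0, 1), repeat=len(free))]
    values = {oracle([p[i] for i in range(n)]) for p in points}
    if len(values) == 1:                 # target is constant: in class C
        return values.pop()
    bad = free[0]                        # naive "bad attribute" (divide)
    return (bad,
            learn(oracle, n, {**fixed, bad: 0}),   # conquer each value
            learn(oracle, n, {**fixed, bad: 1}))

target = lambda x: bool(x[0] and not x[2])   # hypothetical target formula
print(learn(target, 3))
```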

7.
8.
Servedio  R. 《Machine Learning》2002,47(2-3):133-151
We describe a novel family of PAC model algorithms for learning linear threshold functions. The new algorithms work by boosting a simple weak learner and exhibit sample complexity bounds remarkably similar to those of known online algorithms such as Perceptron and Winnow, thus suggesting that these well-studied online algorithms in some sense correspond to instances of boosting. We show that the new algorithms can be viewed as natural PAC analogues of the online p-norm algorithms which have recently been studied by Grove, Littlestone, and Schuurmans (1997, Proceedings of the Tenth Annual Conference on Computational Learning Theory, pp. 171–183) and Gentile and Littlestone (1999, Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pp. 1–11). As special cases of the algorithm, by taking p = 2 and p = ∞ we obtain natural boosting-based PAC analogues of Perceptron and Winnow respectively. The p = ∞ case of our algorithm can also be viewed as a generalization (with an improved sample complexity bound) of Jackson and Craven's PAC-model boosting-based algorithm for learning sparse perceptrons (Jackson & Craven, 1996, Advances in Neural Information Processing Systems 8, MIT Press). The analysis of the generalization error of the new algorithms relies on techniques from the theory of large margin classification.
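For contrast with the boosting-based PAC analogues, here is a minimal sketch of the two online algorithms being mirrored: Perceptron's additive update (the p = 2 case) and Winnow's multiplicative update (the p = ∞ case), run on an assumed synthetic disjunction target. This is the classic pair, not the paper's boosting construction.

```python
import random

# Perceptron (additive) vs. Winnow (multiplicative) mistake-driven
# updates on a toy disjunction target; all parameters are assumptions.

random.seed(0)
n, relevant = 20, {1, 5, 7}                    # target: OR of 3 variables

def example():
    x = [random.randint(0, 1) for _ in range(n)]
    return x, int(any(x[i] for i in relevant))

wp, bias = [0.0] * n, 0.0                      # Perceptron state
ww, theta = [1.0] * n, float(n)                # Winnow state
mp = mw = 0

for _ in range(2000):
    x, y = example()
    if int(sum(w * v for w, v in zip(wp, x)) + bias > 0) != y:  # Perceptron
        s = 1 if y else -1
        wp = [w + s * v for w, v in zip(wp, x)]; bias += s; mp += 1
    if int(sum(w * v for w, v in zip(ww, x)) >= theta) != y:    # Winnow
        f = 2.0 if y else 0.5
        ww = [w * (f if v else 1.0) for w, v in zip(ww, x)]; mw += 1

print("Perceptron mistakes:", mp, "Winnow mistakes:", mw)
```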

9.
Through key examples and constructs, exact and approximate, the complexity, computability, and solution of linear programming systems are reexamined in the light of Khachian's new notion of (approximate) solution. Algorithms, basic theorems, and alternate representations are reviewed. It is shown that the Klee-Minty example has never been exponential for (exact) adjacent extreme point algorithms, and that the Balinski-Gomory (exact) algorithm continues to be polynomial in cases where (approximate) ellipsoidal centered-cutoff algorithms (Levin, Shor, Khachian, Gacs-Lovasz) are exponential. By model approximation, both the Klee-Minty and the new J. Clausen examples are shown to be trivial (explicitly solvable) interval programming problems. A new notion of computable (approximate) solution is proposed, together with an a priori regularization for linear programming systems. New polyhedral constraint contraction algorithms are proposed for approximate solution, and the relevance of interval programming for good starts or exact solution is brought forth. It is concluded from all this that the imposed problem ignorance of past complexity research is deleterious to research progress on computability or efficiency of computation. This research was partly supported by Project NR047-071, ONR Contract N00014-80-C-0242, and Project NR047-021, ONR Contract N00014-75-C-0569, with the Center for Cybernetic Studies, The University of Texas at Austin.
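For concreteness, the sketch below generates one common textbook form of the Klee-Minty cube, the example discussed above, as an inequality system A x <= b. The epsilon and the exact variant are assumptions, not necessarily the instance analyzed in the paper.

```python
import numpy as np

# Klee-Minty cube sketch (one textbook form):
#   max x_n  s.t.  0 <= x_1 <= 1,  eps*x_{j-1} <= x_j <= 1 - eps*x_{j-1},
# on which the classic simplex pivot rule visits 2^n vertices.

def klee_minty(n, eps=0.1):
    A, b = [], []
    for j in range(n):
        upper = [0.0] * n; upper[j] = 1.0      #  x_j + eps*x_{j-1} <= 1
        lower = [0.0] * n; lower[j] = -1.0     # -x_j + eps*x_{j-1} <= 0
        if j > 0:
            upper[j - 1] = eps
            lower[j - 1] = eps
        A += [upper, lower]; b += [1.0, 0.0]
    return np.array(A), np.array(b)

A, b = klee_minty(3)
print(A); print(b)
```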

10.
This paper presents aut, a modern Automath checker. It is a straightforward re-implementation of the Zandleven Automath checker from the seventies. It was implemented about five years ago, in the programming language C. It accepts both the AUT-68 and AUT-QE dialects of Automath. This program was written to restore a damaged version of Jutting's translation of Landau's Grundlagen. Some notable features: It is fast; on a 1 GHz machine it will check the full Jutting formalization (736 K of non-whitespace Automath source) in 0.6 seconds. Its implementation of λ-terms does not use named variables or de Bruijn indices (the two common approaches) but instead uses a graph representation, in which variables are represented by pointers to their binder. The program can compile an Automath text into one big Automath single-line-style λ-term, which it outputs using de Bruijn indices. (These λ-terms cannot be checked by modern systems like Coq or Agda, because the λ-typed λ-calculi of de Bruijn differ from the Π-typed λ-calculi of modern type theory.) The source of aut is freely available on the Web.
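The binder-pointer representation mentioned above can be sketched in a few lines: a variable node stores a reference to its λ node, so α-equivalence reduces to matching binders positionally. This mirrors the idea only; aut's actual C data structures are not reproduced here.

```python
# "Pointer to binder" sketch: no variable names, no de Bruijn indices.

class Lam:
    def __init__(self):
        self.body = None          # filled in once the binder exists

class Var:
    def __init__(self, binder):
        self.binder = binder      # direct pointer to a Lam node

class App:
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

def alpha_eq(s, t, pairs=None):
    """Structural equality; binders are matched positionally."""
    pairs = pairs or {}
    if isinstance(s, Var) and isinstance(t, Var):
        return pairs.get(id(s.binder)) == id(t.binder)
    if isinstance(s, Lam) and isinstance(t, Lam):
        return alpha_eq(s.body, t.body, {**pairs, id(s): id(t)})
    if isinstance(s, App) and isinstance(t, App):
        return alpha_eq(s.fun, t.fun, pairs) and alpha_eq(s.arg, t.arg, pairs)
    return False

def identity():                   # build the closed term "lambda x. x"
    lam = Lam(); lam.body = Var(lam); return lam

print(alpha_eq(identity(), identity()))   # True: equal terms, distinct graphs
```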

11.
Certain tasks, such as formal program development and theorem proving, fundamentally rely upon the manipulation of higher-order objects such as functions and predicates. Computing tools intended to assist in performing these tasks are at present inadequate in both the amount of knowledge they contain (i.e., the level of support they provide) and in their ability to learn (i.e., their capacity to enhance that support over time). The application of a relevant machine learning technique, explanation-based generalization (EBG), has thus far been limited to first-order problem representations. We extend EBG to generalize higher-order values, thereby enabling its application to higher-order problem encodings. Logic programming provides a uniform framework in which all aspects of explanation-based generalization and learning may be defined and carried out. First-order Horn logics (e.g., Prolog) are not, however, well suited to higher-order applications. Instead, we employ λProlog, a higher-order logic programming language, as our basic framework for realizing higher-order EBG. In order to capture the distinction between domain theory and training instance upon which EBG relies, we extend λProlog with the necessity operator of modal logic. We develop a meta-interpreter realizing EBG for the extended language and provide examples of higher-order EBG.

12.
Auer  Peter  Long  Philip M.  Maass  Wolfgang  Woeginger  Gerhard J. 《Machine Learning》1995,18(2-3):187-230
The majority of results in computational learning theory are concerned with concept learning, i.e. with the special case of function learning for classes of functions with range {0, 1}. Much less is known about the theory of learning functions with a larger range such as ℕ or ℝ. In particular, relatively few results exist about the general structure of common models for function learning, and there are only very few nontrivial function classes for which positive learning results have been exhibited in any of these models. We introduce in this paper the notion of a binary branching adversary tree for function learning, which allows us to give a somewhat surprising equivalent characterization of the optimal learning cost for learning a class of real-valued functions (in terms of a max-min definition which does not involve any learning model). Another general structural result of this paper relates the cost for learning a union of function classes to the learning costs for the individual function classes. Furthermore, we exhibit an efficient learning algorithm for learning convex piecewise linear functions from ℝ^d into ℝ. Previously, the class of linear functions from ℝ^d into ℝ was the only class of functions with a multidimensional domain that was known to be learnable within the rigorous framework of a formal model for online learning. Finally, we give a sufficient condition on an arbitrary class F of functions from ℝ into ℝ that allows us to learn the class of all functions that can be written as the pointwise maximum of k functions from F. This allows us to exhibit a number of further nontrivial classes of functions from ℝ into ℝ for which there exist efficient learning algorithms.

13.
In this article we give several new results on the complexity of algorithms that learn Boolean functions from quantum queries and quantum examples.
  Hunziker et al. [Quantum Information Processing, to appear] conjectured that for any class C of Boolean functions, the number of quantum black-box queries required to exactly identify an unknown function from C is O(log|C| / √γ̂^C), where γ̂^C is a combinatorial parameter of the class C. We essentially resolve this conjecture in the affirmative by giving a quantum algorithm that, for any class C, identifies any unknown function from C using O((log|C| · log log|C|) / √γ̂^C) quantum black-box queries.
  We consider a range of natural problems intermediate between the exact learning problem (in which the learner must obtain all bits of information about the black-box function) and the usual problem of computing a predicate (in which the learner must obtain only one bit of information about the black-box function). We give positive and negative results on when the quantum and classical query complexities of these intermediate problems are polynomially related to each other.
  Finally, we improve the known lower bounds on the number of quantum examples (as opposed to quantum black-box queries) required for (ε, δ)-PAC learning any concept class of Vapnik-Chervonenkis dimension d. This new lower bound comes closer to matching known upper bounds for classical PAC learning.
PACS: 03.67.Lx, 89.80.+h, 02.70.-c

14.
We present explanation-based learning (EBL) methods aimed at improving the performance of diagnosis systems integrating associational and model-based components. We consider multiple-fault model-based diagnosis (MBD) systems and describe two learning architectures. One, EBLIA, is a method for learning in advance. The other, EBL(p), is a method for learning while doing. EBLIA precompiles models into associations and relies only on the associations during diagnosis. EBL(p) performs compilation during diagnosis whenever reliance on previously learned associational rules results in unsatisfactory performance, as defined by a given performance threshold p. We present results of empirical studies comparing MBD without learning versus EBLIA and EBL(p). The main conclusions are as follows. EBLIA is superior when it is feasible, but it is not feasible for large devices. EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required.
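A minimal sketch of the EBL(p) control loop as described: consult the compiled associations first, fall back to the expensive model-based engine on a miss, and compile new associations while the recent hit rate sits below the threshold p. The toy "model" dict and the windowed hit-rate measure are assumptions standing in for the paper's multiple-fault MBD engine and performance criterion.

```python
from collections import deque

MODEL = {("no_power", "fan_off"): "dead_supply",    # assumed toy model
         ("no_power", "fan_on"): "broken_switch"}

def model_based_diagnose(symptoms):                 # expensive MBD path
    return MODEL[symptoms]

class EBLp:
    def __init__(self, p, window=20):
        self.p, self.cache = p, {}
        self.history = deque(maxlen=window)         # 1 = cache hit, 0 = miss

    def diagnose(self, symptoms):
        if symptoms in self.cache:
            self.history.append(1)
            return self.cache[symptoms]             # associational rule
        self.history.append(0)
        result = model_based_diagnose(symptoms)
        hit_rate = sum(self.history) / len(self.history)
        if hit_rate < self.p:                       # unsatisfactory so far:
            self.cache[symptoms] = result           # compile an association
        return result

d = EBLp(p=0.8)
print(d.diagnose(("no_power", "fan_off")))
print(d.diagnose(("no_power", "fan_off")))          # now answered from cache
```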

15.
Colearning in Differential Games
Sheppard  John W. 《Machine Learning》1998,33(2-3):201-233
Game playing has been a popular problem area for research in artificial intelligence and machine learning for many years. In almost every study of game playing and machine learning, the focus has been on games with a finite set of states and a finite set of actions. Further, most of this research has focused on a single player or team learning how to play against another player or team that is applying a fixed strategy for playing the game. In this paper, we explore multiagent learning in the context of game playing and develop algorithms for colearning, in which all players attempt to learn their optimal strategies simultaneously. Specifically, we address two approaches to colearning, demonstrating strong performance by a memory-based reinforcement learner and comparable but faster performance with a tree-based reinforcement learner.
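A minimal colearning sketch under toy assumptions: two independent stateless reinforcement learners adapt simultaneously in a repeated 2x2 coordination game. The paper's setting is differential (continuous-state) games; this only illustrates all players learning at once rather than one side facing a fixed strategy.

```python
import random

# Colearning sketch: both players run independent reinforcement
# learners at the same time on a coordination game (payoff 1 when
# the actions match).  Game and parameters are assumed toys.

random.seed(1)
ACTIONS = (0, 1)
q = [{a: 0.0 for a in ACTIONS}, {a: 0.0 for a in ACTIONS}]

def choose(table, eps):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(table, key=table.get)

for t in range(5000):
    eps = max(0.01, 1.0 - t / 2000)          # decaying exploration
    a0, a1 = choose(q[0], eps), choose(q[1], eps)
    reward = 1.0 if a0 == a1 else 0.0        # shared coordination payoff
    for i, a in ((0, a0), (1, a1)):
        q[i][a] += 0.1 * (reward - q[i][a])  # stateless value update

print("greedy policies:", max(q[0], key=q[0].get), max(q[1], key=q[1].get))
```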

16.
In this paper, an objective conception of contexts based loosely upon situation theory is developed and formalized. Unlike subjective conceptions, which take contexts to be something like sets of beliefs, contexts on the objective conception are taken to be complex, structured pieces of the world that (in general) contain individuals, other contexts, and propositions about them. An extended first-order language for this account is developed. The language contains complex terms for propositions, and the standard predicate ist that expresses the relation that holds between a context and a proposition just in case the latter is true in the former. The logic for the objective conception features a global classical predicate calculus, a local logic for reasoning within contexts, and axioms for propositions. The specter of paradox is banished from the logic by allowing ist to be nonbivalent in problematic cases: it is not in general the case, for any context c and proposition p, that either ist(c,p) or ist(c, ¬ p). An important representational capability of the logic is illustrated by proving an appropriately modified version of an illustrative theorem from McCarthy's classic Blocks World example.
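The nonbivalent ist relation is easy to sketch: a context object records which propositions it settles, and ist(c, p) returns a third value when c settles neither p nor its negation. The classes and Blocks World facts below are illustrative assumptions, not the paper's formal language.

```python
# Objective contexts sketch: a context is a structured piece of the
# world holding propositions (and possibly sub-contexts); ist(c, p)
# may be nonbivalent (None) when c settles neither p nor its negation.

class Context:
    def __init__(self, true_props=(), false_props=(), subcontexts=()):
        self.true_props = set(true_props)
        self.false_props = set(false_props)
        self.subcontexts = list(subcontexts)   # contexts can nest

def ist(c, p):
    """Three-valued: True, False, or None (undetermined in c)."""
    if p in c.true_props:
        return True
    if p in c.false_props:
        return False
    return None   # neither ist(c, p) nor ist(c, not-p) holds

blocks = Context(true_props={"on(A, B)"}, false_props={"on(B, A)"})
print(ist(blocks, "on(A, B)"), ist(blocks, "on(B, A)"), ist(blocks, "clear(C)"))
```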

17.
When students use computers as learning tools, the whole process of learning, and, indeed, the learners themselves, are transformed. This article illustrates some techniques that foster transformative learning in computer-assisted first-year literature classes: first, a lesson plan on "A Valediction: Forbidding Mourning" that uses Microsoft Word functions, including format painter, tables, and annotation, to explore meaning in context; second, a plan for learners to use subconference options in the Daedalus Interactive Writing Environment to analyze "Oedipus Rex"; finally, a demonstration of how students engage in a meta-reflection process as they explore "Barn Burning" with Freelance Graphics. Marguerite Jamieson is an English instructor at Anne Arundel Community College in Arnold, Maryland, and a doctoral student at George Mason University. Her research interests include forming bridges between adult learning theory and contemporary literary theory, especially drawing on transformational learning theory and the work of Mikhail Bakhtin and Lev Vygotsky. Rebecca Kajs holds a doctorate in English from Texas Woman's University with a concentration in rhetoric. For ten years, she taught the use of heuristic tools for reading analysis at the University of Texas at Arlington. She is currently an associate professor of English and Philosophy at Anne Arundel Community College. Anne Agee holds a doctorate in rhetoric from The Catholic University of America. A professor of English and formerly director of the Humanities Computer Center at Anne Arundel Community College, she is currently the college's Coordinator of Instructional Technology. Dr. Agee and Professor Jamieson have collaborated in a study of the learning environment in a computer classroom, the results of which were published in the Fall 1995 issue of Teaching/Learning Conversations. Dr. Agee has also published "Using [Daedalus] InterChange as a Teachers' Journal" in the Fall 1995 issue of Wings.

18.
We consider the Poisson equation with Dirichlet boundary conditions in a domain Ω \ B̄, where Ω ⊂ ℝⁿ and B is a collection of smooth open subsets (typically balls). The objective is to split the initial problem into two parts: a problem set in the whole domain Ω, for which fast solvers can be used, and local subproblems set in narrow domains around the connected components of B, which can be solved in a fully parallel way. We present a method based on a multi-domain formulation of the initial problem, which leads to a fixed-point algorithm. The convergence of the algorithm is established under some conditions on a relaxation parameter. The dependence of the convergence interval for this parameter upon the geometry is investigated. Some 2D computations based on a finite element discretization of both the global and local problems are presented.
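A minimal 1D analogue of the relaxed fixed-point splitting, under stated assumptions: -u'' = f on (0,1) is solved by alternating damped solves on two overlapping subdomains. The paper's method instead couples a fast solver on the whole domain with local problems around the holes; this sketch only shows the relaxed-iteration structure and its convergence, with an assumed relaxation parameter theta.

```python
import numpy as np

# 1D relaxed alternating-subdomain sketch for -u'' = f, u(0)=u(1)=0.

n = 101
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
f = np.ones(n)
u = np.zeros(n)
theta = 0.8                                # assumed relaxation parameter

def solve_patch(u, lo, hi):
    """Dirichlet solve on grid points lo+1..hi-1, taking boundary
    values from the current global iterate."""
    m = hi - lo - 1
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    return np.linalg.solve(A, b)

for _ in range(50):                        # relaxed fixed-point iteration
    for lo, hi in ((0, 70), (30, n - 1)):
        u[lo + 1:hi] = (1 - theta) * u[lo + 1:hi] + theta * solve_patch(u, lo, hi)

exact = 0.5 * x * (1.0 - x)                # exact solution for f = 1
print("max error:", np.abs(u - exact).max())
```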

19.
Reliable and probably useful learning, proposed by Rivest and Sloan, is a variant of probably approximately correct learning. In this model the hypothesis must never misclassify an instance but is allowed to answer "I don't know" with a low probability. We derive upper and lower bounds for the sample complexity of reliable and probably useful learning in terms of the combinatorial characteristics of the concept class to be learned. This is done by reducing reliable and probably useful learning to learning with one-sided error. The bounds also hold for a slightly weaker model that allows the learner to output, with a low probability, a hypothesis that makes misclassifications. We see that in these models learning with one oracle is more difficult than learning with two oracles. Our results imply that monotone Boolean conjunctions or disjunctions cannot be learned reliably and probably usefully from a polynomial number of examples, and that rectangles in ℝⁿ for n ≥ 2 cannot be learned from any finite number of examples. A preliminary version of this paper appeared under the title "Reliable and useful learning" in Proceedings of the 2nd Annual Workshop on Computational Learning Theory, Morgan Kaufmann, San Mateo, CA, 1989, pp. 365–380. This work was supported by the Academy of Finland.
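The reliability requirement can be sketched with a version space: answer only when every hypothesis consistent with the sample agrees, and say "I don't know" otherwise, so no instance is ever misclassified. The class (monotone conjunctions over three variables) and the partial sample are toy assumptions.

```python
from itertools import product

# Reliable learning sketch: never misclassify; abstain when the
# consistent hypotheses disagree.

n = 3
domain = list(product((0, 1), repeat=n))
concepts = []                                   # all monotone conjunctions
for rel in product((False, True), repeat=n):
    concepts.append(lambda x, rel=rel:
                    all(x[i] for i in range(n) if rel[i]))

target = concepts[5]                            # toy target: x0 AND x2
sample = [(x, target(x)) for x in domain[:4]]   # a partial sample
consistent = [c for c in concepts
              if all(c(x) == y for x, y in sample)]

for x in domain:
    answers = {c(x) for c in consistent}
    verdict = answers.pop() if len(answers) == 1 else "I don't know"
    print(x, verdict)                           # wrong answers never occur
```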

20.
The term F-cardinality of ℱ (= F-card(ℱ)) is introduced, where F: ℝⁿ → ℝⁿ is a partial function and ℱ is a set of partial functions f: ℝⁿ → ℝⁿ. The F-cardinality yields a lower bound for the worst-case complexity of computing F if only functions f ∈ ℱ can be evaluated by the underlying abstract automaton without conditional jumps. This complexity bound is independent of the oracles available for the abstract machine. Thus it is shown that any automaton which can only apply the four basic arithmetic operations needs Ω(n log n) worst-case time to sort n numbers; this result is even true if conditional jumps with arbitrary conditions are possible. The main result of this paper is the following: given a total function F: ℝⁿ → ℝⁿ and a natural number k, it is almost always possible to construct a set ℱ such that its F-cardinality has the value k; in addition, ℱ can be required to be closed under composition of functions f, g ∈ ℱ. Moreover, if F is continuous, then ℱ can be chosen to consist of continuous functions.

