Similar Documents
 20 similar documents found.
1.
This paper extends previous studies on learnability in non-acceptable numberings by considering the question: for which criteria are which numberings optimal, that is, for which numberings does it hold that one can learn every learnable class using the given numbering as hypothesis space? Furthermore, an effective version of optimality is studied as well. It is shown that the effectively optimal numberings for finite learning are exactly the acceptable numberings. In contrast, there are non-acceptable numberings which are optimal for finite learning and effectively optimal for explanatory, vacillatory and behaviourally correct learning. The numberings effectively optimal for explanatory learning are the K-acceptable numberings. A similar characterization is obtained for the numberings which are effectively optimal for vacillatory learning. Furthermore, it is studied which numberings are optimal for one criterion but not for another: among the criteria of finite, explanatory, vacillatory and behaviourally correct learning, all separations can be obtained; however, every numbering which is optimal for explanatory learning is also optimal for consistent learning.

2.
3.
In this paper we introduce a paradigm for learning, in the limit, potentially infinite languages from all positive data together with negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered, reflecting different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider the models where (1) a learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; (3) a counterexample can be delayed. Learning power and limitations of these models, relationships between them, as well as their relationships with classical paradigms for learning languages in the limit (without negative counterexamples), are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as the ones that do not get them for an indefinitely (but finitely) long time (or are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample). Another result shows that for behaviorally correct learning (where semantic convergence is required from a learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the “ultimate power”: it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.
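The protocol underlying this model can be pictured with a toy simulation. The sketch below is my own illustration, not code from the paper: languages are finite sets of natural numbers, the teacher returns the least negative counterexample whenever a conjecture is not a subset of the target, and the learner is a deliberately simple one; all names (`teacher`, `learn`) are hypothetical.

```python
# Toy sketch of learning from positive data plus least negative counterexamples.
# Illustration only: languages are finite sets of naturals.

def teacher(target, conjecture):
    """Return the least negative counterexample, or None if conjecture is a subset."""
    bad = sorted(conjecture - target)
    return bad[0] if bad else None

def learn(target, text):
    """A naive learner: conjecture {0,...,max seen so far} minus every
    counterexample received so far; one counterexample per conjecture."""
    seen, forbidden, conjectures = set(), set(), []
    for x in text:
        seen.add(x)
        guess = set(range(max(seen) + 1)) - forbidden
        cex = teacher(target, guess)
        if cex is not None:
            forbidden.add(cex)
        conjectures.append(guess - forbidden)
    return conjectures

# Target {0, 2, 3} presented as the text 3, 0, 2, 2, 3.
print(learn({0, 2, 3}, [3, 0, 2, 2, 3]))
```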

4.
Information and Computation, 2006, 204(8): 1264–1294
The paper deals with the following problem: is returning to wrong conjectures necessary to achieve the full power of algorithmic learning? Returning to wrong conjectures complements the paradigm of U-shaped learning, in which a learner returns to old correct conjectures. We explore our problem for classical models of learning in the limit from positive data: explanatory learning (where a learner stabilizes in the limit on a correct grammar) and behaviourally correct learning (where a learner stabilizes in the limit on a sequence of correct grammars representing the target concept). In both cases we show that returning to wrong conjectures is necessary to achieve full learning power. In contrast, one can modify learners (without losing learning power) such that they never show inverted U-shaped learning behaviour, that is, never return to an old wrong conjecture with a correct conjecture in between. Furthermore, one can also modify a learner (without losing learning power) such that it does not return to old “overinclusive” conjectures containing non-elements of the target language. We also consider our problem in the context of vacillatory learning (where a learner stabilizes on a finite number of correct grammars) and show that each of the following four constraints is restrictive (that is, reduces learning power): the learner does not return to old wrong conjectures; the learner is not inverted U-shaped; the learner does not return to old overinclusive conjectures; the learner does not return to old overgeneralizing conjectures. We also show that learners that are consistent with the input seen so far can be made decisive: on any text, they do not return to any old conjectures—wrong or right.
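The behavioural properties discussed here can be made concrete on a finite sequence of conjectures. The following is a small sketch of my own (not from the paper) that checks, for a given correctness predicate, whether a learner returns to an old wrong conjecture and whether its behaviour is U-shaped in the learn–unlearn–relearn sense; the function names are hypothetical.

```python
# Toy checks of "returning to wrong conjectures" and U-shaped behaviour on a
# finite conjecture sequence.  `correct(c)` says whether conjecture c is
# correct for the target language.

def returns_to_wrong(conjectures, correct):
    """True iff some wrong conjecture is abandoned and later output again."""
    for i, c in enumerate(conjectures):
        if not correct(c) and c in conjectures[:i] and conjectures[i - 1] != c:
            return True
    return False

def is_u_shaped(conjectures, correct):
    """True iff a correct conjecture is abandoned for a wrong one and a
    correct conjecture is output again later (learn - unlearn - relearn)."""
    seen_correct = abandoned = False
    for c in conjectures:
        if correct(c):
            if abandoned:
                return True
            seen_correct = True
        elif seen_correct:
            abandoned = True
    return False

target = {0, 1}
correct = lambda c: c == target
sequence = [{0}, {0, 1}, {0}, {0, 1}]
print(is_u_shaped(sequence, correct))       # True
print(returns_to_wrong(sequence, correct))  # True
```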

5.
In this paper we consider uncountable classes recognizable by ω-automata and investigate suitable learning paradigms for them. In particular, the counterparts of explanatory, vacillatory and behaviourally correct learning are introduced for this setting. Here the learner reads in parallel the data of a text for a language L from the class plus an ω-index α, and outputs a sequence of ω-automata such that all but finitely many of these ω-automata accept the index α if and only if α is an index for L. It is shown that any class is behaviourally correct learnable if and only if it satisfies Angluin’s tell-tale condition. For explanatory learning, such a result requires that a suitable indexing of the class be chosen. On the one hand, every class satisfying Angluin’s tell-tale condition is vacillatorily learnable in every indexing; on the other hand, there is a fixed class such that the level of the class in the hierarchy of vacillatory learning depends on the indexing chosen. We also consider a notion of blind learning. On the one hand, a class is blind explanatorily (vacillatorily) learnable if and only if it satisfies Angluin’s tell-tale condition and is countable; on the other hand, for behaviourally correct learning, there is no difference between the blind and non-blind version. This work establishes a bridge between the theory of ω-automata and inductive inference (learning theory).

6.
Different formal learning models address different aspects of human learning. Below we compare Gold-style learning—modelling learning as a limiting process in which the learner may change its mind arbitrarily often before converging to a correct hypothesis—to learning via queries—modelling learning as a one-shot process in which the learner is required to identify the target concept with just one hypothesis. In the Gold-style model considered below, the information presented to the learner consists of positive examples for the target concept, whereas in query learning the learner may pose a certain kind of query about the target concept, which will be answered correctly by an oracle (called the teacher). Although these two approaches seem rather unrelated at first glance, we provide characterisations of different models of Gold-style learning (learning in the limit, conservative inference, and behaviourally correct learning) in terms of query learning. Thus we describe the circumstances under which limit learners can be replaced by equally powerful one-shot learners. Our results are valid in the general context of learning indexable classes of recursive languages. This analysis leads to an important observation, namely that there is a natural query learning type located strictly between Gold-style learning in the limit and behaviourally correct learning. Astonishingly, this query learning type can then again be characterised in terms of Gold-style inference.

7.
It is demonstrated that every context-free language is a homomorphic image of the intersection of two DOS languages and that every recursively enumerable language is the homomorphic image of the intersection of three DOS languages. It is also proved that by increasing the number of components in the intersections of DOS languages one gets an infinite hierarchy of classes of languages within the class of context-sensitive languages.

8.
This paper studies the nonterminal complexity of tree controlled grammars. It is proved that the number of nonterminals in tree controlled grammars without erasing rules leads to an infinite hierarchy of families of tree controlled languages, while every recursively enumerable language can be generated by a tree controlled grammar with erasing rules and at most nine nonterminals.
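To recall the mechanism being measured here: a tree controlled grammar is a context-free grammar together with a regular control language, and a derivation tree is accepted only if the string of symbols on every level except the last belongs to that control language. The sketch below is my own illustration of this level condition on an explicitly given derivation tree, not code from the paper; the control language is passed in as a plain predicate.

```python
# Toy check of the control condition of a tree controlled grammar.
# A tree node is (label, [subtrees]).

def levels(tree):
    """Return the string of node labels on each level of the tree."""
    result, frontier = [], [tree]
    while frontier:
        result.append("".join(label for label, _ in frontier))
        frontier = [child for _, children in frontier for child in children]
    return result

def control_ok(tree, in_control_language):
    """Every level except the last must belong to the control language."""
    return all(in_control_language(level) for level in levels(tree)[:-1])

# Example derivation tree for S -> AB, A -> a, B -> b, control language {S, AB}.
tree = ("S", [("A", [("a", [])]), ("B", [("b", [])])])
print(levels(tree))                                  # ['S', 'AB', 'ab']
print(control_ok(tree, lambda w: w in {"S", "AB"}))  # True
```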

9.
This paper solves an important problem left open in the literature by showing that U-shapes are unnecessary in iterative learning from positive data. A U-shape occurs when a learner first learns, then unlearns, and, finally, relearns, some target concept. Iterative learning is a Gold-style learning model in which each of a learner’s output conjectures depends only upon the learner’s most recent conjecture and input element. Previous results had shown, for example, that U-shapes are unnecessary for explanatory learning, but are necessary for behaviorally correct learning. Work on the aforementioned problem led to the consideration of an iterative-like learning model, in which each of a learner’s conjectures may, in addition, depend upon the number of elements so far presented to the learner. Learners in this new model are strictly more powerful than traditional iterative learners, yet not as powerful as full explanatory learners. Can any class of languages learnable in this new model be learned without U-shapes? For now, this problem is left open.
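Iterative learning can be illustrated with a toy example. In the sketch below (my own, not from the paper), the class is {L_n = {0,...,n} : n ∈ ℕ}, a conjecture is simply the index n, and the update uses only the previous conjecture and the current datum, as the iterative model requires; the function names are hypothetical.

```python
# Toy iterative learner for the class { {0,...,n} : n in N }.
# Each conjecture depends only on the previous conjecture and the current
# input element -- there is no access to the full history of the text.

def iterative_update(previous_conjecture, datum):
    """Conjecture (index n) after seeing one more datum."""
    return max(previous_conjecture, datum)

def run(text, initial_conjecture=0):
    conjectures, conj = [], initial_conjecture
    for x in text:
        conj = iterative_update(conj, x)
        conjectures.append(conj)
    return conjectures

print(run([2, 0, 5, 1, 5]))   # [2, 2, 5, 5, 5] -- converges to the index 5
```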

10.
Gold introduced the notion of learning in the limit, where a class S is learnable iff there is a recursive machine M which reads the course of values of a function f and converges to a program for f whenever f is in S. An important measure for the speed of convergence in this model is the number of mind changes before the onset of convergence. The oldest model considers a constant bound on the number of mind changes M makes on any input function; such a bound is referred to here as type 1. Later this was generalized to a bound of type 2, where a counter ranges over constructive ordinals and is counted down at every mind change. Although ordinal bounds permit the inference of richer concept classes than constant bounds, they are still a severe restriction. Therefore the present work introduces two more general approaches to bounding mind changes. These are based on counting down in a linearly ordered set (type 3) and counting down in a partially ordered set (type 4). In both cases the set must not contain infinite descending recursive sequences. These four types of mind change bounds yield a hierarchy, and there are identifiable classes that cannot be learned with the most general mind change bound of type 4. It is shown that the existence of a type 2 bound is equivalent to the existence of a learning algorithm which converges on every (also nonrecursive) input function, and the existence of a type 4 bound is shown to be equivalent to the existence of a learning algorithm which converges on every recursive function. A partial characterization of type 3 yields a result of independent interest in recursion theory. The interplay between mind change complexity and the choice of hypothesis space is investigated. It is established that for certain concept classes, a more expressive hypothesis space can sometimes reduce the mind change complexity of learning these classes. The notion of a mind change bound for behaviourally correct learning is indirectly addressed by employing the above four types to restrict the number of predictive errors of commission in finite error next value learning (NV′′)—a model equivalent to behaviourally correct learning. Again, natural characterizations for type 2 and type 4 bounds are derived. Their naturalness is further illustrated by characterizing them in terms of branches of uniformly recursive families of binary trees.
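A type 1 bound, the simplest of the four, can be pictured as a counter decremented at every mind change. The following sketch (my own illustration, not from the paper, with hypothetical names) wraps an arbitrary sequence of conjectures and suppresses any mind change once the constant budget is exhausted; the ordinal and well-founded-order bounds of types 2–4 generalize exactly this counting-down idea.

```python
# Toy enforcement of a constant (type 1) mind change bound.

def with_mind_change_bound(learner_updates, bound):
    """learner_updates: the conjectures an unrestricted learner would output.
    Returns the sequence actually output when at most `bound` mind changes
    are allowed; further mind changes are suppressed."""
    output, current, remaining = [], None, bound
    for conjecture in learner_updates:
        if current is not None and conjecture != current:
            if remaining == 0:
                conjecture = current        # budget exhausted: keep old guess
            else:
                remaining -= 1
        current = conjecture
        output.append(current)
    return output

print(with_mind_change_bound([1, 1, 2, 3, 3, 4], bound=2))
# -> [1, 1, 2, 3, 3, 3]  -- the third mind change (3 -> 4) is suppressed
```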

11.
In this paper, we prove that every recursively enumerable language can be generated by a scattered context grammar with no more than two context-sensitive productions.
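For readers unfamiliar with the model: a scattered context production (A1,...,An) → (x1,...,xn) rewrites n nonterminals that occur in this order in the sentential form, possibly with other symbols scattered between them, all at once. The sketch below is my own illustration of a single leftmost derivation step, not code accompanying the paper.

```python
# Toy application of one scattered context production.

def apply_scattered(sentential_form, lhs, rhs):
    """sentential_form: list of symbols; lhs, rhs: tuples of equal length.
    Replaces the leftmost occurrences of lhs[0], lhs[1], ... appearing in
    this order, or returns None if the production is not applicable."""
    positions, start = [], 0
    for symbol in lhs:
        try:
            i = sentential_form.index(symbol, start)
        except ValueError:
            return None          # the nonterminals do not occur in order
        positions.append(i)
        start = i + 1
    pos_to_rhs = dict(zip(positions, rhs))
    result = []
    for i, symbol in enumerate(sentential_form):
        if i in pos_to_rhs:
            result.extend(pos_to_rhs[i])   # rhs entries are strings of symbols
        else:
            result.append(symbol)
    return result

# Example: (A, B) -> (aA, Bb) applied to the sentential form A a B.
print(apply_scattered(list("AaB"), ("A", "B"), ("aA", "Bb")))
# -> ['a', 'A', 'a', 'B', 'b']
```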

12.
In this paper, we prove that every recursively enumerable language can be generated by a scattered context grammar with a reduced number of both nonterminals and context-sensitive productions.

13.
A team of learning machines is a multiset of learning machines. A team is said to learn a concept successfully if each member of some nonempty subset, of predetermined size, of the team learns the concept. Team learning of languages may be viewed as a suitable theoretical model for studying computational limits on the use of multiple heuristics in learning from examples. Team learning of recursively enumerable languages has been studied extensively. However, it may be argued that from a practical point of view all languages of interest are computable. This paper gives theoretical results about team learnability of computable (recursive) languages. These results are mainly about two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) Identification in the limit of grammars for computable languages. (b) Identification in the limit of decision procedures for computable languages. (c) Identification in the limit of grammars for indexed families of computable languages. (d) Identification in the limit of grammars for indexed families with a recursively enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented.
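The success criterion itself is simple to state operationally: a team of n learners succeeds on a concept iff at least k of its members (for the predetermined size k) individually learn it. A minimal sketch of my own, not from the paper, with hypothetical names:

```python
# Toy team-learning success criterion: at least k of the n team members
# must individually learn the given concept.

def team_learns(team, k, succeeds):
    """team: list of learners; succeeds(learner) -> bool for this concept."""
    return sum(1 for learner in team if succeeds(learner)) >= k

# Example: 4 learners, at least 2 must succeed.
learners = ["M1", "M2", "M3", "M4"]
succeeded = {"M2", "M4"}
print(team_learns(learners, 2, lambda m: m in succeeded))   # True
```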

14.
A computational model for learning languages in the limit from full positive data and a bounded number of queries to the teacher (oracle) is introduced and explored. Equivalence, superset, and subset queries are considered (for the latter we also consider a variant in which the learner tests every conjecture, but the number of negative answers is uniformly bounded). If the answer is negative, the teacher may provide a counterexample. We consider several types of counterexamples: arbitrary, least counterexamples, ones whose size is bounded by the size of the positive data seen so far, and no counterexamples. A number of hierarchies based on the number of queries (answers) and the types of answers/counterexamples are established. The capabilities of learning with different types of queries are compared. In most cases, one or two queries of one type can sometimes do more than any bounded number of queries of another type. Still, surprisingly, a finite number of subset queries is sufficient to simulate the same number of equivalence queries when behaviourally correct learners do not receive counterexamples and may have an unbounded number of errors in almost all conjectures.

15.
We consider a class of insertion–deletion systems which have not been investigated so far: those without any context controlling the insertion–deletion operations. Rather unexpectedly, we find that context-free insertion–deletion systems characterize the recursively enumerable languages. Moreover, this assertion remains valid for systems with only one axiom, and when using inserted and deleted strings of small length. As direct consequences of the main result, we find that set-conditional insertion–deletion systems with two axioms generate any recursively enumerable language (this solves an open problem), and that membrane systems with one membrane having context-free insertion–deletion rules without conditional use of them generate all recursively enumerable languages (this improves an earlier result). Some open problems are also formulated.
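"Context-free" here means that the insertion and deletion rules apply without inspecting any surrounding symbols: an insertion rule may add its string at any position, and a deletion rule may erase any occurrence of its string. A minimal sketch of my own (not from the paper) of these two primitive operations:

```python
# Toy context-free insertion and deletion steps on strings.

def insertions(word, ins):
    """All words obtained by inserting the string `ins` at some position."""
    return {word[:i] + ins + word[i:] for i in range(len(word) + 1)}

def deletions(word, dele):
    """All words obtained by deleting one occurrence of the string `dele`."""
    out = set()
    i = word.find(dele)
    while i != -1:
        out.add(word[:i] + word[i + len(dele):])
        i = word.find(dele, i + 1)
    return out

print(sorted(insertions("ab", "c")))   # ['abc', 'acb', 'cab']
print(sorted(deletions("abab", "ab"))) # ['ab']
```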

16.
17.
In this paper, we compare the computability and complexity of a continuous real function F with the computability and complexity of the graph G of the function F. A similar analysis is carried out for functions on subspaces of the real line such as the Cantor space, the Baire space and the unit interval. In particular, we define four basic types of effectively closed sets C depending on whether (i) the set of closed intervals having nonempty intersection with C is recursively enumerable (r.e.), (ii) the set of closed intervals having empty intersection with C is r.e., (iii) the set of open intervals having nonempty intersection with C is r.e., and (iv) the set of open intervals having empty intersection with C is r.e. We study the relationships between these four types of effectively closed sets in general, and the relationships between these four types for closed sets which are graphs of continuous functions.

18.
We study, first, the operation of quotient in connection with rational transductions. We show, afterwards, that Rocl, the family of one-counter languages, is closed under quotient by a context-free language. In contrast, every recursively enumerable language is the quotient of two linear languages.
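The quotient operation in question is the usual language quotient, e.g. the right quotient L1/L2 = { x : xy ∈ L1 for some y ∈ L2 }. The sketch below (my own, computed only for finite languages) just spells out this definition; it is an illustration, not code from the paper.

```python
# Right quotient of two finite languages, directly from the definition:
#     L1 / L2 = { x : x + y in L1 for some y in L2 }

def right_quotient(l1, l2):
    return {w[:len(w) - len(y)]
            for w in l1 for y in l2
            if w.endswith(y)}

print(right_quotient({"ab", "aab", "ba"}, {"b"}))   # {'a', 'aa'}
```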

19.
This paper proves that every recursively enumerable language is generated by a scattered context grammar with no more than four nonterminals and three non-context-free productions. In its conclusion, it gives an overview of the results and open problems concerning scattered context grammars and languages.

20.