Similar Documents
10 similar documents found (search time: 93 ms)
1.
One of the most important paradigms in the inductive inference literature is that of robust learning. This paper adapts the paradigm of robust learning to learning languages from positive data and investigates it in that setting. Broadening the scope of the paradigm is important: robustness captures a form of invariance of learnability under admissible transformations on the object of study; hence, it is a very desirable property. The key to defining robust learning of languages is to require that the languages be automatic, that is, recognisable by a finite automaton. The invariance property used to capture robustness can then naturally be defined in terms of first-order definable operators, called translators. For several of the learning criteria investigated either in the literature on explanatory learning from positive data or in the literature on query learning, we characterise the classes of languages all of whose translations are learnable under that criterion.

2.
Learning Automata from Ordered Examples
Porat, Sara; Feldman, Jerome A. Machine Learning, 1991, 7(2-3): 109-138
Connectionist learning models have had considerable empirical success, but it is hard to characterize exactly what they learn. The learning of finite-state languages (FSL) from example strings is a domain which has been extensively studied and might provide an opportunity to help understand connectionist learning. A major problem is that traditional FSL learning assumes the storage of all examples and thus violates connectionist principles. This paper presents a provably correct algorithm for inferring any minimum-state deterministic finite-state automaton (FSA) from a complete ordered sample using limited total storage and without storing example strings. The algorithm is an iterative strategy that uses at each stage a current encoding of the data considered so far, and a single sample string. One of the crucial advantages of our algorithm is that the total amount of space used in the course of learning for encoding any finite prefix of the sample is polynomial in the size of the inferred minimum-state deterministic FSA. The algorithm is also relatively efficient in time and has been implemented. More importantly, there is a connectionist version of the algorithm that preserves these properties. The connectionist version requires much more structure than the usual models and has been implemented using the Rochester Connectionist Simulator. We also show that no machine with finite working storage can iteratively identify the FSL from arbitrary presentations.
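The setting can be illustrated with a toy sketch (not the Porat-Feldman algorithm itself): examples arrive in the length-lexicographic order of a complete ordered sample, and a current DFA hypothesis is checked against each labeled string. The target language (even number of a's) and the two-state DFA below are invented for illustration.

```python
from itertools import product

def ordered_sample(alphabet, label, max_len):
    """Yield (string, label) pairs in length-lexicographic order --
    the 'complete ordered sample' presentation assumed by the paper."""
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            w = "".join(tup)
            yield w, label(w)

def run_dfa(delta, start, accept, w):
    """Run a DFA given as a transition dict; accept iff the final state
    is an accepting state."""
    q = start
    for c in w:
        q = delta[(q, c)]
    return q in accept

# Illustrative target finite-state language: even number of a's over {a, b}.
target = lambda w: w.count("a") % 2 == 0

# Minimal two-state DFA hypothesis for that language.
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}

# A learner has converged once its hypothesis agrees with every labeled
# example; here we simply verify agreement on a finite prefix of the sample.
agrees = all(run_dfa(delta, "even", {"even"}, w) == lab
             for w, lab in ordered_sample("ab", target, 6))
```

In the actual algorithm the learner never stores the sample strings themselves, only a bounded encoding of the data seen so far; the sketch above only conveys the presentation order and the notion of a hypothesis agreeing with the sample.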

3.
In this paper we introduce a paradigm for learning in the limit of potentially infinite languages from all positive data and negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered that reflect different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider the models where (1) a learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; (3) a counterexample can be delayed. The learning power and limitations of these models, the relationships between them, as well as their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as ones that do not get them for an indefinitely (but finitely) long time (or are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample). Another result shows that for behaviorally correct learning (where semantic convergence is required from a learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the "ultimate power": it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.
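As a toy illustration of the interaction in model (1), where the teacher returns the least negative counterexample to each conjecture, the following sketch shows a learner shrinking an over-general conjecture; the finite integer "universe" stands in for an enumeration of strings and is invented for illustration:

```python
def least_counterexample(conjecture, target, universe):
    """Teacher for model (1): return the least element of the conjecture
    lying outside the target language, or None if the conjecture is a
    subset of the target (no negative counterexample exists)."""
    for x in universe:                      # universe enumerated in order
        if x in conjecture and x not in target:
            return x
    return None

# Invented toy instance: the 'languages' are sets of small integers.
universe = list(range(10))
target = {x for x in universe if x % 2 == 0}
conjecture = set(universe)                  # deliberately over-general start

history = []
while (ce := least_counterexample(conjecture, target, universe)) is not None:
    history.append(ce)
    conjecture.discard(ce)                  # revise using the counterexample
```

In the paper's setting the learner also receives all positive data and the languages may be infinite, so convergence is only required in the limit; the loop above merely shows the conjecture/counterexample protocol.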

4.
This paper describes a methodology for the specification and analysis of distributed real-time systems using the toolset called PARAGON. PARAGON is based on the Communicating Shared Resources paradigm, which allows a real-time system to be modeled as a set of communicating processes that compete for shared resources. PARAGON supports both visual and textual languages for describing real-time systems. It offers automatic analysis based on state space exploration as well as user-directed simulation. Our experience with using PARAGON in several case studies resulted in a methodology that includes design patterns and abstraction heuristics, as well as an overall process. This paper briefly overviews the communicating shared resource paradigm and its toolset PARAGON, including the textual and visual specification languages. The paper then describes our methodology with special emphasis on heuristics that can be used in PARAGON to reduce the state space. To illustrate the methodology, we use examples from a real-life system case study. This revised version was published online in June 2006 with corrections to the Cover Date.

5.
A team of learning machines is a multiset of learning machines. A team is said to learn a concept successfully if each member of some nonempty subset, of predetermined size, of the team learns the concept. Team learning of languages may be viewed as a suitable theoretical model for studying computational limits on the use of multiple heuristics in learning from examples. Team learning of recursively enumerable languages has been studied extensively. However, it may be argued that from a practical point of view all languages of interest are computable. This paper gives theoretical results about team learnability of computable (recursive) languages. These results are mainly about two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) Identification in the limit of grammars for computable languages. (b) Identification in the limit of decision procedures for computable languages. (c) Identification in the limit of grammars for indexed families of computable languages. (d) Identification in the limit of grammars for indexed families with a recursively enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented. Received March 1998, and in final form January 1999.
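The team success criterion described above can be sketched as follows; the toy "learners" and the parity concept are invented for illustration and stand in for machines identifying grammars in the limit:

```python
def team_learns(team, concept, data, m):
    """A team (a multiset of learners) succeeds iff at least m of its
    members individually converge to the target concept."""
    hypotheses = [learner(data) for learner in team]
    return sum(h == concept for h in hypotheses) >= m

# Invented toy learners guessing whether labeled data describes the
# concept "even"; data is a list of (number, label) pairs.
always_even = lambda data: "even"
always_odd = lambda data: "odd"
majority = lambda data: ("even"
                         if 2 * sum(lab for _, lab in data) >= len(data)
                         else "odd")

data = [(0, 1), (2, 1), (4, 1), (1, 0)]     # mostly positive examples
team = [always_even, always_odd, majority]  # a team of size n = 3
```

Here the team of three succeeds when only two members are required to be correct but fails when all three must be, which is the redundancy trade-off the abstract refers to: raising the required number of successful members can reduce learning power.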

6.
We propose a computing model, the Two-Way Optical Interference Automata (2OIA), that makes use of the phenomenon of optical interference. We introduce this model to investigate the increase in power, in terms of language recognition, of a classical Deterministic Finite Automaton (DFA) when endowed with the facility of interference. The question is in the spirit of Two-Way Finite Automata With Quantum and Classical States (2QCFA) [A. Ambainis, J. Watrous, Two-way finite automata with quantum and classical states, Theoret. Comput. Sci. 287 (1) (2002) 299–311], wherein the classical DFA is augmented with a quantum component of constant size. We test the power of 2OIA against the languages considered in that paper. We give efficient 2OIA algorithms to recognize languages for which 2QCFA machines have been shown to exist, as well as languages whose status vis-à-vis 2QCFA has been posed as an open question. Since it has a DFA as a component, the model trivially recognizes the regular languages. We show that our model can recognize all languages recognized by 1-way deterministic blind counter automata. Finally we show the existence of a language that cannot be recognized by a 2OIA but which can be recognized by an O(n^3) space Turing machine.
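For concreteness, a 1-way deterministic blind counter automaton, the kind of device the model is shown to subsume, can be sketched for the language { a^n b^n : n >= 0 }. Blindness means the counter may be updated but never inspected during the run; acceptance requires the counter to be zero when the input is exhausted:

```python
def blind_counter_accepts(w):
    """1-way deterministic blind 1-counter automaton for { a^n b^n : n >= 0 }.
    The counter is incremented/decremented but never tested mid-run;
    the word is accepted iff the run completes and the counter ends at 0."""
    state, counter = "A", 0
    for c in w:
        if state == "A" and c == "a":
            counter += 1                   # count the a's
        elif state in ("A", "B") and c == "b":
            state, counter = "B", counter - 1   # switch phase, count down
        else:
            return False                   # no transition defined: reject
    return counter == 0                    # blind acceptance check at the end
```

The example is a standard illustration of blind counter automata, not taken from the paper; the paper's claim is that every language recognized by such a device is also recognized by a 2OIA.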

7.
We analyze the properties of probabilistic reversible decide-and-halt automata (DH-PRA) and show that there is a strong relationship between DH-PRA and 1-way quantum automata. We show that a general class of regular languages is not recognizable by DH-PRA by proving that two "forbidden" constructions in minimal deterministic automata correspond to languages not recognizable by DH-PRA. The shown class is identical to a class known to be not recognizable by 1-way quantum automata. We also prove that the class of languages recognizable by DH-PRA is not closed under union and other non-trivial Boolean operations.

8.
We develop theory on the efficiency of identifying (learning) timed automata. In particular, we show that (i) deterministic timed automata cannot be identified efficiently in the limit from labeled data, and (ii) one-clock deterministic timed automata can be identified efficiently in the limit from labeled data. We prove these results based on the distinguishability of these classes of timed automata. More specifically, we prove that the languages of deterministic timed automata cannot, while those of one-clock deterministic timed automata can, be distinguished from each other using strings of length bounded by a polynomial. In addition, we provide an algorithm that identifies one-clock deterministic timed automata efficiently from labeled data. Our results have interesting consequences for the power of clocks that are of interest beyond the scope of the identification problem.
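A minimal sketch of running a one-clock deterministic timed automaton over a timed word (symbols with absolute, increasing timestamps); the guard/reset encoding and the toy automaton are assumptions for illustration, not the paper's identification algorithm:

```python
def run_one_clock_dta(transitions, start, accept, timed_word):
    """Run a one-clock deterministic timed automaton.
    transitions maps (state, symbol) -> (guard, reset, next_state), where
    guard is a predicate on the single clock's value and reset says
    whether the clock is set back to 0 when the transition fires."""
    state, clock, last = start, 0.0, 0.0
    for sym, t in timed_word:              # timestamps are absolute, increasing
        clock += t - last                  # let time elapse since last event
        last = t
        key = (state, sym)
        if key not in transitions:
            return False                   # no transition defined: reject
    #  check the guard on the current clock value
        guard, reset, nxt = transitions[key]
        if not guard(clock):
            return False
        if reset:
            clock = 0.0
        state = nxt
    return state in accept

# Toy automaton: accept words where each 'a' arrives within 1 time unit
# of the previous one (the single clock is reset on every 'a').
trans = {("q", "a"): (lambda x: x <= 1.0, True, "q")}
ok = run_one_clock_dta(trans, "q", {"q"}, [("a", 0.5), ("a", 1.2), ("a", 2.0)])
bad = run_one_clock_dta(trans, "q", {"q"}, [("a", 0.5), ("a", 2.0)])
```

The single clock is exactly what makes these automata polynomially distinguishable in the paper's sense; with multiple clocks the analogous short distinguishing strings need not exist.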

9.
Synchronous programming is available through several formally defined languages having very different characteristics: Esterel is imperative, while Lustre and Signal are declarative in style; Statecharts and Argos are graphical languages that allow one to program by constructing hierarchical automata. Our motivation for taking the synchronous design paradigm further, integrating imperative, declarative (or dataflow), and graphical programming styles, is that real systems typically have components that match each of these profiles. This paper motivates our interest in the mixed-language programming of embedded software through a number of examples, and sketches the semantical foundation of the Synchronie toolset, which ensures a coherent computational model. This toolset supports a design trajectory that incorporates rapid prototyping and systematic testing for early design validation, an object-oriented development methodology for long-term software management, and formal verification at the level of automatically generated object code.
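The declarative (dataflow) style can be sketched in Python as a Lustre-like node; the equation it mimics, n = 0 -> if reset then 0 else pre(n) + 1, is a standard Lustre idiom, and the sketch is not Synchronie or Lustre syntax:

```python
def lustre_counter(resets):
    """Sketch of a Lustre-style synchronous node: all streams advance
    together one tick at a time, and the counter restarts whenever the
    boolean 'reset' stream is true. Mimics the Lustre equation
    n = 0 -> if reset then 0 else pre(n) + 1."""
    n, out = 0, []
    for tick, reset in enumerate(resets):
        if tick == 0 or reset:
            n = 0              # initial tick, or an explicit reset
        else:
            n += 1             # pre(n) + 1: previous value plus one
        out.append(n)
    return out
```

The synchronous hypothesis is visible in the loop structure: every stream produces exactly one value per tick, so composition of nodes stays deterministic, which is what lets the toolset give imperative, declarative, and graphical components one coherent computational model.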

10.
A recent paper by Bouajjani, Muscholl and Touili shows that the class of languages accepted by partially ordered word automata (or, equivalently, accepted by Σ2-formulae) is closed under semi-commutation, and it suggests the following open question: can this result be extended to tree languages? We address this problem by proving (1) that the class of regular tree languages accepted by Σ2-formulae is strictly included in the class of languages accepted by partially ordered tree automata, and (2) that the results of Bouajjani et al. cannot be extended to trees.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号