Similar Documents
10 similar documents found.
1.
Bicenters of context-free languages
Summary The notions of bicenter and strict bicenter of a language, defined by A. De Luca, A. Restivo and S. Salemi, generalise the notion of the center of a language defined by M. Nivat. This paper deals with the following question, where ℒ is the family of context-free languages or one of its classical subfamilies: when L is in ℒ, is the bicenter (resp. the strict bicenter) of L also in ℒ? Concerning the notion of bicenter, the main result of the paper is a positive answer when ℒ is a uniform full-AFL of context-free languages.

2.
The simple rational partial functions accepted by generalized sequential machines are shown to coincide with compositions built from the prefix codings (the family P) and their inverses (P⁻¹). The rational functions accepted by generalized sequential machines are proved to coincide with compositions that in addition use the family of endmarkers and the family of removals of endmarkers. (The compositions are read from left to right.) We also show that a composition of the same kind yields the family of the subsequential functions. This work was partially supported by the Esprit Basic Research Action Working Group No. 3166 ASMICS, the CNRS and the Academy of Finland.
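To make the ingredients concrete, here is a minimal Python sketch of a prefix coding, its inverse, and a left-to-right composition of the two. The particular codes CODE_P and CODE_Q and the helper names are invented for illustration; this is not the paper's construction.

```python
# Illustrative sketch: prefix codings and their inverses as string functions.
# The two codes below are arbitrary examples, not taken from the paper.

CODE_P = {"a": "0", "b": "10", "c": "11"}   # a prefix code: no codeword is a prefix of another
CODE_Q = {"a": "1", "b": "01", "c": "00"}

def encode(code, word):
    """The prefix coding: the morphism sending each letter to its codeword."""
    return "".join(code[ch] for ch in word)

def decode(code, text):
    """Inverse of a prefix coding: unique left-to-right decoding, or None if text is not in the image."""
    inv = {v: k for k, v in code.items()}
    out, buf = [], ""
    for bit in text:
        buf += bit
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out) if buf == "" else None

def compose(*fns):
    """Compose partial functions, read from left to right as in the abstract."""
    def h(x):
        for f in fns:
            if x is None:
                return None
            x = f(x)
        return x
    return h

# A composition of an inverse prefix coding followed by a prefix coding:
f = compose(lambda t: decode(CODE_P, t), lambda w: encode(CODE_Q, w))
print(f(encode(CODE_P, "abc")))   # re-encodes "abc" under CODE_Q -> "10100"
```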

3.
One major task in requirements specification is to capture the rules relevant to the problem at hand. Declarative, rule-based approaches have been suggested by many researchers in the field. However, when it comes to modeling large systems of rules, not only for the behavior of the computer system but also for the organizational environment surrounding it, current approaches suffer from limited expressiveness, limited flexibility, and poor comprehensibility. Hence, rule-based approaches may benefit from improvements in two directions: (1) improvement of the rule languages themselves and (2) better integration with other, complementary modeling approaches. In this article, both issues are addressed in an integrated manner. The proposal is presented in the context of the Tempora project on rule-based information systems development, but has also been integrated with PPP. Tempora has provided a rule language based on an executable temporal logic working on top of a temporal database. The rule language is integrated with static (ER-like) and dynamic (SA/RT-like) modeling approaches. In the current proposal, the integration with complementary modeling approaches is extended to include organization modeling (actors, roles), and the expressiveness of the rule language is increased by introducing deontic operators and rule hierarchies. The main contribution of the article is not any single one of the above-mentioned extensions, but the resulting comprehensive modeling support. The approach is illustrated by examples taken from an industrial case study done in connection with Tempora. List of symbols: set relations (subset, not subset, element, not element), equivalence and non-equivalence, logical connectives (negation, and, or, implication), temporal operators (sometime/always in the past/future, just before, just after, until, since), rule components (trigger, condition, state condition, consequence, action, state), organizational concepts (role, actor), deontic operators (O obligatory, R recommended, P permitted, D discouraged, F forbidden), general rule notation, real time t_R, model time t_M.
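As an illustration of the ingredients listed above (deontic operators, triggers, temporal conditions, roles, and rule hierarchies), here is a purely hypothetical Python sketch of how such a rule could be represented. Tempora's actual syntax is not given in the abstract; the Rule fields and the invoice example are invented.

```python
# Hypothetical sketch only: the abstract does not give Tempora's concrete syntax,
# so this just illustrates the ingredients it mentions (deontic operators,
# temporal conditions, roles/actors, and rule hierarchies).
from dataclasses import dataclass, field

DEONTIC = {"O": "obligatory", "R": "recommended", "P": "permitted",
           "D": "discouraged", "F": "forbidden"}

@dataclass
class Rule:
    name: str
    role: str                 # organizational role the rule applies to
    deontic: str              # one of the keys in DEONTIC
    trigger: str              # event that activates the rule
    condition: str            # e.g. a temporal condition over the database history
    consequence: str
    refines: list = field(default_factory=list)   # rule hierarchy: more specific sub-rules

invoice_rule = Rule(
    name="pay-invoice",
    role="accountant",
    deontic="O",
    trigger="invoice_received(i)",
    condition="sometime_in_past(goods_delivered(i))",
    consequence="paid(i) within 30 days",
)
print(DEONTIC[invoice_rule.deontic], "for role", invoice_rule.role)
```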

4.
This paper considers the problem of quantifying literary style and looks at several variables which may be used as stylistic fingerprints of a writer. A review of work done on the statistical analysis of change over time in literary style is then presented, followed by a look at a specific application area, the authorship of Biblical texts. David Holmes is a Principal Lecturer in Statistics at the University of the West of England, Bristol, with specific responsibility for co-ordinating the research programmes in the Department of Mathematical Sciences. He has taught literary style analysis to humanities students since 1983 and has published articles on the statistical analysis of literary style in the Journal of the Royal Statistical Society, History and Computing, and Literary and Linguistic Computing. He presented papers at the ACH/ALLC conferences in 1991 and 1993.
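For readers unfamiliar with stylometric variables, the following Python sketch computes three commonly used ones (mean word length, type-token ratio, and a function-word rate). These are generic examples of "stylistic fingerprints"; they are not necessarily the variables Holmes reviews, and the word list and sample text are made up.

```python
# Illustrative only: typical stylometric variables (word lengths, type-token ratio,
# function-word rate). The abstract does not list the specific variables reviewed.
import re
from collections import Counter

FUNCTION_WORDS = {"the", "and", "of", "to", "in", "that", "it", "with", "as", "for"}

def style_fingerprint(text):
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words)
    counts = Counter(words)
    return {
        "mean_word_length": sum(len(w) for w in words) / n,
        "type_token_ratio": len(counts) / n,
        "function_word_rate": sum(counts[w] for w in FUNCTION_WORDS) / n,
    }

print(style_fingerprint("In the beginning God created the heaven and the earth."))
```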

5.
In this paper, we consider the linear interval tolerance problem, which consists of finding the largest interval vector included in the tolerance solution set Σ_tol([A], [b]) = {x ∈ Rⁿ | ∀A ∈ [A], ∃b ∈ [b]: Ax = b}. We describe two different polyhedra that represent subsets of all possible interval vectors in Σ_tol([A], [b]), and we provide a new definition of the optimality of an interval vector included in Σ_tol([A], [b]). Finally, we show how the Simplex algorithm can be applied to find an optimal interval vector in Σ_tol([A], [b]).
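As a small illustration, the Python sketch below tests whether a single point x lies in Σ_tol([A], [b]), using the standard characterization that a point belongs to the tolerance set exactly when the interval product [A]x is contained in [b]. The paper's Simplex-based construction of an optimal interval vector is not reproduced here, and the sample data are made up.

```python
# Membership check for the tolerance set: x is in Sigma_tol([A],[b]) iff [A]x is
# contained in [b]. For a point x this interval product can be computed from the
# midpoint and radius of [A]. Sample intervals below are invented.
import numpy as np

def in_tolerance_set(A_lo, A_hi, b_lo, b_hi, x):
    Ac = (A_lo + A_hi) / 2          # midpoint matrix of [A]
    Ad = (A_hi - A_lo) / 2          # radius matrix of [A] (componentwise, nonnegative)
    centre = Ac @ x
    radius = Ad @ np.abs(x)
    return np.all(centre - radius >= b_lo) and np.all(centre + radius <= b_hi)

A_lo = np.array([[2.0, -1.0], [0.5, 1.0]])
A_hi = np.array([[3.0, -0.5], [1.0, 1.5]])
b_lo = np.array([-10.0, -10.0])
b_hi = np.array([10.0, 10.0])
print(in_tolerance_set(A_lo, A_hi, b_lo, b_hi, np.array([1.0, 1.0])))   # True
```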

6.
Given a finite set E ⊂ Rⁿ, the problem is to find clusters (or subsets of similar points in E) and at the same time to find the most typical elements of this set. An original mathematical formulation of the problem is given. The proposed algorithm operates on groups of points, called samplings (samplings may be called multiple centers or cores); these samplings adapt and evolve into interesting clusters. Compared with other clustering algorithms, this algorithm requires less machine time and storage. We provide some propositions about nonprobabilistic convergence and a sufficient condition which ensures the decrease of the criterion. Some computational experiments are presented.
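The abstract does not spell out the algorithm, so the following Python sketch is only a loose illustration of the general idea of clustering with a small "core" of representative points per cluster, written as a generic assign/update loop in the spirit of k-medoids. It is not the authors' procedure, and all names and data are invented.

```python
# Loose illustration only: each cluster is represented by a small "core" of points
# (cf. the samplings in the abstract); this is a generic assign/update loop,
# not the algorithm proposed in the paper.
import random

def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def avg_dist_to_core(p, core):
    return sum(dist(p, q) for q in core) / len(core)

def cluster_with_cores(points, k, core_size=2, iters=20, seed=0):
    rng = random.Random(seed)
    cores = [rng.sample(points, core_size) for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                      # assign each point to the nearest core
            i = min(range(k), key=lambda i: avg_dist_to_core(p, cores[i]))
            groups[i].append(p)
        for i, g in enumerate(groups):        # keep the most central points as the new core
            if g:
                cores[i] = sorted(g, key=lambda q: avg_dist_to_core(q, g))[:core_size]
    return cores, groups

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cores, groups = cluster_with_cores(pts, k=2)
print(cores)
```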

7.
When verifying concurrent systems described by transition systems, state explosion is one of the most serious problems. If quantitative temporal information (expressed by clock ticks) is considered, state explosion is even more serious. We present a notion of abstraction of transition systems, where the abstraction is driven by the formulae of a quantitative temporal logic, called qu-mu-calculus, defined in the paper. The abstraction is based on a notion of bisimulation equivalence, called A,n-equivalence, where A is a set of actions and n is a natural number. It is proved that two transition systems are A,n-equivalent iff they give the same truth value to all qu-mu-calculus formulae in which the actions occurring in the modal operators are contained in A and the time constraints have values less than or equal to n. We present a non-standard (abstract) semantics for a timed process algebra, able to produce reduced transition systems for checking formulae. The abstract semantics, parametric with respect to a set of actions A and a natural number n, produces a reduced transition system A,n-equivalent to the standard one. A transformational method is also defined, by means of which it is possible to syntactically transform a program into a smaller one, still preserving A,n-equivalence.
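As a rough rendering of one part of this idea, the Python sketch below computes ordinary bisimilarity on a finite labeled transition system after hiding every action outside a chosen set behind a single label. The time-constraint bound n of the paper's equivalence is deliberately not modeled, and the function and example names are invented.

```python
# Rough sketch under stated assumptions: naive coarsest-bisimulation computation
# in which actions outside a chosen "visible" set are collapsed to one label.
# The paper's A,n-equivalence additionally bounds time constraints by n,
# which this toy example does not model.

def bisimulation(states, trans, visible):
    """trans: set of (source, action, target) triples; visible: actions kept distinct."""
    lab = lambda a: a if a in visible else "other"
    succ = {s: {(lab(a), t) for (src, a, t) in trans if src == s} for s in states}
    rel = {(s, t) for s in states for t in states}   # start from the universal relation
    changed = True
    while changed:                                   # remove pairs violating the transfer condition
        changed = False
        for (s, t) in list(rel):
            forward = all(any(a == b and (s2, t2) in rel for (b, t2) in succ[t])
                          for (a, s2) in succ[s])
            backward = all(any(a == b and (s2, t2) in rel for (b, s2) in succ[s])
                           for (a, t2) in succ[t])
            if not (forward and backward):
                rel.discard((s, t))
                changed = True
    return rel

S = {"p", "q"}
T = {("p", "tick", "p"), ("q", "tock", "q")}
print(("p", "q") in bisimulation(S, T, visible={"a"}))   # True: tick and tock are both hidden
```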

8.
Harnad's proposed robotic upgrade of Turing's Test (TT), from a test of linguistic capacity alone to a Total Turing Test (TTT) of linguistic and sensorimotor capacity, conflicts with his claim that no behavioral test provides even probable warrant for attributions of thought, because there is no evidence of consciousness besides private experience. The intuitive, scientific, and philosophical considerations Harnad offers in favor of his proposed upgrade are unconvincing. I agree with Harnad that distinguishing real from as-if thought on the basis of (presence or lack of) consciousness, thus rejecting Turing (behavioral) testing as sufficient warrant for mental attribution, has the skeptical consequence Harnad accepts: there is in fact no evidence for me that anyone else but me has a mind. I disagree with his acceptance of it! It would be better to give up the neo-Cartesian faith in private conscious experience underlying Harnad's allegiance to Searle's controversial Chinese Room Experiment than to give up all claim to know that others think. It would be better to allow that (passing) Turing's Test evidences, even strongly evidences, thought.

9.
Case report notes on encounters and exchanges between a clinician and a patient are a rich and irreplaceable source of information in studies of psychopathology. The analysis and exploitation of these notes may be considerably enhanced by transcribing the original notes to computer text files and subsequently submitting these files to computerized reading. This makes it possible to take account of both qualitative and quantitative features of the behaviour and events described in the notes. Notes taken during encounters with an autistic subject were analyzed in this way. The subject's verbal and gestural repertoires were identified, together with the relative frequencies of the items described, their principal associations, and their trends over successive encounters. The method also made it possible to specify the way in which the observer was involved in the encounters, and his role in them. Major conclusions were that the autistic subject distinctly avoided triadic situations, preferentially pronounced words and phonemes similar to those of his own name, and did not distinguish between the representations he had of persons, objects, places, gestures and words. He also failed to distinguish between the representation he had of himself and of his own name. J.-M. Vidal (Docteur d'Etat, 1976) is Chargé de Recherche CNRS. He studied the behavioral process of attachment in animals before studying discontinuities of mind between animals and humans, and psychopathological processes of non-attachment in autistic subjects. He has published "Motivation et attachement" in Encyclopédie de la Pléiade, Paris: Gallimard, 1987, and "Evolution des psychismes et évolution des organismes" in Darwinisme et Société, Paris: Presses Universitaires de France, 1992. R. Quris is Ingénieur de Recherche CNRS. He specializes in the application of linear algebraic models in multivariate analysis, which he originally applied to behavioral data from animals. More recently, he extended these applications, with his ANATEXT program, to the analysis of lexical data drawn from clinical dialogues. He is also the author of other multivariate analysis programs: Calmat Matrix Computation Tool, v. 1.4 (1993), Exeter Software, 100 North Country Road, Setauket, NY 11733, and GTABM, a manager of multiple tables, v. 2.0 (1994), CNRS 74E, rue de Paris, 3069 Rennes, France.
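As a generic illustration of this kind of computerized reading (not the ANATEXT program itself), the Python sketch below tallies coded verbal and gestural items per encounter and prints their frequency trend across successive encounters. The item codes and data are invented.

```python
# Generic illustration only: tally coded items per encounter and examine their
# trend across encounters. The coding scheme and data below are invented.
from collections import Counter

# hypothetical coded transcripts, one string of item codes per encounter
encounters = [
    "word:mama gesture:point word:mama",
    "word:mama word:ball gesture:point gesture:point",
    "word:ball gesture:reach word:ball",
]

per_encounter = [Counter(e.split()) for e in encounters]
repertoire = sorted(set().union(*per_encounter))
for item in repertoire:
    trend = [c[item] for c in per_encounter]      # frequency over successive encounters
    print(f"{item:15s} {trend}")
```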

10.
Summary Equivalence is a fundamental notion for the semantic analysis of algebraic specifications. In this paper the notion of crypt-equivalence is introduced and studied w.r.t. two loose approaches to the semantics of an algebraic specification T: the class of all first-order models of T and the class of all term-generated models of T. Two specifications are called crypt-equivalent if for one specification there exists a predicate logic formula which implicitly defines an expansion (by new functions) of every model of that specification in such a way that the expansion (after forgetting unnecessary functions) is homologous to a model of the other specification, and if vice versa there exists another predicate logic formula with the same properties for the other specification. We speak of first-order crypt-equivalence if this holds for all first-order models, and of inductive crypt-equivalence if this holds for all term-generated models. Characterizations and structural properties of these notions are studied. In particular, it is shown that first-order crypt-equivalence is equivalent to the existence of explicit definitions and that, in the case of positive definability, two first-order crypt-equivalent specifications admit the same categories of models and homomorphisms. Similarly, two specifications which are inductively crypt-equivalent via sufficiently complete implicit definitions determine the same associated categories. Moreover, crypt-equivalence is compared with other notions of equivalence for algebraic specifications: in particular, it is shown that first-order crypt-equivalence is strictly coarser than abstract semantic equivalence and that inductive crypt-equivalence is strictly finer than inductive simulation equivalence and implementation equivalence.

