Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
This paper gives a fresh look at my previous work on “epistemic actions” and information updates in distributed systems, from a coalgebraic perspective. I show that the “relational” semantics of epistemic programs, given in [BMS2] in terms of epistemic updates, can be understood in terms of functors on the category of coalgebras and the natural transformations associated to them. I then introduce a new, alternative, more refined semantics for epistemic programs: programs as “epistemic coalgebras”. I argue for the advantages of this second semantics from semantic, heuristic, syntactic and proof-theoretic points of view. Finally, as a step towards a generalization, I show that these concepts make sense for other functors, and that apparently unrelated concepts, such as Bayesian belief updates and process transformations, arise in the same way as our “epistemic actions”.

3.
“Fuzzy Functions”, determined by the least squares estimation (LSE) technique, are proposed for the development of fuzzy system models. These “Fuzzy Functions with LSE” are put forward as alternative representation and reasoning schemas to fuzzy rule base approaches. Such “Fuzzy Functions” can be obtained and implemented more easily by those who do not have in-depth knowledge of fuzzy theory: working knowledge of a fuzzy clustering algorithm such as FCM, or one of its variations, is sufficient to obtain membership values of input vectors. The membership values, together with the scalar input variables, are then used by the LSE technique to determine a “Fuzzy Function” for each cluster identified by FCM. These functions differ from both “Fuzzy Rule Base” and “Fuzzy Regression” approaches. Various transformations of the membership values are included as new variables in addition to the originally selected scalar input variables; at times, a logistic transformation of non-scalar originally selected input variables may also be included as a new variable. A comparison of the “Fuzzy Functions-LSE” approach with the Ordinary Least Squares Estimation (OLSE) approach shows that “Fuzzy Functions-LSE” produces results that are better by roughly 10% or more with respect to the RMSE measure, for both training and test cases of the data sets.
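A minimal sketch of this scheme, not the authors' exact formulation: it assumes the FCM membership matrix U has already been computed, and fits one least-squares function per cluster, using the membership value and a few transformations of it as extra regressors. The particular transformations and all function names are illustrative.

```python
import numpy as np

def fit_fuzzy_functions(X, y, U):
    """Fit one least-squares 'fuzzy function' per cluster.

    X : (N, d) scalar input variables
    y : (N,)   target values
    U : (N, c) FCM membership values of each sample in each cluster
    Returns a list of coefficient vectors, one per cluster.
    """
    models = []
    for j in range(U.shape[1]):
        u = U[:, j:j+1]
        # Augment the inputs with the membership value and simple
        # transformations of it (illustrative choices of transformation).
        Phi = np.hstack([np.ones_like(u), X, u, u**2, np.exp(u)])
        beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        models.append(beta)
    return models

def predict(X, U, models):
    """Membership-weighted combination of the per-cluster functions."""
    preds = np.zeros(X.shape[0])
    for j, beta in enumerate(models):
        u = U[:, j:j+1]
        Phi = np.hstack([np.ones_like(u), X, u, u**2, np.exp(u)])
        preds += U[:, j] * (Phi @ beta)
    # FCM memberships sum to 1 per sample, so this is a convex combination.
    return preds / U.sum(axis=1)
```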

4.
Model transformations written for an input metamodel may often apply to other metamodels that share similar concepts. For example, a transformation written to refactor Java models can be applicable to refactoring UML class diagrams, as both languages share concepts such as classes, methods, attributes, and inheritance. Motivated by this example, we present an approach to making model transformations reusable, so that they function correctly across several similar metamodels. Our approach relies on two principal steps: (1) We analyze a transformation to obtain the effective subset of concepts it uses, and prune the input metamodel of the transformation to obtain an effective input metamodel containing that subset. The effective input metamodel represents the true input domain of the transformation. (2) We adapt a target input metamodel by weaving it with aspects, such as properties derived from the effective input metamodel. This adaptation makes the target metamodel a subtype of the effective input metamodel, which ensures that the transformation can process models conforming to the target input metamodel without any change to the transformation itself. We validate our approach by adapting well-known refactoring transformations (Encapsulate Field, Move Method, and Pull Up Method) written for an in-house domain-specific modeling language (DSML) to three different industry-standard metamodels (Java, MOF, and UML).
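A deliberately simplified sketch of the two steps, under the assumption that a metamodel can be reduced to a mapping from classes to property sets (real metamodels also carry types, references, and inheritance); all names are illustrative.

```python
def effective_metamodel(metamodel, used):
    """Step 1: prune the input metamodel to the concepts the
    transformation actually uses. `used` maps class -> properties."""
    return {cls: metamodel[cls] & props
            for cls, props in used.items() if cls in metamodel}

def adapt(target, effective):
    """Step 2: weave missing classes/properties into the target
    metamodel so it becomes a structural subtype of the effective one."""
    adapted = {cls: set(props) for cls, props in target.items()}
    for cls, props in effective.items():
        adapted.setdefault(cls, set()).update(props)
    return adapted

def is_subtype(candidate, supertype):
    """Structural check: every class and property of the effective input
    metamodel must be present in the adapted target metamodel."""
    return all(cls in candidate and props <= candidate[cls]
               for cls, props in supertype.items())

# Illustrative data: a Java-like target and a pruned effective metamodel.
java_like = {"Class": {"name", "methods"}, "Method": {"name", "body"}}
effective = effective_metamodel(
    {"Class": {"name", "methods", "fields"}, "Method": {"name", "body"}},
    {"Class": {"name", "methods"}, "Method": {"name"}})
assert is_subtype(adapt(java_like, effective), effective)
```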

5.
Some computationally hard problems, such as deduction in logical knowledge bases, are such that part of an instance is known well before the rest of it and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call these problems compilable), while for others, compilability would imply consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for these classes, where “completeness” means they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms but generically “decreases” complexity. This leads us to define “hierarchies of compilability,” which are the analog of the polynomial hierarchy. A detailed comparison of our framework with the idea of “parameterized tractability” shows the differences between the two approaches.
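A toy illustration of the off-line/on-line split, not the paper's formal framework: here the fixed part of the instance is a graph, compiled off-line into a reachability table, after which each on-line query is answered in constant time. The choice of problem and all names are illustrative.

```python
from itertools import product

def compile_offline(vertices, edges):
    """Off-line phase: preprocess the fixed part of the instance (a graph)
    into a reachability table via Floyd-Warshall-style transitive closure.
    Potentially expensive, but the cost is paid only once."""
    reach = {(u, v): u == v or (u, v) in edges
             for u, v in product(vertices, repeat=2)}
    for k, u, v in product(vertices, repeat=3):  # k varies slowest
        if reach[(u, k)] and reach[(k, v)]:
            reach[(u, v)] = True
    return reach

def solve_online(reach, query):
    """On-line phase: each query on the varying part of the instance
    is now answered in constant time from the compiled table."""
    return reach[query]

table = compile_offline({1, 2, 3}, {(1, 2), (2, 3)})
assert solve_online(table, (1, 3)) and not solve_online(table, (3, 1))
```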

6.
Context: Model transformations play a key role in any software development project based on Model-Driven Engineering principles. However, despite the inherent complexity of developing model transformations, little attention has been paid to the application of MDE principles to the development of model transformations themselves.
Objective: In order to (a) address the inherent complexity of model transformation development and (b) alleviate the problem of the diversity of the languages that are available for model transformation, this paper proposes the application of MDE principles to the development of model transformations. In particular, we have adopted the idea of handling model transformations as transformation models, in order to be able to model, transform and generate model transformations.
Method: The proposal follows an MDA-based approach that entails modeling model transformations at different abstraction levels and connecting these models by means of model transformations. It has been empirically validated by conducting a set of case studies following a systematic research methodology.
Results: The proposal is supported by MeTAGeM, a methodological and technical framework for the model-driven development of model transformations that bundles a set of Domain-Specific Languages for modeling model transformations with a set of model transformations that bridge these languages and (semi-)automate model transformation development.
Conclusion: This paper serves to show that a semi-automatic development process for model transformations is not only desirable but feasible. This process, based on MDE principles, helps to ease the task of developing model transformations and to alleviate interoperability issues between model transformation languages.

7.
In this work an extension to the classical Event Graphs formalism for discrete-event simulation is presented. The extension is oriented towards the specification of component-based models. The abstract syntax has been defined through meta-modelling. Several methodological issues are discussed, concerning the use of two different meta-modelling levels versus collapsing the language into a single level in which “instance-of” relationships are used between processes and their classes. The operational semantics has been defined through graph transformation; this formal definition enables analysis before code is generated from the model. The syntax and semantics of the visual language have been implemented in the multi-paradigm tool AToM3, together with a code generator that produces stand-alone applications able to run the analysed models in real time.
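For readers unfamiliar with the formalism, the sketch below is a minimal textbook-style Event Graph engine, not the AToM3 implementation described here: vertices are event routines that change state, and each edge carries a condition, a delay, and a target event. The single-server queue and all timings are illustrative.

```python
import heapq

def simulate(initial_state, initial_events, events, edges, until):
    """Minimal Event Graph engine: vertices are event routines that change
    state; edges (source -> [(condition, delay, target), ...]) schedule
    further event occurrences on a time-ordered agenda."""
    state = dict(initial_state)
    agenda = list(initial_events)          # (time, event-name) pairs
    heapq.heapify(agenda)
    while agenda:
        t, name = heapq.heappop(agenda)
        if t > until:
            break
        events[name](state)                # execute the event's state change
        for condition, delay, target in edges.get(name, []):
            if condition(state):           # conditions tested after the change
                heapq.heappush(agenda, (t + delay, target))
    return state

# The classic single-server queue as an Event Graph (illustrative timings).
EVENTS = {
    "ARRIVE": lambda s: s.update(queue=s["queue"] + 1),
    "START":  lambda s: s.update(queue=s["queue"] - 1, busy=True),
    "LEAVE":  lambda s: s.update(busy=False, served=s["served"] + 1),
}
EDGES = {
    "ARRIVE": [(lambda s: True, 1.0, "ARRIVE"),           # next arrival
               (lambda s: not s["busy"], 0.0, "START")],  # serve if idle
    "START":  [(lambda s: True, 0.7, "LEAVE")],           # service time
    "LEAVE":  [(lambda s: s["queue"] > 0, 0.0, "START")], # pull next customer
}

print(simulate({"queue": 0, "busy": False, "served": 0},
               [(0.0, "ARRIVE")], EVENTS, EDGES, until=10.0))
```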

8.
We propose hGRDDL (pronounced “h-griddle”), a simple mechanism for transforming ad hoc HTML-embedded structured data, such as microformats, into RDFa. This technique preserves the advantages of the original syntax, notably the correspondence between the rendered HTML and the related structured data, and requires little change on the publisher end. RDFa tool developers can leverage the existing deployments of microformats, while focusing new deployments on RDFa for greater extensibility and consistency, all using the same client-side toolset. We provide a prototype implementation of the hGRDDL processor and of transformations for hCard and hCal, two popular microformats.
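A toy version of such a rewrite for hCard, in the spirit of the approach but not the authors' prototype: it assumes the BeautifulSoup library, and the class-to-property mapping and attribute choices below are simplified illustrations.

```python
from bs4 import BeautifulSoup

# Illustrative mapping from hCard class names to RDFa vCard properties.
HCARD_TO_RDFA = {"fn": "v:fn", "org": "v:org", "tel": "v:tel", "email": "v:email"}

def hcard_to_rdfa(html):
    """Rewrite hCard class-based markup into RDFa attributes in place,
    leaving the rendered HTML untouched."""
    soup = BeautifulSoup(html, "html.parser")
    for card in soup.find_all(class_="vcard"):
        # Type the enclosing element instead of relying on the class name.
        card["xmlns:v"] = "http://www.w3.org/2006/vcard/ns#"
        card["typeof"] = "v:VCard"
        for cls, prop in HCARD_TO_RDFA.items():
            for el in card.find_all(class_=cls):
                el["property"] = prop  # structured data now lives in RDFa
    return str(soup)

print(hcard_to_rdfa(
    '<div class="vcard"><span class="fn">Ada Lovelace</span>'
    '<span class="org">Analytical Engines Ltd</span></div>'))
```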

9.
This volume contains selected papers from the proceedings of the workshop on Uniform Approaches to Graphical Process Specification Techniques (UNIGRA'03). The workshop was held in Warsaw, Poland, on April 5 and 6, 2003, as a satellite event of the sixth European Joint Conferences on Theory and Practice of Software (ETAPS 2003). It continues the UNIGRA workshop of 2001, which was a successful satellite event of ETAPS 2001.
Workshop Objectives
In view of the increasing number of divergent formalisms, the main idea of the UNIGRA workshops is to bring together people working especially in the following three areas:
• Low-Level and High-Level Petri Nets
• Graph Transformation and High-Level Replacement Systems
• Visual Modeling Techniques including UML
In each of these areas there is a large variety of different approaches; however, first attempts at uniform approaches have already been made. In line with this idea, and in order to further stimulate research in this important area, this volume presents some uniform approaches and further introduces unifying and comparative studies across the borders of these three and related areas.
Workshop Program
In the first part, unifying approaches for low-level and high-level Petri nets are proposed.
The contribution by Ehrig shows how the notions of occurrence net and process can be generalized from low-level to high-level Petri nets, and studies the behavior and instantiations of this new view of processes for high-level nets.
In his overview of new developments in the area of Petri net transformations for Software Engineering, Urbášek presents recent work on net model transformations and net class transformations. Both kinds of transformations are studied with regard to the preservation of system properties such as safety properties or liveness. The formalization of Petri net transformations is originally based on the theory of graph transformation.
Padberg considers a case study (the call center of a phone company) which is modeled using Petri net modules for structuring the operational behavior of the system. The notion of Petri net modules was obtained by transferring the concepts of algebraic module specifications to the modeling of component-based systems by Petri nets.
Desel, Juhás and Lorenz deal with the semantics of place/transition nets. The authors relate the process semantics based on partial orders (individual token semantics) to the collective token semantics by defining partial orders associated with process terms of place/transition nets.
In the second part, concerning graph transformation and high-level replacement systems, new aspects of component modeling and applications of graph transformation techniques are discussed.
In their contribution on components for algebra transformation systems, Ehrig and Orejas define a component transformation semantics in terms of the semantics of the specifications included in the components. The formal basis underlying the instantiation of their generic component framework is given by algebra transformation systems and high-level replacement rules.
An application of the formal unifying framework of distributed transformation units is presented by Kuske and Knirsch. The authors illustrate how different features of agent systems can be modeled by distributed graph transformation systems in a uniform way.
Another application of graph rewriting, presented by Van Eetvelde and Janssens, is the modeling of refactoring operations for programs. The authors propose a hierarchical graph representation for programs to facilitate the study of the effects of refactoring operations at class level.
The third part contains contributions focusing on unifying concepts for visual modeling techniques including UML.
Minas describes a graphical specification tool for DIAGEN, a diagram editor generator based on hypergraph transformation. The specification tool simplifies the specification and generation of diagram editors; it uses an XML-based specification language and comes with a generic XML editor.
In his contribution on dynamic aspects of visual modeling languages, Bottoni proposes an approach to the definition of the syntax and semantics of visual languages based on a notion of transitions that produce and consume resources. Abstract meta-models for this notion of transition are presented.
An approach to the model-based verification and validation of properties of UML models is presented by Engels, Küster, Heckel and Lohmann. The authors use graph transformation techniques as a meta-language for the translation and analysis of models.
In model-driven architectures, the problem of dealing with multiple models arises. Kent and Smith focus in their contribution on bidirectional mappings between models for software requirements and models for software design, as a basis for tools that check model traceability and consistency.
Program Committee
The following program committee of UNIGRA'03 gave valuable scientific support:
• Hartmut Ehrig (TU Berlin, Germany) [chair]
• Roswitha Bardohl (TU Berlin, Germany) [co-chair]
• Luciano Baresi (University of Milano, Italy)
• Paolo Bottoni (University of Pisa, Italy)
• Claudia Ermel (TU Berlin, Germany)
• Reiko Heckel (University of Paderborn, Germany)
• Dirk Janssens (University of Antwerp, Belgium)
• Stuart Kent (University of Kent, Great Britain)
• Hans-Jörg Kreowski (University of Bremen, Germany)
• Fernando Orejas (University of Catalunya, Spain)
• Julia Padberg (University of Bremen, Germany)
• Grzegorz Rozenberg (University of Leiden, The Netherlands)
Acknowledgement
This workshop is supported by the European research training network SegraVis and by the steering committee of the International Conference on Graph Transformation (ICGT).
June 2003, Roswitha Bardohl and Hartmut Ehrig

10.
As digital interfaces increasingly mediate our access to information, the design of these interfaces becomes increasingly important. Designing digital interfaces requires writers to make rhetorical choices that are sometimes technical in nature and that often correspond with principles taught in the computer science subfield of human-computer interaction (HCI). We propose that an HCI-informed writing pedagogy can complicate, for both writing and computer science students, the important role audience should play when designing traditional and digital interfaces. Although it is a subtle shift in many ways, this pedagogy seemed to complicate student understanding of the relationship between audience and the texts/interfaces they created: it was not just the “human” (beliefs, attitudes, values, demographics) or the “computer” (the software, hardware, or other types of mediation) that mattered, but rather the “interaction” between the two. First, we explore some of the ways in which writing code and writing prose have merged and paved the way for an HCI-informed writing pedagogy. Next, we examine some parallels between human-computer interaction principles and composition principles. Finally, we refer to assignments, student responses, and anecdotal evidence from our classes where an HCI-informed writing pedagogy drew (or could have drawn) student attention more acutely to various audience-related technical and rhetorical interface design choices.

11.
There has been much talk over the past two decades about commercialization of mobile ad hoc network (MANET) technology. Potential ad hoc network applications with some commercial appeal are now finally emerging, “drafted” by the enormously successful wireless LAN technology. Closely coupled to commercial applications, and critically dependent on commercial ad hoc networks, will be “pervasive computing” applications. Since military and civilian emergency MANETs have been around for over three decades, and since the government has continuously supported MANET research for as many years, it may seem natural to assume that all the research has already been done and that commercial MANETs can be deployed by simply leveraging the military and civilian research results. Unfortunately, there is a catch. Commercial MANETs (and therefore pervasive computing applications) will evolve in a way totally different from their military counterparts. Most importantly, they will start small and will initially be tethered to the Internet. They will be extremely cost-aware. They will also need to cater to a variety of different applications. This is in sharp contrast with large-scale, autonomous, special-purpose, and cost-insensitive military networks. In this paper we review a typical “battlefield” MANET application and contrast it with two emerging commercial MANET scenarios: the urban vehicle grid and the campus network. We compare characteristics and design goals and make the case for new research to help kick off commercial MANETs. In particular, we argue that P2P technology will be critical in the early evolution of commercial MANETs and identify research directions for P2P MANETs.

12.
Defining operational semantics for a process algebra is often based either on labeled transition systems, which account for interaction with a context, or on so-called reduction semantics, where we assume a representation of the whole system and compute unlabeled reduction transitions (leading to a distribution over states in the probabilistic case). In this paper we consider mixed models, with states where the system is still open (towards interaction with a context) and states where the system is already closed. The idea is that (open) parts of a system “P” can be closed via an operator “PG” that turns already synchronized actions whose “handle” is specified inside “G” into prioritized reduction transitions (and, therefore, turns states performing them into closed states). We show that the operator “PG” can be used to express multi-level priorities and external probabilistic choices (by assigning weights to handles inside G), and that, by considering reduction transitions as the only unobservable τ transitions, the proposed technique is compatible, for process algebras with general recursion, with both standard (probabilistic) observational congruence and a notion of equivalence that aggregates reduction transitions in a (much more aggregating) trace-based manner. We also observe that the trace-based aggregated transition system can be obtained directly in operational semantics, and we present this “aggregating” semantics. Finally, we discuss how the open/closed approach can also be used to express discrete and continuous (exponential probabilistic) time, and we show that, in such timed contexts, the trace-based equivalence can aggregate more than traditional lumping-based equivalences over Markov chains.

13.
Multi-stream interactive systems can be seen as “hidden adversary” systems (HAS), in which the observable behaviour on any interaction channel is affected by interactions happening on other channels. One way of modelling a HAS is as a multi-process I/O automaton, where each interacting process appears as a token in a shared state space. Constraints in the state space specify how the dynamics of one process affect other processes. We define the “liveness criterion” of each process as the end objective to be achieved by that process. The problem for each process is then to achieve this objective in the face of unforeseen interferences from other processes. In an earlier paper, it was proposed that this uncertainty can be mitigated by collaboration among the disparate processes, and two types of collaboration philosophy were suggested: altruistic collaboration and pragmatic collaboration. This paper addresses the HAS validation problem where processes collaborate altruistically.

14.
The Model-Driven Engineering (MDE) paradigm is aimed at raising the abstraction level of Software Engineering approaches through the systematic use of models as primary artifacts, not only in software design and development, but also to understand, interact with, configure, and modify the runtime behavior of software. It tries to overcome the wall between the documentation and the real state of the implementation. To that end, our long-term goal is to reach a higher degree of interoperability among available meta-modeling technologies through bridges among technological spaces (TS bridges). The proposed system provides several ATL (ATLAS Transformation Language) transformations that enable measuring operations to be applied over ATL transformation models and rules, and the generation of different complementary end-user models, such as SVG charts and (X)HTML reports. For this work, we have evaluated a set of meta-modeling TS bridges among UML, MOF, Ecore, KM3, and Microsoft DSL Tools. The results provide quantitative measurements of the declarative and imperative constructs of these transformations, as well as relative quality factors. In addition, all the top-level results extracted from the measurement of these TS bridges are merged into a single model in order to assist in a comparative study among them. This comparative study suggests that it is feasible to apply automatic transformations over transformation models, i.e., meta-transformations. In this regard, there are many open research trends towards the complete management, validation, optimization, and inference of TS bridges between complementary meta-modeling technologies.

15.
Numerous models of concurrency have been considered in the framework of automata. Among the more interesting are classical nondeterminism and pure concurrency (the two facets of alternation), and the bounded concurrency model. Bounded concurrency was previously considered to be similar to nondeterminism and pure concurrency with respect to the succinctness of automata augmented with these features. In this paper we show that, when viewed more broadly, the power (of succinctness) of bounded concurrency is in fact most similar to that of alternation. Our contribution is that, just as nondeterminism and pure concurrency are “complement equivalent,” bounded concurrency and alternation are “reverse equivalent” over finite automata. Reverse equivalence is expressed by the existence of polynomial transformations, in both directions, between bounded concurrency and alternation, where each automaton accepts the reverse of the language accepted by the other. It follows that bounded concurrency is double-exponentially more succinct than DFAs with respect to the reverse, while alternation only saves one exponent; this is as opposed to the direct case, where alternation saves two exponents and bounded concurrency saves only one. An immediate corollary is that for languages over a one-letter alphabet, bounded concurrency and alternation are equivalent. We complete the picture of succinctness results for these languages by considering the different combinations of the concurrency models, using additional lower bounds.
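In symbols, the succinctness gaps stated above can be summarized as follows for an automaton of size n (polynomial factors suppressed; this display paraphrases the abstract, it does not quote the paper's theorems):

```latex
\begin{aligned}
\text{direct case } (L):\quad
  & \text{alternation} \to \text{DFA: } 2^{2^{n}}, \qquad
    \text{bounded concurrency} \to \text{DFA: } 2^{n};\\
\text{reverse case } (L^{R}):\quad
  & \text{bounded concurrency} \to \text{DFA: } 2^{2^{n}}, \qquad
    \text{alternation} \to \text{DFA: } 2^{n}.
\end{aligned}
```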

16.
17.
In this paper, processes specifiable over a non-uniform language are considered. The language contains constants for a set of atomic actions and constructs for alternative and sequential composition. Furthermore, it provides a mechanism for specifying processes recursively (including nested recursion). We consider processes as having a state: atomic actions are specified in terms of observable behaviour (relative to initial states) and state transformations. Any process with some initial state can be associated with a transition system representing all possible courses of execution; this leads to an operational semantics in the style of Plotkin. The partial correctness assertion {α} p {β} expresses that for any transition system associated with the process p and having some initial state satisfying α, its final states representing successful execution satisfy β. A logic in the style of Hoare, containing a proof system for deriving partial correctness assertions, is presented. This proof system is sound and relatively complete, so any partial correctness assertion can be evaluated by investigating its derivability. A short discussion is included about extending the process language with “guarded recursion”. It appears that such an extension violates the completeness of the Hoare logic. This reveals a remarkable property of Scott's induction rule in the context of non-determinism: only regular recursion allows a completeness result.
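The abstract does not reproduce the proof system itself; the following standard partial-correctness rules for sequential and alternative composition, written here with ; and +, merely illustrate the style of logic meant:

```latex
\frac{\{\alpha\}\; p \;\{\gamma\} \qquad \{\gamma\}\; q \;\{\beta\}}
     {\{\alpha\}\; p \,;\, q \;\{\beta\}}
\qquad\qquad
\frac{\{\alpha\}\; p \;\{\beta\} \qquad \{\alpha\}\; q \;\{\beta\}}
     {\{\alpha\}\; p + q \;\{\beta\}}
```

The alternative-composition rule shows why partial correctness fits nondeterminism well: every possible branch must establish β, regardless of which branch execution takes.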

18.
A regularization method for cheap periodic control problems is designed. The periodic problem for singularly perturbed Riccati matrix differential equations is reduced to an extended system of singularly perturbed initial-value problems. This system admits the application of a geometric approach based on the use of integral manifolds and, consequently, dimensional reduction. The solution is asymptotically expanded in fractional powers of a small parameter. An example on the control of periodic oscillations of a mechanical system is given.
Translated from Avtomatika i Telemekhanika, No. 6, 2005, pp. 59–73. Original Russian text Copyright © 2005 by Smetannikova, Sobolev. This work was supported in part by the Russian Foundation for Basic Research, project no. 04-01-96515, the Presidium of the Russian Academy of Sciences, project no. 19, and the Boole Centre for Research in Informatics, UCC, Cork, Ireland, under the “Basic Research and Higher Education” Program, CRDF project no. SA-014-02.
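The abstract does not display the equations; as a hedged illustration only, a singularly perturbed periodic Riccati matrix differential equation of the general kind discussed has the form

```latex
\varepsilon\,\dot{P}(t) \;=\; -\,P(t)A(t) \;-\; A^{\top}(t)P(t) \;+\; P(t)\,S(t)\,P(t) \;-\; Q(t),
\qquad P(t+T) \;=\; P(t),
```

with T-periodic coefficient matrices and a small parameter ε > 0 arising from the cheapness of the control (e.g., S(t) = B(t)R^{-1}B^{\top}(t) with a small control weight); the paper's exact formulation may differ.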

19.
A given polynomial of degree less than or equal to n naturally “blossoms” into a function of n variables called its blossom. Considered as a polynomial of degree less than or equal to n+1, it “blossoms” into a “new” blossom, which is a function of n+1 variables. A classical formula expresses any value of this new blossom as a strictly convex combination of n+1 values of the initial one. We establish a similar formula for Chebyshevian blossoms.
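For reference, the classical formula alluded to can be written as follows (standard notation, not quoted from the paper): if b is the blossom of the polynomial regarded as having degree at most n, and b̂ its blossom regarded as having degree at most n+1, then

```latex
\hat{b}(u_1,\dots,u_{n+1})
  \;=\; \frac{1}{n+1}\,\sum_{i=1}^{n+1} b\bigl(u_1,\dots,u_{i-1},\,u_{i+1},\dots,u_{n+1}\bigr),
```

an average of n+1 values of the initial blossom with equal weights 1/(n+1), hence a strictly convex combination.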

20.
The problem of finding so-called pirates via minimum-Hamming-distance decoding is considered for the simplest case of two pirates. We prove that for all q ≥ 3 there exist “good” q-ary codes capable of finding at least one pirate by minimum-distance decoding in the Hamming metric.
Translated from Problemy Peredachi Informatsii, No. 2, 2005, pp. 123–127. Original Russian text Copyright © 2005 by Kabatiansky. Supported in part by the Russian Foundation for Basic Research, project no. 03-01-00098.
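In the standard formulation of this setting (the notation here is illustrative, not quoted from the paper), two pirates holding codewords c¹, c² of a q-ary code C ⊆ Qⁿ can forge any word in their descendant set, and the tracing guarantee can be phrased as:

```latex
D(c^{1},c^{2}) \;=\; \bigl\{\, y \in Q^{n} : y_{i} \in \{c^{1}_{i},\, c^{2}_{i}\}\ \text{for all } i \,\bigr\},
\qquad
\Bigl(\operatorname*{arg\,min}_{c \in C}\, d_{H}(y,c)\Bigr) \cap \{c^{1},c^{2}\} \;\neq\; \varnothing
\quad \text{for every } y \in D(c^{1},c^{2}),
```

where d_H denotes Hamming distance: whatever word the pirates forge, at least one nearest codeword is a pirate.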
