Similar Articles
20 similar articles found.
1.
Visual languages (VLs) facilitate software development by not only supporting communication and abstraction, but also by generating various artifacts such as code and reports from the same high-level specification. VLs are thus often translated to other formalisms, in most cases with bidirectionality as a crucial requirement to, e.g., support re-engineering of software systems. Triple Graph Grammars (TGGs) are a rule-based language to specify consistency relations between two (visual) languages from which bidirectional translators are automatically derived. TGGs are formally founded but are also limited in expressiveness, i.e., not all types of consistency can be specified with TGGs. In particular, 1-to-n correspondence between elements depending on concrete input models cannot be described. In other words, a universal quantifier over certain parts of a TGG rule is missing to generalize consistency to arbitrary size. To overcome this, we transfer the well-known multi-amalgamation concept from algebraic graph transformation to TGGs, allowing us to mark certain parts of rules as repeated depending on the translation context. Our main contribution is to derive TGG-based translators that comply with this extension. Furthermore, we identify bad smells in the usage of multi-amalgamation in TGGs, prove that multi-amalgamation increases the expressiveness of TGGs, and evaluate our tool support.

2.
The Model Driven Architecture (MDA) is an approach to develop software based on different models. There are separate models for the business logic and for platform specific details. Moreover, code can be generated automatically from these models. This makes transformations a core technology for MDA and for model-based software engineering approaches in general. Query/View/Transformation (QVT) is the transformation technology recently proposed for this purpose by the OMG. Triple Graph Grammars (TGGs) are another transformation technology proposed in the mid-nineties, used for example in the FUJABA CASE tool. In contrast to many other transformation technologies, both QVT and TGGs declaratively define the relation between two models. With this definition, a transformation engine can execute a transformation in either direction and, based on the same definition, can also propagate changes from one model to the other. In this paper, we compare the concepts of the declarative languages of QVT and TGGs. It turns out that TGGs and declarative QVT have many concepts in common. In fact, QVT-Core can be mapped to TGGs. We show that QVT-Core can be implemented by transforming QVT-Core mappings to TGG rules, which can then be executed by a TGG transformation engine that performs the actual QVT transformation. Furthermore, we discuss an approach for mapping QVT-Relations to TGGs. Based on the semantics of TGGs, we clarify semantic gaps that we identified in the declarative languages of QVT and, furthermore, we show how TGGs can benefit from the concepts of QVT.

3.
Multisets generalize sets by allowing elements to have repetitions. In this paper, we study from a formal perspective representations of multiset variables, and the consistency and propagation of constraints involving multiset variables. These help us model problems more naturally and can, for example, prevent introducing unnecessary symmetries into a model. We identify a number of different representations for multiset variables, compare them in terms of effectiveness and efficiency, and propose inference rules to enforce bounds consistency for the representations. In addition, we propose to exploit the variety of a multiset (the number of distinct elements in it) to improve modeling expressiveness and further enhance constraint propagation. We derive a number of inference rules involving the varieties of multiset variables. The rules let varieties interact with the traditional components of multiset variables (such as cardinalities) to obtain stronger propagation. We also demonstrate how to apply the rules to perform variety reasoning on some common multiset constraints. Experimental results show that performing variety reasoning on top of cardinality reasoning can prune more of the search space and achieve better runtime in solving some multiset CSPs.
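The flavor of such bounds reasoning can be sketched in a few lines of Python. This is a simplified illustration under an assumed occurrence-bounds representation; the names and the exact inference rules are not taken from the paper:

```python
def tighten(occ_lb, occ_ub, card, variety):
    """One round of bounds reasoning for a multiset variable.

    occ_lb / occ_ub map each element to its lower/upper occurrence bound;
    card and variety are (lo, hi) intervals.  Illustrative rules only.
    """
    # Cardinality is the total number of occurrences.
    card_lo = max(card[0], sum(occ_lb.values()))
    card_hi = min(card[1], sum(occ_ub.values()))
    # Variety counts distinct elements: those forced in vs. those allowed.
    var_lo = max(variety[0], sum(1 for e in occ_lb if occ_lb[e] > 0))
    var_hi = min(variety[1], sum(1 for e in occ_ub if occ_ub[e] > 0))
    # Variety can never exceed cardinality.
    var_hi = min(var_hi, card_hi)
    if card_lo > card_hi or var_lo > var_hi:
        raise ValueError("inconsistent multiset variable")
    return (card_lo, card_hi), (var_lo, var_hi)

# {a, a, b?}: a occurs exactly twice, b at most once.
print(tighten({"a": 2, "b": 0}, {"a": 2, "b": 1}, (0, 5), (0, 5)))
# -> ((2, 3), (1, 2))
```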

4.
Building on previous work [15,8], this paper describes two syntactic ways of defining ‘well-behaved’ operational semantics for timed processes. In both cases, the semantic rules are derived from abstract operational rules for behaviour comonads and thus ensure congruence results. The first of them, a light-weight attempt using schematic rules, is shown to be sound, i.e., to indeed induce abstract rules as introduced in [8]. Then a second format, based on a new and very general kind of abstract rules, comonadic SOS (CSOS), is presented, which uses meta rules and is also complete, i.e., it characterises all possible CSOS rules for timed processes.

5.
The shapes of our cities change very frequently. These changes have to be reflected in the data sets representing urban objects. However, it must be ensured that frequent updates do not affect geometric-topological consistency. This important aspect of spatial data quality guarantees essential assumptions on which users and applications of 3D city models rely: namely, that objects do not mutually intersect, overlap or penetrate, or completely cover one another. This raises the question of how to guarantee that geometric-topological consistency is preserved when data sets are updated. Otherwise, there is a risk that plans and decisions based on these data sets are erroneous and that the tremendous efforts spent on data acquisition and updates are wasted. In this paper, we solve this problem by presenting efficient transaction rules for updating 3D city models. These rules guarantee that geometric-topological consistency is preserved (Safety) and allow for the generation of arbitrary consistent 3D city models (Completeness). Safety as well as completeness is proven with mathematical rigor, guaranteeing the reliability of our method. Our method is applicable to 3D city models which define, besides the terrain surface, complex spatial objects like buildings with rooms and storeys as interior structures, as well as bridges and tunnels. Those objects are represented as aggregations of solids, and their surfaces are complex from a topological point of view. 3D GIS models like CityGML, which are widely used to represent cities, provide the means to define semantics, geometry and topology, but do not address the problem of maintaining consistency. Hence, our approach complements CityGML.
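As a toy illustration of check-before-commit transaction rules, the sketch below rejects an update that would make two objects' axis-aligned bounding boxes penetrate each other. This is only a crude, hypothetical stand-in for the paper's actual geometric-topological consistency rules, which operate on full solid geometry:

```python
def boxes_disjoint(a, b):
    # a, b: ((xmin, ymin, zmin), (xmax, ymax, zmax)); touching is allowed,
    # penetration is not, so separation is tested with <=.
    return any(a[1][i] <= b[0][i] or b[1][i] <= a[0][i] for i in range(3))

def update_city_model(model, obj_id, new_box):
    # Toy transaction rule: commit the update only if the new geometry
    # stays disjoint from every other object in the model.
    for other_id, box in model.items():
        if other_id != obj_id and not boxes_disjoint(new_box, box):
            raise ValueError(f"rejected: {obj_id} would penetrate {other_id}")
    model[obj_id] = new_box

model = {"building_1": ((0, 0, 0), (10, 10, 30))}
update_city_model(model, "building_2", ((10, 0, 0), (20, 10, 25)))  # touching: ok
```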

6.
Towards certain fixes with editing rules and master data
A variety of integrity constraints have been studied for data cleaning. While these constraints can detect the presence of errors, they fall short of guiding us to correct the errors. Indeed, data repairing based on these constraints may not find certain fixes that are guaranteed correct, and worse still, may even introduce new errors when attempting to repair the data. We propose a method for finding certain fixes, based on master data, a notion of certain regions, and a class of editing rules. A certain region is a set of attributes that are assured correct by the users. Given a certain region and master data, editing rules tell us what attributes to fix and how to update them. We show how the method can be used in data monitoring and enrichment. We also develop techniques for reasoning about editing rules, to decide whether they lead to a unique fix and whether they are able to fix all the attributes in a tuple, relative to master data and a certain region. Furthermore, we present a framework and an algorithm to find certain fixes, by interacting with the users to ensure that one of the certain regions is correct. We experimentally verify the effectiveness and scalability of the algorithm.
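A schematic sketch of how an editing rule might use master data and a certain region follows; the relation, attribute names, and rule format here are hypothetical, not the paper's notation:

```python
def apply_editing_rule(tuple_, master, match_on, fix, certain):
    """Apply one editing rule: if the tuple agrees with a master tuple on
    the match_on attributes, and those attributes lie in the certain
    region (asserted correct by the user), overwrite the fix attributes
    with the master values.  Schematic illustration only.
    """
    if not set(match_on) <= certain:
        return tuple_                      # rule not safe to apply
    for m in master:
        if all(tuple_[a] == m[a] for a in match_on):
            fixed = dict(tuple_)
            for a in fix:
                fixed[a] = m[a]            # certain fix taken from master data
            return fixed
    return tuple_

master = [{"zip": "10001", "city": "New York", "state": "NY"}]
t = {"zip": "10001", "city": "NY City", "state": "NJ"}
print(apply_editing_rule(t, master, ["zip"], ["city", "state"], {"zip"}))
# -> {'zip': '10001', 'city': 'New York', 'state': 'NY'}
```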

7.
In one-sided forbidding grammars, the set of rules is divided into the set of left forbidding rules and the set of right forbidding rules. A left forbidding rule can rewrite a non-terminal if each of its forbidding symbols is absent to the left of the rewritten symbol in the current sentential form, while a right forbidding rule is applied analogously, except that this absence is verified to the right. Apart from this, they work like ordinary forbidding grammars. As its main result, this paper proves that one-sided forbidding grammars are equivalent to selective substitution grammars. This equivalence is established in terms of grammars with and without erasing rules. Furthermore, this paper proves that one-sided forbidding grammars in which the set of left forbidding rules coincides with the set of right forbidding rules characterize the family of context-free languages. In the conclusion, the significance of the achieved results is discussed.
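The applicability condition of a left forbidding rule is easy to make concrete. The Python sketch below uses an illustrative rule format of its own; a right forbidding rule would check the suffix sentential[i + 1:] instead:

```python
def apply_left_forbidding(sentential, rule):
    # rule = (nonterminal, replacement, forbidden_symbols): the nonterminal
    # may be rewritten only where no forbidden symbol occurs to its left.
    nt, replacement, forbidden = rule
    for i, sym in enumerate(sentential):
        if sym == nt and not forbidden & set(sentential[:i]):
            return sentential[:i] + replacement + sentential[i + 1:]
    return None  # rule not applicable anywhere

rule = ("A", list("ab"), {"B"})  # hypothetical rule A -> ab, forbidding B on the left
print(apply_left_forbidding(list("AAB"), rule))  # ['a', 'b', 'A', 'B']
print(apply_left_forbidding(list("BAA"), rule))  # None: a B lies left of every A
```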

8.
The model-driven software development paradigm requires that appropriate model transformations are applicable in different stages of the development process. The transformations have to consistently propagate changes between the different involved models and thus ensure proper model synchronization. However, most approaches today do not fully support the requirements for model synchronization and focus only on classical one-way batch-oriented transformations. In this paper, we present our approach for an incremental model transformation which supports model synchronization. Our approach employs the visual, formal, and bidirectional transformation technique of triple graph grammars. Using this declarative specification formalism, we focus on the efficient execution of the transformation rules and on how to achieve an incremental model transformation for synchronization purposes. We present an evaluation of our approach and demonstrate that, owing to the speedup of incremental processing in the average case, even larger models can be handled.

9.
Knowledge capturing methodology in process planning
In process planning, a proper methodology for capturing knowledge is essential for constructing a knowledge base that can be maintained and shared. A knowledge base should not merely be a set of rules, but a framework for process planning that can be controlled and customized by rules. For the construction of a knowledge base, identifying the types of knowledge elements to be included is a prerequisite. To identify the knowledge elements, this paper employs a three-phase modeling methodology consisting of three sub-models: an object model, a functional model and a dynamic model. By making use of the three-phase modeling methodology, four knowledge elements for process planning are derived: facts (from the object model), constraints (from the functional model), and way of thinking and rules (from the dynamic model). Facts correspond to the involved data objects, and constraints to the technological constraints of process planning. The way of thinking is a logical procedure for quickly decreasing the solution space, and rules are key parameters to control the way of thinking. The proposed methodology is applied to the process planning of hole making.
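The four knowledge elements might be organized roughly as below: a minimal sketch with invented names, in which constraints shrink the solution space (the "way of thinking") and rules supply the controlling parameters:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessPlanningKB:
    facts: dict = field(default_factory=dict)        # data objects (object model)
    constraints: list = field(default_factory=list)  # technological constraints (functional model)
    rules: dict = field(default_factory=dict)        # controlling parameters (dynamic model)

    def plan(self, feature, candidates):
        # "Way of thinking": first shrink the solution space with the
        # constraints, then rank what remains using rule-supplied parameters.
        feasible = [c for c in candidates if all(ok(feature, c) for ok in self.constraints)]
        return sorted(feasible, key=self.rules.get("rank", lambda c: 0))

# Hole-making example with hypothetical tool data.
kb = ProcessPlanningKB(
    constraints=[lambda hole, tool: tool["max_depth"] >= hole["depth"]],
    rules={"rank": lambda tool: tool["cost"]},
)
tools = [{"name": "drill_A", "max_depth": 50, "cost": 2},
         {"name": "drill_B", "max_depth": 20, "cost": 1}]
print(kb.plan({"depth": 30}, tools))  # only drill_A can reach 30 mm
```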

10.
There is an increasing demand for a new type of mathematical systems theory which would include the treatment of non-trivial synchronization problems and thus could serve as a tool for the design and implementation of information systems. Such systems can be characterized as dynamical systems consisting of many concurrently working information processing elements, e.g. computers and/or human beings. As a basis for studying these information systems, a better understanding of the fundamental characteristics of information flow is required. One such characteristic is the simple synchronization of the flow of messages. A mathematical model for this synchronization is a directed graph along the paths of which tokens (objects with no properties) can move. Transition of tokens across a vertex of a path is effected by elementary events. An event may occur at a vertex whenever there is at least one token on each incoming edge of this vertex. With each occurrence of an event, the number of tokens on each incoming edge is decreased by one, and on each outgoing edge is increased by one. These graphs shall be called synchronization graphs. The mathematical properties of synchronization graphs are studied in this paper. The discussion centers on necessary and sufficient conditions for liveness (exclusion of deadlocks) and safety (observance of capacity limits). The relationship between synchronization graphs and linear algebra is demonstrated and used both to obtain theoretical results and to offer practical methods for systems analysis.
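The firing rule just described translates directly into code. A minimal Python sketch, where the graph encoding and names are illustrative rather than the paper's:

```python
from collections import defaultdict

class SynchronizationGraph:
    def __init__(self, edges):
        # edges: iterable of (src, dst, initial_tokens)
        self.tokens = {}                     # (src, dst) -> token count
        self.incoming = defaultdict(list)    # vertex -> incoming edges
        self.outgoing = defaultdict(list)    # vertex -> outgoing edges
        for src, dst, n in edges:
            self.tokens[(src, dst)] = n
            self.incoming[dst].append((src, dst))
            self.outgoing[src].append((src, dst))

    def enabled(self, v):
        # An event at vertex v may occur iff every incoming edge carries a token.
        return all(self.tokens[e] >= 1 for e in self.incoming[v])

    def fire(self, v):
        # Occurrence of the event: decrement every incoming edge,
        # increment every outgoing edge by one token.
        assert self.enabled(v), f"event at {v} is not enabled"
        for e in self.incoming[v]:
            self.tokens[e] -= 1
        for e in self.outgoing[v]:
            self.tokens[e] += 1

# Two vertices in a cycle: liveness requires at least one token on the cycle.
g = SynchronizationGraph([("a", "b", 1), ("b", "a", 0)])
g.fire("b")            # consumes the token on (a, b), produces one on (b, a)
print(g.enabled("a"))  # True
```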

11.
Modern domain-specific modeling (DSM) frameworks provide refined techniques for developing new languages based on the clear separation of the conceptual elements of the language (called abstract syntax) and their graphical visual representation (called concrete syntax). This separation is usually achieved by recording traceability information between the abstract and concrete syntax using mapping models. However, state-of-the-art DSM frameworks impose severe restrictions on traceability links between elements of the abstract syntax and the concrete syntax. In the current paper, we propose a mapping model which allows defining arbitrarily complex mappings between elements of the abstract and concrete syntax. Moreover, we demonstrate how live model transformations can complement mapping models in providing bidirectional synchronization and implicit traceability between models of the abstract and the concrete syntax. In addition, we introduce a novel architecture for DSM environments which enables these concepts, and provide an overview of the tool support.

12.
Fundamental properties of model transformations based on triple graph grammars (TGGs) have been studied extensively, including syntactical correctness, completeness, termination and functional behavior. Up to now, however, it has been an open problem how domain-specific properties that are valid for a source model can be preserved along model transformations such that the transformed properties are valid for the derived target model. This question shows up in enterprise modeling. Here, modeling activities related to different domains are handled by different parties, and their models need to be consistent and integrated into one holistic enterprise model later on. So, support for decentralized modeling processes is needed. One technical aspect of the needed support in this case is the (bidirectional) propagation of constraints, because that enables one party to understand and check the constraints of another party. Therefore, we analyze in the framework of TGGs how to propagate constraints from a source model to an integrated model and, afterwards, to a target model, such that, whenever the source model satisfies the source constraint, the integrated and target models also satisfy the corresponding integrated and target constraints. In our main new results, we show under which conditions this is possible.

13.
In recent years, there has been a growing tendency to support high-level synchronization operations, such as read-modify-write, FIFO queues and stacks, as part of the programmer's shared memory model. This paper examines the problem of implementing hybrid consistency with high-level synchronization operations. It is shown that for any implementation of weak consistency, the time required to execute a read-modify-write, a dequeue or a pop operation is Ω(d), where d is the network delay. Following this, an efficient and simple algorithm for providing hybrid consistency that supports most types of high-level synchronization operations as well as weak read and weak write operations is presented. Weak read and weak write operations are executed instantaneously, while the time required to execute strong operations is O(d). This is within a constant factor of the lower bounds for most of the commonly used types of operations. Received: August 1994 / Accepted: June 1995

14.
Interest in the Web services (WS) composition (WSC) paradigm is increasing tremendously. A real shift in distributed computing history is expected to occur when the dream of implementing Service-Oriented Architecture (SOA) is realized. However, there is a long way to go to achieve such an ambitious goal. In this paper, we support the idea that, when tackling the WSC issue, the earlier the inevitability of failures is recognized and proper failure-handling mechanisms are defined, starting from the very early stage of the composite WS (CWS) specification, the greater the chances of achieving a significant gain in dependability. To formalize this vision, we present the FENECIA (Failure Endurable Nested-transaction based Execution of Composite Web services with Incorporated state Analysis) framework. Our framework approaches the WSC issue from different points of view to guarantee a high level of dependability. In particular, it aims at being simultaneously a failure-handling-devoted CWS specification, execution, and quality of service (QoS) assessment approach. In the first section of our framework, we focus on answering the need for a specification model tailored for the WS architecture. To this end, we introduce WS-SAGAS, a new transaction model. WS-SAGAS introduces key concepts that are not part of the WS architecture pillars, namely, arbitrary nesting, state, vitality degree, and compensation, to specify failure-endurable CWS as a hierarchy of recursively nested transactions. In addition, to define the CWS execution semantics without suffering from the hindrance of an XML-based notation, we present a textual notation that describes a WSC in terms of definition rules, composability rules, and ordering rules, and we introduce graphical and formal notations. These rules provide the solid foundation needed to formulate the execution semantics of a CWS in terms of execution correctness verification dependencies. To ensure dependable execution of the CWS, we present in the second section of FENECIA our architecture THROWS, in which the execution control of the resulting CWS is distributed among engines, discovered dynamically, that communicate in a peer-to-peer fashion. A dependable execution is guaranteed in THROWS by keeping track of the execution progress of a CWS and by enforcing forward and backward recovery. In the third section of our approach, we concentrate on showing how taking failures into consideration yields more accurate CWS QoS estimations. We propose a model that assesses several QoS properties of CWS, which are specified as WS-SAGAS transactions and executed in THROWS. We validate our proposal and show its feasibility and broad applicability by describing an implemented prototype and a case study.

15.
This paper reports the results of a controlled experiment undertaken to investigate whether the methodology support offered by a CASE tool has an impact on the tool's acceptance and actual use by individuals. Subjects used the process modelling tool SPEARMINT to complete a partial process model and remove all inconsistencies. Half the subjects used a variant of SPEARMINT that corrected consistency violations automatically and silently, whilst the other half used a variant of SPEARMINT that told them about inconsistencies both immediately and persistently but without automatic correction. Measurement of acceptance and prediction of actual use was based on the technology acceptance model, supplemented by beliefs about consistency rules. The impact of the form of automated consistency assurance applied to hierarchical consistency rules was found to be significant at the 0.05 level with a type I error of 0.027, explaining 71.6% of the variance in CASE tool acceptance. However, intention to use, and thus predicted use, was the same for both variants of SPEARMINT, whereas perceived usefulness and perceived ease of use were affected in opposite ways. Internal validity of the findings was threatened by validity and reliability issues related to beliefs about consistency rules. Here, further research is needed to develop valid constructs and reliable scales. Following the experiment, a small survey among experienced users of SPEARMINT found that different forms of automated consistency assurance were preferred depending on individual, consistency-rule, and task characteristics. Based on these findings, it is recommended that vendors provide CASE tools with adaptable methodology support, which allow their users to fit automated consistency assurance to the task at hand. This work originates from the author's time at the Fraunhofer Institute for Experimental Software Engineering (IESE), Sauerwiesen 6, 67661 Kaiserslautern, Germany.

16.
17.
In this paper, a new effective method is proposed to find class association rules (CARs), to obtain useful class association rules (UCARs) by removing spurious class association rules (SCARs), and to generate exception class association rules (ECARs) for each UCAR. CAR mining, which integrates the techniques of classification and association, has attracted great interest recently. However, it has two drawbacks: one is that a large portion of CARs are spurious and may mislead users; the other is that some important ECARs are difficult to find using traditional data mining techniques. The method introduced in this paper aims to overcome these flaws. With our approach, a user can retrieve correct information from UCARs and learn the influence of different conditions by checking the corresponding ECARs. Experimental results demonstrate the effectiveness of the proposed approach.

18.
In generalized one-sided forbidding grammars (GOFGs), each context-free rule has an associated finite set of forbidding strings, and the set of rules is divided into the sets of left and right forbidding rules. A left forbidding rule can rewrite a nonterminal if each of its forbidding strings is absent to the left of the rewritten symbol. A right forbidding rule is applied analogously. Apart from this, they work like any generalized forbidding grammar. This paper proves the following three results. (1) GOFGs where each forbidding string consists of at most two symbols characterize the family of recursively enumerable languages. (2) GOFGs where the rules in one of the two sets of rules contain only ordinary context-free rules without any forbidding strings characterize the family of context-free languages. (3) GOFGs with the set of left forbidding rules coinciding with the set of right forbidding rules characterize the family of context-free languages.

19.
A clustering-based approach for data anonymization
王智慧, 许俭, 汪卫, 施伯乐. 《软件学报》 2010, 21(4): 680-693
To prevent the disclosure of personal privacy, attribute values on the quasi-identifier must be generalized before data are shared, so as to defeat linking attacks and achieve anonymous protection of sensitive attributes. Generalization increases the uncertainty of attribute values and inevitably causes some information loss. Traditional generalization is mostly built on predefined concept hierarchies, which leads to over-generalization and much unnecessary information loss. This paper divides the attributes of the quasi-identifier into two types, ordered and unordered, and gives a more flexible generalization strategy for each. Moreover, by examining how the uncertainty of attribute values changes before and after generalization, the information loss caused by generalization is defined quantitatively. On this basis, the data anonymization problem is transformed into a clustering problem with specific constraints. For the l-diversity model, a clustering-based anonymization method called L-clustering is proposed. The method satisfies the requirement of anonymous protection of sensitive attributes in data sharing, while substantially reducing the information loss incurred by the generalization needed to achieve that protection.
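The distinction between ordered and unordered attributes suggests loss measures of roughly the following shape. This is a simplified sketch: the formulas and data are illustrative, not the paper's exact definitions:

```python
def info_loss_ordered(values, domain_lo, domain_hi):
    # Generalizing ordered values to an interval: loss grows with the
    # interval's width relative to the attribute's domain.
    return (max(values) - min(values)) / (domain_hi - domain_lo)

def info_loss_unordered(values, domain_size):
    # Generalizing unordered values to a set: loss grows with the number
    # of distinct values relative to the domain size.
    return (len(set(values)) - 1) / (domain_size - 1)

# Information loss of one cluster, averaged over its quasi-identifier attributes.
cluster_age = [23, 25, 31]          # ordered attribute, domain [0, 100]
cluster_job = ["nurse", "doctor"]   # unordered attribute, 10 possible jobs
loss = (info_loss_ordered(cluster_age, 0, 100)
        + info_loss_unordered(cluster_job, 10)) / 2
print(round(loss, 3))  # 0.096
```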

20.
We study knowledge-based systems using symbolic many-valued logic, and we focus on the management of knowledge through linguistic concepts characterized by vague terms or labels. In previous papers we proposed a symbolic representation of nuanced statements. In this representation, we interpreted some nuances of natural language as linguistic modifiers and defined them within a multiset context. In this paper, we continue the presentation of our symbolic model and propose new deduction rules dealing with nuanced statements. We limit ourselves to presenting new generalizations of the Modus Ponens rule for nuanced statements.
