Similar Literature
20 similar documents found (search time: 78 ms)
1.
In model-driven software engineering, model transformation plays a key role in automatically generating and updating models. Transformation rules define how source model elements are to be transformed into target model elements. However, defining transformation rules is a complex task, especially in situations where semantic differences or incompleteness allow for alternative interpretations or where models change continuously before and after transformation. This paper proposes constraint-driven modeling, where transformation is used to generate constraints on the target model rather than the target model itself. We evaluated the approach on three case studies that address the above difficulties and other common transformation issues. We also developed a proof-of-concept implementation that demonstrates its feasibility. The implementation suggests that constraint-driven transformation is an efficient and scalable alternative and/or complement to traditional transformation.

2.
This paper introduces a formal approach to constraint-aware model transformation which supports specifying constraints in the definition of transformation rules. These constraints are used to control which structure to create in the target model and which constraints to add to the created structure. The proposed approach is classified under heterogeneous, graph-based and out-place model transformations; and illustrated by applying it to a language translation. It is based on the Diagram Predicate Framework which provides a formalisation of (meta)modelling based on category theory and graph transformation. In particular, the proposed approach uses non-deleting transformation rules that are specified by a joined modelling language which is defined by relating the source and target languages. The relation between source and target languages is formalised by morphisms from their corresponding modelling formalisms into a joined modelling formalism. Furthermore, the application of transformation rules is formalised as a pushout construction and the final target model is obtained by a pullback construction.

3.
4.
We present a phrase-based statistical machine translation approach which uses linguistic analysis in the preprocessing phase. The linguistic analysis includes morphological transformation and syntactic transformation. Since the word-order problem is solved using syntactic transformation, there is no reordering in the decoding phase. For morphological transformation, we use hand-crafted transformational rules. For syntactic transformation, we propose a transformational model based on a probabilistic context-free grammar. This model is trained using a bilingual corpus and a broad-coverage parser of the source language. This approach is applicable to language pairs in which the target language is poor in resources. We considered translation from English to Vietnamese and from English to French. Our experiments showed significant BLEU-score improvements in comparison with Pharaoh, a state-of-the-art phrase-based SMT system.

5.
6.
The QVT Relations (QVT-R) transformation language allows the definition of bidirectional model transformations, which are required in cases where two (or more) models must be kept consistent in the face of changes to either or both. A QVT-R transformation can be used either in checkonly mode, to determine whether a target model is consistent with a given source model, or in enforce mode, to change the target model. A precise understanding of checkonly mode transformations is prerequisite to a precise understanding of enforce mode transformations, and this is the focus of this paper. In order to give semantics to checkonly QVT-R transformations, we need to consider the overall structure of the transformation as given by when and where clauses, and the role of trace classes. In the standard, the semantics of QVT-R are given both directly, and by means of a translation to QVT Core, a language which is intended to be simpler. In this paper, we argue that there are irreconcilable differences between the intended semantics of QVT-R and those of QVT Core, so that no translation from QVT-R to QVT Core can be semantics-preserving, and hence no such translation can be helpful in defining the semantics of QVT-R. Treating QVT-R directly, we propose a simple game-theoretic semantics. We demonstrate its behaviour on examples and show how it can be used to prove an example result comparing two QVT-R transformations. We demonstrate that consistent models may not possess a single trace model whose objects can be read as traceability links in either direction. We briefly discuss the effect of variations in the rules of the game, to elucidate some design choices available to the designers of the QVT-R language.

7.
Software metrics rarely follow a normal distribution. Therefore, software metrics are usually transformed prior to building a defect prediction model. To the best of our knowledge, the impact that the transformation has on cross-project defect prediction models has not been thoroughly explored. A cross-project model is built from one project and applied on another project. In this study, we investigate if cross-project defect prediction is affected by applying different transformations (i.e., log and rank transformations, as well as the Box-Cox transformation). The Box-Cox transformation subsumes log and other power transformations (e.g., square root), but has not been studied in the defect prediction literature. We propose an approach, namely Multiple Transformations (MT), to utilize multiple transformations for cross-project defect prediction. We further propose an enhanced approach MT+ to use the parameter of the Box-Cox transformation to determine the most appropriate training project for each target project. Our experiments are conducted upon three publicly available data sets (i.e., AEEEM, ReLink, and PROMISE). Compared to the random forest model built solely using the log transformation, our MT+ approach improves the F-measure by 7, 59 and 43% for the three data sets, respectively. As a summary, our major contributions are three-fold: 1) conduct an empirical study on the impact that data transformation has on cross-project defect prediction models; 2) propose an approach to utilize the various information retained by applying different transformation methods; and 3) propose an unsupervised approach to select the most appropriate training project for each target project.
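The Box-Cox transformation named in this abstract is a standard power transformation that includes the log transform (lambda = 0) and approximates the square root (lambda = 0.5) as special cases. The sketch below is a minimal numpy-only illustration of fitting its parameter by maximising the profile log-likelihood over a grid; the metric values and the grid-search fitting are illustrative assumptions, not the paper's data or procedure.

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox power transformation for strictly positive data."""
    x = np.asarray(x, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(x)          # lambda = 0 degenerates to the log transform
    return (x**lam - 1.0) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood used to fit the Box-Cox parameter."""
    y = boxcox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(np.var(y)) + (lam - 1.0) * np.sum(np.log(x))

def fit_lambda(x, grid=np.linspace(-2.0, 2.0, 401)):
    """Pick the lambda on the grid that maximises the profile log-likelihood."""
    scores = [boxcox_loglik(x, lam) for lam in grid]
    return grid[int(np.argmax(scores))]

# Illustrative skewed metric values (not from AEEEM/ReLink/PROMISE).
metric = np.array([1, 1, 2, 3, 5, 8, 13, 21, 34, 55], dtype=float)
lam = fit_lambda(metric)
transformed = boxcox(metric, lam)
print(f"fitted lambda: {lam:.2f}")
```

In the MT+ approach described above, it is exactly this fitted lambda parameter that is compared across projects to choose a training project; the sketch only shows how the parameter itself is obtained.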

8.
A major concern in model-driven engineering is how to ensure the quality of the model-transformation mechanisms. One validation method that is commonly used is model transformation testing. When using this method, two important issues need to be addressed: the efficient generation/selection of test cases and the definition of oracle functions that assess the validity of the transformed models. This work is concerned with the latter. We propose a novel oracle function for model transformation testing that relies on the premise that the more a transformation deviates from well-known good transformation examples, the more likely it is erroneous. More precisely, the proposed oracle function compares target test cases with a base of examples that contains good quality transformation traces, and then assigns a risk level to them accordingly. Our approach takes inspiration from the biological metaphor of immune systems, where pathogens are identified by their difference with normal body cells. A significant feature of the approach is that one no longer needs to define an expected model for each test case. Furthermore, the detected faulty candidates are ordered by degree of risk, which helps the tester inspect the results. The validation results on a transformation mechanism used by an industrial partner confirm the effectiveness of our approach.

9.
杨潇  万建成  侯金奎 《计算机工程》2007,33(23):45-47,5
Through an abstract analysis of the syntactic structure and semantic characteristics of model description languages, this paper proposes a model mapping method based on semantic reconstruction. The method builds an abstract target semantic model through inductive analysis; by reconstructing the semantic domain of the source model within the target semantic domain, it establishes the mapping from the source model to the target model with the target semantic model as an intermediary. The method not only provides theoretical guidance for the concrete implementation of model transformation, but also offers a basis for verifying the correctness of mapping relations between models at different abstraction levels. Its application is illustrated with JSF+EJB as the target platform.

10.
We present a general method for transferring skeletons and skinning weights between characters with distinct mesh topologies. Our pipeline takes as inputs a source character rig (consisting of a mesh, a transformation hierarchy of joints, and skinning weights) and a target character mesh. From these inputs, we compute joint locations and orientations that embed the source skeleton in the target mesh, as well as skinning weights to bind the target geometry to the new skeleton. Our method consists of two key steps. We first compute the geometric correspondence between source and target meshes using a semi-automatic method relying on a set of markers. The resulting geometric correspondence is then used to formulate attribute transfer as an energy minimization and filtering problem. We demonstrate our approach on a variety of source and target bipedal characters, varying in mesh topology and morphology. Several examples demonstrate that the target characters behave well when animated with either forward or inverse kinematics. Via these examples, we show that our method preserves subtle artistic variations; spatial relationships between geometry and joints, as well as skinning weight details, are accurately maintained. Our proposed pipeline opens up many exciting possibilities to quickly animate novel characters by reusing existing production assets.

11.
Toward a lexicalized grammar for interlinguas
In this paper we present one aspect of our research on machine translation (MT): capturing the grammatical and computational relation between (i) the interlingua (IL) as defined declaratively in the lexicon and (ii) the IL as defined procedurally by way of algorithms that compose and decompose pivot IL forms. We begin by examining the interlinguas in the lexicons of a variety of current IL-based approaches to MT. This brief survey makes it clear that no consensus exists among MT researchers on the level of representation for defining the IL. In the section that follows, we explore the consequences of this missing formal framework for MT system builders who develop their own lexical-IL entries. The lack of software tools to support rapid IL respecification and testing greatly hampers their ability to modify representations to handle new data and new domains. Our view is that IL-based MT research needs both (a) the formal framework to specify possible IL grammars and (b) the software support tools to implement and test these grammars. With respect to (a), we propose adopting a lexicalized grammar approach, tapping research results from the study of tree grammars for natural language syntax. With respect to (b), we sketch the design and functional specifications for parts of ILustrate, the set of software tools that we need to implement and test the various IL formalisms that meet the requirements of a lexicalized grammar. In this way, we begin to address a basic issue in MT research: how to define and test an interlingua as a computational language, without building a full MT system for each possible IL formalism that might be proposed.

12.
An important issue that needs to be addressed when using data mining tools is the validity of the rules outside of the data set from which they are generated. Rules are typically derived from the patterns in a particular data set. When a new situation occurs, the change in the set of rules obtained from the new data set could be significant. In this paper, we provide a novel model for understanding how the differences between two situations affect the changes of the rules, based on the concept of finely partitioned groups that we call caucuses. Using this model, we provide a simple technique called combination data set, to get a good estimate of the set of rules for a new situation. Our approach works independently of the core mining process and it can be easily implemented with all variations of rule mining techniques. Through experiments with real-life and synthetic data sets, we show the effectiveness of our technique in finding the correct set of rules under different situations.

13.
One of the responsibilities of requirements engineering is to transform stakeholder requirements into system and software requirements. For enterprise systems, this transformation must consider the enterprise context where the system will be deployed. Although there are some approaches for detailing stakeholder requirements, some of them even considering the enterprise context, this task is executed manually. Based on model-driven engineering concepts, this study proposes a semi-automatic transformation from an enterprise model to a use case model. The enterprise model is used as a source of information about the stakeholder requirements and domain knowledge, while the use case model is used as software requirements model. This study presents the source and target metamodels, a set of transformation rules, and a tool to support the transformation. An experiment analyzes the use of the proposed transformation to investigate its benefits and if it can be used in practice, from the point of view of students in the context of a requirements refinement. The results indicate that the approach can be used in practice, as it did not influence the quality of the generated use cases. However, the empirical analysis does not indicate benefits of using the transformation, even if the qualitative results were positive.

14.
Based on the characteristics of automatic model-to-code transformation in MDA, this paper proposes a method for transforming association relationships in UML class diagrams into code. It discusses associations in UML and their two implementation patterns, defines for each pattern a set of transformation rules from the UML model (platform-independent model) to the Java model (platform-specific model), and gives examples of rule-based transformation for both patterns.

15.
Metamodel-Supported Model Transformation
Model transformation is the core idea of MDA, covering both model-to-code and model-to-model transformations. The model transformation method proposed in this paper expresses transformation rules as metamodels; through graph-transformation-based model transformation, it ultimately obtains a metamodel representation of the target model. Metamodel-supported model transformation allows transformation rules to be described precisely, gives the transformation well-defined semantics, and makes tool implementation easier.

16.
A standard approach to determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. Such problems arise, for example, when an attribute assigned to some node cannot be measured, or there is a significant change in the costs of measuring attributes or in the frequency distribution of events from different decision classes. An attractive approach to resolving this problem is to learn and store knowledge in the form of decision rules, and to generate from them, whenever needed, a decision tree that is most suitable in a given situation. An additional advantage of such an approach is that it facilitates building compact decision trees, which can be much simpler than the logically equivalent conventional decision trees (by compact trees are meant decision trees that may contain branches assigned a set of values, and nodes assigned derived attributes, i.e., attributes that are logical or mathematical functions of the original ones). The paper describes an efficient method, AQDT-1, that takes decision rules generated by an AQ-type learning system (AQ15 or AQ17), and builds from them a decision tree optimizing a given optimality criterion. The method can work in two modes: the standard mode, which produces conventional decision trees, and compact mode, which produces compact decision trees. The preliminary experiments with AQDT-1 have shown that the decision trees generated by it from decision rules (conventional and compact) have outperformed those generated from examples by the well-known C4.5 program both in terms of their simplicity and their predictive accuracy.

17.
To support heterogeneity is a major requirement in current approaches to integration and transformation of data. This paper proposes a new approach to the translation of schema and data from one data model to another, and we illustrate its implementation in the tool MIDST-RT. We leverage on our previous work on MIDST, a platform conceived to perform translations in an off-line fashion. In such an approach, the source database (both schema and data) is imported into a repository, where it is stored in a universal model. Then, the translation is applied within the tool as a composition of elementary transformation steps, specified as Datalog programs. Finally, the result (again both schema and data) is exported into the operational system. Here we illustrate a new, lightweight approach where the database is not imported. MIDST-RT needs only to know the schema of the source database and the model of the target one, and generates views on the operational system that expose the underlying data according to the corresponding schema in the target model. Views are generated in an almost automatic way, on the basis of the Datalog rules for schema translation. The proposed solution can be applied to different scenarios, which include data and application migration, data interchange, and object-to-relational mapping between applications and databases.

18.
This paper proposes the use of equivalence partitioning techniques for testing models and model transformations. In particular, we introduce the concept of classifying terms, which are general OCL terms on a class model enriched with OCL constraints. Classifying terms permit defining equivalence classes, in particular for partitioning the source and target model spaces of the transformation, defining for each class a set of equivalent models with regard to the transformation. Using these classes, a model validator tool is able to automatically construct object models for each class, which constitute relevant test cases for the transformation. We show how this approach of guiding the construction of test cases in an orderly, systematic and efficient manner can be effectively used in combination with Tracts for testing both directional and bidirectional model transformations and for analyzing their behavior.

19.
This paper presents a systematic approach to the development of message-passing programs. Our programming model is SPMD, with communications restricted to collective operations: scan, reduction, gather, etc. The design process in such an architecture-independent language is based on correctness-preserving transformation rules that are provable in a formal functional framework. We develop a set of design rules for composition and decomposition. For example, scan followed by reduction is replaced by a single reduction, and global reduction is decomposed into two faster operations. The impact of the design rules on the target performance is estimated analytically and tested in machine experiments. As a case study, we design two provably correct, efficient programs using the Message Passing Interface (MPI) for the famous maximum segment sum problem, starting from an intuitive, but inefficient, algorithm specification.
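The "scan followed by reduction is replaced by a single reduction" rule mentioned in this abstract can be illustrated on a classic instance: reducing prefix sums with max (the maximum prefix sum, a building block of the maximum segment sum case study). The sequential Python sketch below is an assumption-laden illustration of the algebraic rule, not the paper's MPI formulation; fusion works here because + distributes over max, so the two passes collapse into one reduction over pairs with an associative combined operator.

```python
from functools import reduce
from itertools import accumulate

def naive_mps(xs):
    """Two passes: prefix sums (scan with +), then a max reduction."""
    return max(accumulate(xs))

def fused_mps(xs):
    """One pass: a single reduction over (prefix_sum, best_so_far) pairs."""
    def combine(a, b):
        s1, r1 = a
        s2, r2 = b
        # Associative combined operator: track the total sum and the best
        # prefix sum seen when the right segment is appended to the left.
        return (s1 + s2, max(r1, s1 + r2))
    return reduce(combine, ((x, x) for x in xs))[1]

data = [3, -4, 5, -1, 2, -6, 4]
assert naive_mps(data) == fused_mps(data)
```

Because `combine` is associative, the fused version can also be evaluated as a parallel tree reduction (e.g., MPI_Reduce with a user-defined operator), which is what makes the rewrite a performance win in the message-passing setting the paper targets.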

20.
Association transformation is a difficult part of model transformation in MDA. This paper studies how to define a set of high-quality mapping rules for association transformation. It first discusses associations in UML and their two implementation patterns, then defines for each pattern a set of transformation rules from the UML model (platform-independent model) to the Java model (platform-specific model), and finally gives examples of rule-based transformation for both patterns.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号