Similar Literature
 Found 20 similar documents (search time: 359 ms)
1.
Goal-oriented and agent-oriented modelling provides an effective approach to understanding distributed information systems that need to operate in open, heterogeneous and evolving environments. Such frameworks, first introduced more than ten years ago, have been extended with language variants, analysis methods and CASE tools, raising issues of language semantics and tool interoperability. Among them, the i* framework is one of the most widespread. We focus on i*-based modelling languages and tools and on the problem of supporting model exchange between them. In this paper, we introduce the i* interoperability problem and derive an XML interchange format, called iStarML, as a practical solution to this problem. We first discuss the main requirements for its definition, then we characterise the core concepts of i* and detail the tags and options of the interchange format. We complete the presentation of iStarML by showing some possible applications. Finally, a survey of the i* community's perception of iStarML is included for assessment purposes.
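As a rough illustration of what such an interchange document might look like, the sketch below emits a minimal iStarML-like file with Python's standard library. The tag and attribute names (istarml, diagram, actor, ielement) follow the general shape the abstract describes but are illustrative assumptions, not the normative iStarML schema.

    import xml.etree.ElementTree as ET

    # Build a minimal iStarML-like document. Tag/attribute names are
    # illustrative assumptions, not the normative iStarML schema.
    root = ET.Element("istarml", version="1.0")
    diagram = ET.SubElement(root, "diagram", name="meeting-scheduler")
    actor = ET.SubElement(diagram, "actor", name="Meeting Initiator")
    ET.SubElement(actor, "ielement", name="MeetingScheduled", type="goal")

    print(ET.tostring(root, encoding="unicode"))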

2.
Fragmentation of information across instances of different metamodels poses a significant problem for software developers and greatly increases the effort of transformation development. Moreover, compositions of metamodels tend to be incomplete, imprecise, and erroneous, making them impossible to present to users or to use directly as input for applications. Customized views satisfy information needs by focusing on a particular concern and filtering out information that is not relevant to that concern. For a broad establishment of view-based approaches, an automated solution to deal with separate metamodels and the high complexity of model transformations is necessary. In this paper, we present the ModelJoin approach for the rapid creation of views. Using a human-readable textual DSL, developers can define custom views declaratively without having to write model transformations or define a bridging metamodel. Instead, a metamodel generator and higher-order transformations create annotated target metamodels and the appropriate transformations on the fly. The resulting views, which are based on these metamodels, contain joined instances and can effectively express concerns unforeseen during metamodel design. We have applied the ModelJoin approach and validated the textual DSL in a case study using the Palladio Component Model.
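The core effect of such a view, stripped of all the metamodel machinery, resembles a relational join over model elements. The toy below mimics it on plain Python dicts; the element shapes and the name-based correspondence are invented for illustration and are not ModelJoin's actual DSL or generated artifacts.

    # Join instances from two unrelated "metamodels" on an invented
    # correspondence (component name) and keep only the attributes the
    # concern needs -- the shape of a ModelJoin-style view, not its DSL.
    components = [{"name": "Billing", "provides": "IPay"}]
    deployments = [{"component": "Billing", "node": "vm-7"}]

    view = [{"name": c["name"], "interface": c["provides"], "node": d["node"]}
            for c in components for d in deployments
            if c["name"] == d["component"]]
    print(view)  # [{'name': 'Billing', 'interface': 'IPay', 'node': 'vm-7'}]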

3.
《Computers in Industry》2014,65(9):1291-1300
Nowadays, Internet technologies and standards are systematically used by enterprises as tools to provide an infrastructure connecting people, enterprises, and the applications they use. In such complex networked enterprises, it is increasingly challenging to interchange, share, and manage internal and external digital information. In this context, achieving interoperability between information systems is a challenging task. To solve the interoperability problem at the semantic level, several ontology-based approaches have emerged. Although methodologies, methods, techniques, and tools to support the ontology building process have been proposed, there are no mature models for measuring this process, and the quality of implemented ontologies remains a major concern. This paper presents a framework, OntoQualitas, for evaluating the quality of an ontology whose purpose is information interchange between different contexts. OntoQualitas includes previous and new measures to evaluate an ontology with respect to this specific purpose. Additionally, an empirical validation of OntoQualitas is presented.

4.
Biological system models are routinely developed in modern systems biology research following appropriate modelling/experiment design cycles. Frequently these take the form of high-dimensional nonlinear Ordinary Differential Equations that integrate information from several sources; they usually contain multiple time-scales, making them difficult even to simulate. These features make systems analysis (understanding robust functionality) and redesign (proposing modifications to improve or modify existing functionality) particularly hard problems. In this paper we use concepts from systems theory to develop two complementary tools that can help us understand the complex behaviour of such system models: one based on model decomposition and one on model reduction. Our aim is to algorithmically produce biologically meaningful, simplified models, which can then be used for further analysis and design. The tools presented are applied to a model of the Epidermal Growth Factor signalling pathway.
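A tiny illustration of the model-reduction idea on a two-time-scale system: in an enzyme-substrate model where the complex equilibrates fast, the quasi-steady-state assumption collapses two ODEs into one slow Michaelis-Menten equation. The paper's tools target far larger models; the species and rate constants below are generic textbook assumptions, not the EGF pathway model.

    import numpy as np
    from scipy.integrate import solve_ivp

    k1, km1, k2, E0 = 100.0, 100.0, 10.0, 0.1  # fast binding, slow catalysis

    def full(t, y):  # substrate S and enzyme-substrate complex C
        S, C = y
        return [-k1 * S * (E0 - C) + km1 * C,
                k1 * S * (E0 - C) - (km1 + k2) * C]

    def reduced(t, y):  # QSSA: dS/dt = -k2*E0*S / (Km + S)
        Km = (km1 + k2) / k1
        return [-k2 * E0 * y[0] / (Km + y[0])]

    t = np.linspace(0, 5, 50)
    S_full = solve_ivp(full, (0, 5), [5.0, 0.0], t_eval=t).y[0]
    S_red = solve_ivp(reduced, (0, 5), [5.0], t_eval=t).y[0]
    # The reduced model tracks the full one closely after the fast transient.
    print(float(np.max(np.abs(S_full - S_red))))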

5.
During the last 10 years, many organizations have invested resources and energy in order to be rated at the highest possible level according to maturity models for software development. Since measures play an important role in these models, it is essential that CASE tools offer facilities to automatically measure the size of the various documents produced with them. This paper introduces a tool, called μcROSE, that automatically measures functional software size, as defined by the COSMIC-FFP method, for Rational Rose RealTime models. μcROSE streamlines the measurement process, ensuring repeatability and consistency while reducing measurement cost. It is the first tool to address automatic COSMIC-FFP measurement, and it can be integrated into the Rational Rose RealTime toolset.
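For orientation, COSMIC-FFP sizes software by counting data movements, with each Entry, Exit, Read, or Write contributing one functional size unit (Cfsu). μcROSE extracts such movements from Rose RealTime models automatically; the toy below hand-lists the movements just to show the counting rule, and is not the tool's extraction logic.

    from collections import Counter

    # Hand-listed data movements standing in for what a tool like
    # μcROSE would extract from a Rational Rose RealTime model.
    movements = ["Entry", "Read", "Exit", "Entry", "Write", "Exit"]
    by_type = Counter(movements)
    print(by_type, "-> size:", sum(by_type.values()), "Cfsu")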

6.
I. Introduction. The CASE data interchange model is the key to exchanging data between tools in a CASE environment. For CASE tools to form a coherent whole across control integration, data integration, and presentation integration, data exchange and communication between tools are indispensable. To implement this communication mechanism effectively, a data interchange model must be established.

7.
In this paper, we consider ANSI C program slicing using XML (Extensible Markup Language). Our goal is to build a flexible, useful and uniform data interchange format for CASE tools, which is key to making it much easier to develop CASE tools such as program slicers. Although XML has great potential for such data interchange formats, we first point out that there are still many challenging problems to be solved. Then, as a first step towards our goal, we introduce ACML (ANSI C Markup Language), which describes the syntactic structure and static semantics of ANSI C code. A preliminary experiment gave a good result: it took only 0.5 man-months to implement Weiser's slicer on top of ACML, whereas it took about 2 man-months to implement the ANSI C parser and static semantics analyzer of XCI (Experimental C Interpreter).
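The data-interchange idea can be sketched as follows: a slicer walks an XML encoding of C code instead of parsing C itself. The element and attribute names below are invented for illustration; the abstract does not specify the actual ACML schema.

    import xml.etree.ElementTree as ET

    # An invented ACML-like encoding of a tiny C function.
    acml = """
    <function name="main">
      <decl id="x"/>
      <assign target="x"><literal value="1"/></assign>
      <call name="printf"><use id="x"/></call>
    </function>
    """

    tree = ET.fromstring(acml)
    # Collect every element that defines or uses variable "x" -- the kind
    # of def/use query a slicer performs on the marked-up syntax tree.
    defs = [e.tag for e in tree.iter()
            if e.get("target") == "x" or e.get("id") == "x"]
    print(defs)  # ['decl', 'assign', 'use']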

8.
Ensemble of metamodels with optimized weight factors
Approximate mathematical models (metamodels) are often used as surrogates for more computationally intensive simulations. The common practice is to construct multiple metamodels from a common training data set, evaluate their accuracy, and then use only the single model perceived as the best while discarding the rest. This practice has some shortcomings: it does not take full advantage of the resources devoted to constructing the different metamodels, and it rests on the assumption that changes in the training data set will not jeopardize the accuracy of the selected model. It is possible to overcome these drawbacks and improve the prediction accuracy of the surrogate if the separate stand-alone metamodels are combined to form an ensemble. Motivated by previous research on committees of neural networks and ensembles of surrogate models, a technique for developing a more accurate ensemble of multiple metamodels is presented in this paper. Here, the selection of weight factors in the general weighted-sum formulation of an ensemble is treated as an optimization problem whose desired solution minimizes a selected error metric. The proposed technique is evaluated on one industrial and four benchmark problems. The effect of different metrics for estimating the prediction error, at either the training data set or a few validation points, is also explored. The results show that the optimized ensemble provides more accurate predictions than the stand-alone metamodels, for most problems even surpassing previously reported ensemble approaches.
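A minimal sketch of the weighted-sum idea: given the predictions of several stand-alone metamodels at validation points, choose weights that minimize RMSE against the true responses. The exact error metric and constraints in the paper may differ; this shows one common variant, with weights constrained to sum to one.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    y_true = rng.normal(size=50)                    # validation responses
    preds = np.stack([y_true + rng.normal(scale=s, size=50)
                      for s in (0.1, 0.3, 0.5)])    # 3 metamodels' predictions

    def rmse(w):
        return np.sqrt(np.mean((w @ preds - y_true) ** 2))

    w0 = np.full(3, 1 / 3)
    res = minimize(rmse, w0,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    print(res.x)  # more weight lands on the more accurate metamodels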

9.
The study of the organization and behavior of critical infrastructure systems has drawn great attention in recent years, in part because of their great influence on the everyday life of every citizen. In this paper, we study the characteristics of critical infrastructures and propose a reference model based on the Unified Modeling Language (UML). This reference model aims to provide suitable means for modeling an infrastructure system by offering five major metamodels. We introduce each of these metamodels and explain how they can be integrated into a single representation that characterizes the various aspects of an infrastructure system. Based on the metamodels of UML-CI, infrastructure system knowledge bases can be built to aid infrastructure system modeling, profiling, and management.

10.
11.

Metamodels play a crucial role in any model-based application. They underpin the definition of models and tools and the development of model management operations, including model transformations and analysis. Like any software artifact, metamodels are subject to evolution, to improve their quality or to implement unforeseen requirements. Metamodels can be defined in terms of existing ones to increase the separation of concerns and foster reuse. However, the induced coupling adds complexity to evolution, and dedicated support is needed to avoid breaking metamodels defined in terms of those being changed. This paper presents a tool-supported approach that can automatically analyze the available metamodels and alert modelers to change operations that can give rise to invalid situations such as dangling references. The approach has been implemented in the Edelta development environment and successfully applied to metamodels retrieved from a publicly available Ecore models dataset.


12.
CM-Builder: A Natural Language-Based CASE Tool for Object-Oriented Analysis
Graphical CASE (Computer Aided Software Engineering) tools provide considerable help in documenting the output of the Analysis and Design stages of software development and can assist in detecting incompleteness and inconsistency in an analysis. However, these tools do not contribute to the initial, difficult stage of the analysis process: identifying the object classes, attributes and relationships used to model the problem domain. This paper describes an NL-based CASE tool called Class Model Builder (CM-Builder), which aims to support this aspect of the Analysis stage of software development in an Object-Oriented framework. CM-Builder uses robust Natural Language Processing techniques to analyse software requirements texts written in English and constructs, either automatically or interactively with an analyst, an initial UML Class Model representing the object classes mentioned in the text and the relationships among them. The initial model can be input directly into a graphical CASE tool for further refinement by a human analyst. CM-Builder has been quantitatively evaluated in blind trials against a collection of unseen software requirements texts; we present the results of this evaluation together with the evaluation method. The results are very encouraging and demonstrate that tools such as CM-Builder have the potential to play an important role in the software development process.
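A deliberately crude stand-in for the first step CM-Builder automates, harvesting candidate class names from a requirements sentence: the real tool uses robust NLP, whereas this toy only counts stoplist-filtered, naively singularized words to show the shape of the task. The example sentence and stoplist are invented.

    import re
    from collections import Counter

    text = ("A customer places an order. Each order contains one or more "
            "items. The customer may cancel an order.")

    # Stoplist of articles and (already-singularized) verbs; rstrip("s")
    # is a crude singularizer, good enough for this toy only.
    STOP = {"a", "an", "the", "each", "one", "or", "more", "may",
            "place", "contain", "cancel"}
    words = [w.lower().rstrip("s") for w in re.findall(r"[A-Za-z]+", text)]
    counts = Counter(w for w in words if w not in STOP)
    print([w.capitalize() for w, _ in counts.most_common(3)])
    # ['Order', 'Customer', 'Item'] -- candidate classes for the analyst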

13.
14.
Metamodels are often used to replace expensive simulations of engineering problems. When a training set is given, a series of metamodels can be constructed, and there are then two strategies for dealing with them: (1) picking out the best one, with the highest accuracy, as an approximation of the computationally intensive simulation; and (2) combining all of them into an ensemble model. However, since the choice of approximate model depends on the design of experiments (DOE), employing the first strategy increases the risk of adopting an inappropriate model. The second strategy also seems not to be a good choice, since adding redundant metamodels may lead to a loss of accuracy. Therefore, eliminating the redundant metamodels from the set of candidates before constructing the final ensemble is a necessary step. Inspired by the variable-selection methods widely used in polynomial regression, a metamodel selection method based on stepwise regression is proposed, in which just a subset of n of the p candidate metamodels (n ≤ p) is used. In addition, a new ensemble technique is proposed from the viewpoint of polynomial regression. This new ensemble technique, combined with the metamodel selection method, has been evaluated using six benchmark problems. The results show that eliminating the redundant metamodels before constructing the ensemble provides better prediction accuracy than constructing the ensemble directly from all of the candidates.
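A simplified analogue of this idea: forward selection over candidate metamodels, keeping only those whose inclusion lowers the validation error of an equal-weight ensemble. The paper uses stepwise regression proper; this greedy sketch, on synthetic candidates, only illustrates why pruning redundant metamodels can beat using them all.

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(size=40)  # validation responses
    # Two accurate candidates and two noisy, redundant ones.
    cands = [y + rng.normal(scale=s, size=40) for s in (0.1, 0.15, 0.9, 1.2)]

    def err(subset):
        avg = np.mean([cands[i] for i in subset], axis=0)
        return np.sqrt(np.mean((avg - y) ** 2))

    chosen = []
    while len(chosen) < len(cands):
        trials = [(err(chosen + [i]), i)
                  for i in range(len(cands)) if i not in chosen]
        best_err, best_i = min(trials)
        if chosen and best_err >= err(chosen):
            break  # adding any further metamodel would hurt accuracy
        chosen.append(best_i)
    print(chosen)  # the noisy candidates (scales 0.9, 1.2) are left out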

15.
Enterprises use enterprise models to represent and analyse their processes, products, decisions, organisation, information flows, etc. Nevertheless, the enterprise knowledge captured in these models is not used beyond those purposes. The main goal of this paper is to present a framework that allows enterprises to reuse enterprise models to build software. The framework includes the following dimensions: (1) a methodology that guides the use of the other dimensions when reusing enterprise models for software generation; (2) a set of metamodels to represent enterprises at the Computation Independent Model (CIM) level; (3) a modelling guide for making enterprise models using the metamodels proposed in this paper; (4) an extraction algorithm to discriminate the part of the CIM model to reuse; and (5) a set of transformation rules for reusing enterprise models to build Platform Independent Models. In addition, a case example is presented to validate the work and to identify limitations.

16.

Context

Model-driven approaches deal with the provision of models, transformations between them, and code generators to address software development. This approach has the advantage of defining a conceptual structure where the models used by business managers and analysts can be mapped into the more detailed models used by software developers. This alignment between high-level business specifications and lower-level information technology (IT) models is crucial in service-oriented development, where meaningful business services and process specifications are those relevant to real business scenarios.

Objective

This paper presents a model-driven approach which, starting from high-level computation-independent business models (CIMs) - the business view - sets out guidelines for obtaining lower-level platform-independent behavioural models (PIMs) - the information system view. A key advantage of our approach is the use of real high-level business models, not just requirements models, which, by means of model transformations, helps software developers make the most of the business knowledge when specifying and developing business services.

Method

This proposal is framed within a method for the service-oriented development of information systems whose main characteristic is the use of services as first-class objects. The method follows an MDA-based approach, proposing a set of models at different levels of abstraction and model transformations to connect them.

Results

The paper presents the complete set of CIM and PIM metamodels and the specification of the mappings between them, whose clear advantage is their support for alignment between the high-level business view and IT. The proposed model-driven process is being implemented in an MDA tool. A first prototype has been used to develop a travel agency case study that illustrates the proposal.

Conclusion

This study shows how a model-driven approach helps to solve the alignment problem between the business view and the information system view that arises when adopting service-oriented approaches to software development.
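A toy mapping in the spirit of the CIM-to-PIM rules such an approach specifies: each business service in the CIM becomes a service operation in the PIM. The element shapes and the rule below are invented for illustration; the paper defines the real metamodels and mappings.

    # Hypothetical CIM fragment: one business service from the business view.
    cim = {"business_services": [
        {"name": "BookTrip", "inputs": ["dates", "city"]}]}

    # Transformation rule (invented): business service -> PIM operation.
    pim = {"operations": [
        {"name": svc["name"], "parameters": svc["inputs"],
         "returns": "Confirmation"}
        for svc in cim["business_services"]]}
    print(pim)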

17.
Model differentiation techniques, which provide the capability to identify mappings and differences between models, are essential to many model development and management practices. There has been initial research on model differentiation applied to Unified Modeling Language (UML) diagrams, but differentiation of domain-specific models has not been explored deeply by the modeling community. Traditional modeling practice using the UML relies on a single, fixed, general-purpose language (i.e., all UML diagrams conform to a single metamodel). In contrast, Domain-Specific Modeling (DSM) is an emerging model-driven paradigm in which multiple metamodels are used to define various modeling languages that represent the key concepts and abstractions of particular domains. Because domain-specific models may conform to various metamodels, model differentiation algorithms must be metamodel-independent and applicable to multiple domain-specific modeling languages. This paper presents metamodel-independent algorithms and associated tools for detecting mappings and differences between domain-specific models, with facilities for graphical visualization of the detected differences.
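A minimal, metamodel-independent differencing sketch: any model is flattened to a map from element id to attribute dict, so the same compare routine works regardless of which metamodel the models conform to. The paper's algorithms and visualizations are far richer; this shows only the core mapping/difference split, on invented elements.

    # Two versions of a model, flattened to {element id: attributes}.
    old = {"c1": {"kind": "Class", "name": "Order"},
           "c2": {"kind": "Class", "name": "Item"}}
    new = {"c1": {"kind": "Class", "name": "PurchaseOrder"},
           "c3": {"kind": "Class", "name": "Invoice"}}

    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    changed = {k for k in old.keys() & new.keys() if old[k] != new[k]}
    print(added, removed, changed)  # {'c3'} {'c2'} {'c1'}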

18.
Kemerer, C.F. 《Software, IEEE》1992, 9(3): 23-28
Part of adopting an industrial process is going through a learning curve, which measures the rate at which the average unit cost of production decreases as the cumulative amount produced increases. It is argued that organizations buy integrated CASE tools only to leave them on the shelf because they misinterpret the learning curve and its effect on productivity. It is shown that learning-curve models can quantitatively document the productivity effect of integrated CASE tools by factoring out the learning costs, so that managers can use model results to estimate future projects with greater accuracy. Without this depth of understanding, managers are likely to make less-than-optimal decisions about integrated CASE and may abandon the technology too soon. The influence of learning curves on CASE tools and the adaptation of learning-curve models to integrated CASE are discussed. The three biggest tasks in implementing learning curves in integrated CASE settings (locating a suitable data site, collecting the data, and validating the results) are also discussed.
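The classic log-linear learning curve underlying such models: average unit cost y falls as cumulative output x grows, y = a * x^b with b < 0, so fitting a least-squares line in log-log space recovers the parameters. Which curve variant Kemerer uses is not stated in the abstract, so this standard form and the sample data are assumptions.

    import numpy as np

    x = np.array([1, 2, 4, 8, 16], dtype=float)       # cumulative units
    y = np.array([100, 85, 72, 61, 52], dtype=float)  # avg cost per unit

    # Linear fit in log-log space: log y = b * log x + log a.
    b, log_a = np.polyfit(np.log(x), np.log(y), 1)
    a = np.exp(log_a)
    print(f"y = {a:.1f} * x^{b:.3f}")
    # prints roughly y = 100 * x^-0.236, i.e. about an 85% learning curve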

19.
Context-aware recommendation algorithms refine recommendations by considering additional information available to the system, a topic that has gained much attention recently. Among others, several factorization methods have been proposed to solve the problem, although most of them assume explicit feedback, which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, preference modeling under context is less explored, due to the lack of tools that allow easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models, and the importance of proper modeling, increases greatly. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, whether based on explicit or implicit feedback, and its scaling properties make it usable under real-life circumstances as well. We demonstrate the framework's potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real-life dataset. We show in our experiments, performed on five real-life implicit feedback datasets, that proper preference modelling significantly increases recommendation accuracy, and that previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant with the Multidimensional Dataspace Model, one of the most extensive data models for context-enriched data. Extended GFF allows the seamless incorporation of information beyond context into the factorization framework, such as item metadata, social networks, and session information. Preliminary experiments show the great potential of this capability.
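The core computation behind such factorization models: every dimension value (user, item, context state) gets a latent vector, and a chosen preference model combines them. Below, one common linear model, the sum of pairwise interactions, is evaluated; GFF's point is precisely that this combination rule is pluggable, and its actual models are richer than this sketch. Dimension sizes and factor count are arbitrary assumptions.

    import numpy as np

    k = 8  # number of latent factors
    rng = np.random.default_rng(2)
    U = rng.normal(size=(100, k))  # user factors
    Q = rng.normal(size=(500, k))  # item factors
    C = rng.normal(size=(7, k))    # context factors (e.g., day of week)

    def score(u, q, c):
        # One possible preference model: UQ + UC + QC pairwise interactions.
        return U[u] @ Q[q] + U[u] @ C[c] + Q[q] @ C[c]

    print(score(3, 42, 5))  # predicted preference for one (user, item, context)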

20.