Similar Documents
20 similar documents found.
1.
This paper conducts an empirical study that explores the differences between adopting a traditional conceptual modeling (TCM) technique and an ontology-driven conceptual modeling (ODCM) technique, with the objective of understanding and identifying the modeling situations in which an ODCM technique can prove beneficial compared to a TCM technique. More specifically, we asked whether there exist any meaningful differences, in the resulting conceptual model and in the effort spent to create such a model, between novice modelers trained in an ontology-driven conceptual modeling technique and novice modelers trained in a traditional conceptual modeling technique. To answer this question, we discuss previous empirical research efforts and distill these efforts into two hypotheses. Next, these hypotheses are tested in a rigorously developed experiment in which a total of 100 students from two different universities participated. The findings of our empirical study confirm that there are meaningful differences between adopting the two techniques. We observed that novice modelers applying the ODCM technique arrived at higher-quality models compared to novice modelers applying the TCM technique. More specifically, the results demonstrated that it is advantageous to apply an ODCM technique over a TCM technique when modeling the more challenging and advanced facets of a domain or scenario. Moreover, we did not find any significant difference in effort between applying the two techniques. Finally, we distilled our results into three findings that clarify the obtained results.

2.
Computational models of complex systems are usually elaborate and sensitive to implementation details, characteristics which often affect their verification and validation. Model replication is a possible solution to this issue. It avoids biases associated with the language or toolkit used to develop the original model, not only promoting its verification and validation, but also fostering the credibility of the underlying conceptual model. However, different model implementations must be compared to assess their equivalence. The problem is: given two or more implementations of a stochastic model, how can one show that they display similar behavior? In this paper, we present a model comparison technique which uses principal component analysis to convert simulation output into a set of linearly uncorrelated statistical measures, analyzable in a consistent, model-independent fashion. It is appropriate for ascertaining distributional equivalence of a model replication with its original implementation. Besides model independence, this technique has three other desirable properties: (a) it automatically selects the output features that best explain implementation differences; (b) it does not depend on the distributional properties of simulation output; and (c) it simplifies the modelers' work, as it can be used directly on simulation outputs. The proposed technique is shown to produce results similar to the manual or empirical selection of output features when applied to a well-studied reference model.
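To make the comparison procedure concrete, here is a minimal Python sketch of the general idea: pool per-run output summaries from two implementations, project them onto principal components, and test each component's score distribution for differences. The function name, the number of components, and the use of a Mann-Whitney test are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal sketch: compare two implementations of a stochastic model by
# projecting their pooled simulation outputs onto principal components and
# testing whether the per-component score distributions differ.
# All names and the choice of test are illustrative assumptions.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.decomposition import PCA

def compare_outputs(runs_a, runs_b, n_components=3, alpha=0.05):
    """runs_a, runs_b: (n_runs, n_features) arrays of per-run output summaries."""
    pooled = np.vstack([runs_a, runs_b])
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(pooled)            # linearly uncorrelated components
    scores_a, scores_b = scores[:len(runs_a)], scores[len(runs_a):]
    results = []
    for k in range(n_components):
        stat, p = mannwhitneyu(scores_a[:, k], scores_b[:, k])
        results.append((k, pca.explained_variance_ratio_[k], p, p >= alpha))
    return results  # (component, variance explained, p-value, "no detectable difference")

# Example: two fake implementations whose outputs share the same distribution.
rng = np.random.default_rng(0)
a = rng.normal(size=(30, 10))
b = rng.normal(size=(30, 10))
print(compare_outputs(a, b))
```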

3.
Model transformation is a key enabling technology of Model-Driven Engineering (MDE). Existing model transformation languages are shaped by and for MDE practitioners, a user group with needs and capabilities which are not necessarily characteristic of modelers in general. Consequently, these languages are largely ill-equipped for adoption by end-user modelers in areas such as requirements engineering, business process management, or enterprise architecture. We aim to introduce a model transformation language addressing the skills and requirements of end-user modelers. With this contribution, we hope to broaden the application scope of model transformation and MDE technology in general. We discuss the profile of end-user modelers and propose a set of design guidelines for model transformation languages addressing them. We then introduce the Visual Model Transformation Language (VMTL), which follows these guidelines. VMTL draws on our previous work on the usability-oriented Visual Model Query Language. We implement VMTL using the Henshin model transformation engine and empirically investigate its learnability via two user experiments and a think-aloud protocol analysis. Our experiments, although conducted on computer science students exhibiting only some of the characteristics of end-user modelers, show that VMTL compares favorably in terms of learnability with two state-of-the-art model transformation languages: Epsilon and Henshin. Our think-aloud protocol analysis confirms many of the design decisions adopted for VMTL, while also indicating possible improvements.

4.
Modelers involved in environmental policy assessments are commonly confronted with a lack of uptake of model output by policy actors. Actors have different expectations of models, condensed into three quality criteria: credibility, salience, and legitimacy. The fulfilment of these quality criteria is also dynamic, as expectations vary, change, and possibly counteract each other. We present a checklist for modelers involved in model-based assessments that is aimed at the identification and monitoring of issues, limitations, and trade-offs regarding model quality criteria. It draws upon the literature on integrated assessments as well as case study analysis of environmental policy assessments for the Dutch government, based on expert interviews and embedded experience. The checklist is intended to be consulted during assessments; its application may result in greater awareness among modelers regarding model quality criteria, and may positively affect the uptake of model-based knowledge from environmental policy assessments by policy actors.

5.
Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigated whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe that such an effect exists. We then describe a detailed empirical methodology for identifying those effects. Finally, we perform a study on a large C++ telecommunications framework to examine whether size is really a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics. The dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies: the metrics that are expected to be validated are indeed associated with fault-proneness. After controlling for size, however, this picture changes, consistent with the strong confounding effect of class size noted above.
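A minimal sketch of how such a confounding check can be set up: fit a logistic regression of fault-proneness on a metric alone, then refit with class size as a covariate and compare the metric's effect. The data below are synthetic placeholders and the variable names (e.g., WMC) are assumptions for illustration, not the study's data or exact methodology.

```python
# Minimal sketch: test whether class size confounds the association between an
# OO metric (e.g., WMC) and fault-proneness by fitting logistic models with and
# without size. Data and names here are synthetic placeholders, not the study's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
size = rng.lognormal(mean=5, sigma=0.6, size=n)          # class size (LOC)
wmc = 0.05 * size + rng.normal(scale=3, size=n)          # metric correlated with size
faulty = rng.binomial(1, 1 / (1 + np.exp(-(0.01 * size - 2.0))))  # size drives faults

# Unadjusted model: metric only
m1 = sm.Logit(faulty, sm.add_constant(wmc)).fit(disp=False)
# Adjusted model: metric plus size as a covariate
m2 = sm.Logit(faulty, sm.add_constant(np.column_stack([wmc, size]))).fit(disp=False)

print("metric p-value, unadjusted:", m1.pvalues[1])
print("metric p-value, adjusted for size:", m2.pvalues[1])
# If the metric's apparent effect vanishes once size is included,
# size is a likely confounder.
```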

6.
We present an empirical validation of object-oriented size estimation models. In previous work we proposed object-oriented function points (OOFP), an adaptation of the function points approach to object-oriented systems. In a small pilot study, we used the OOFP method to estimate lines of code (LOC). In this paper we extend the empirical validation of OOFP substantially, using a larger data set and comparing OOFP with alternative predictors of LOC. The aim of the paper is to gain an understanding of which factors contribute to accurate size prediction for OO software, and to position OOFP within that knowledge. A cross-validation approach was adopted to build and evaluate linear models in which the independent variable was either a traditional OO entity (classes, methods, associations, inheritance, or a combination of them) or an OOFP-related measure. Using the full OOFP process, the best size predictor achieved a normalized mean squared error of 38%. By removing function point weighting tables from the OOFP process, and by carefully analyzing the collected data points and developer practices, we identified several factors that influence size estimation. Our empirical evidence demonstrates that by controlling these factors, size estimates could be substantially improved, decreasing the normalized mean squared error to 15% (in relative terms, a 56% reduction).
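The following is a minimal sketch of the kind of cross-validation setup described above: a linear model predicting LOC from a single size measure, scored with a normalized mean squared error. The synthetic data, the choice of five folds, and the normalization by the variance of the observations are illustrative assumptions.

```python
# Minimal sketch of the cross-validation setup: predict LOC from a single size
# measure (e.g., an OOFP count) with a linear model and report a normalized MSE.
# The data and the exact normalization are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
oofp = rng.uniform(50, 500, size=40)                  # per-system OOFP counts
loc = 30 * oofp + rng.normal(scale=1500, size=40)     # observed lines of code

X = oofp.reshape(-1, 1)
pred = cross_val_predict(LinearRegression(), X, loc, cv=5)

# Normalized MSE: mean squared error divided by the variance of the observations.
nmse = np.mean((loc - pred) ** 2) / np.var(loc)
print(f"normalized MSE: {nmse:.1%}")
```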

7.
8.
The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm based on a set of constraints specifically identified toward establishing a readable layout of a process model. Our algorithm exploits the structure of the process model and allows the final layout to be computed in linear time. We explain the algorithm, show its detailed run-time complexity, compare it to existing algorithms, and demonstrate in an empirical evaluation the acceptance of the layout generated by the algorithm. The data suggest that the proposed algorithm is well perceived by moderately experienced process modelers, in terms of both its usefulness and its ease of use.

9.
Many of the best practices concerning the development of ecological models or analytic techniques published in the scientific literature are not fully available to modelers but rather are stored in scientists' digital or biological memories. We propose that it is time to address the problem of storing, documenting, and executing ecological models and analytical procedures. In this paper, we propose a conceptual framework to design and implement a web application that will help to meet this challenge. This tool will foster cooperation among scientists, enhancing the creation of relevant knowledge that could be transferred to environmental managers. We have implemented this conceptual framework in a tool called ModeleR, which is being used to document, share, and execute more than 200 models and analytical processes associated with a global change monitoring program under way in the Sierra Nevada Mountains (southern Spain). ModeleR uses the concept of a scientific workflow to connect and execute different types of models and analytical processes. Finally, we envision the creation of a federation of model repositories in which models documented within a local repository could be linked to and even executed by other researchers.

10.
An empirical investigation into the validation process within requirements determination is described, in which systems analysts were asked to complete a questionnaire concerning important validation issues. We describe the major validation activities, a set of major problems experienced by the respondents, factors affecting the process, and hypotheses for explaining the problems. The levels of experience of the respondents and the organizations for which they work appear to be significant.
Analysts employ a very traditional approach, expressing the specification mainly in English, and they experience problems in using over-formal notations in informal situations with users, as well as problems in deriving full benefit from notations when building the specification and detecting its properties. Not all of the specification is validated, and tool use is neither widespread nor apparently effective.
We define the concepts of formal and informal views, and suggest that method and tool use will not necessarily increase in organizations, as it is apparent that research into the more effective application of formal notations is necessary. In addition, it is clear that the factors affecting the validation process are not only technical, but also individual and organizational, necessitating the development of suitable informal activities that take these factors into account.

11.
Geometry modeling has recently emerged as a commodity capability. Several geometry modeling engines are available which provide largely the same capability, and most high-end CAD systems provide access to their geometry through APIs. However, subtle differences exist between these modelers, both at the syntax level and in the underlying topological models. A modeler-independent interface to geometry bridges these differences, allowing applications to be developed in a truly modeler-independent manner. The Common Geometry Module, or CGM, provides such an interface to geometry. At the most basic level, CGM translates geometry function calls to access geometry in its native format. To smooth over topological differences between modelers, and to allow modeler-independent modification of topology, CGM maintains its own topology data structure. CGM also provides functionality not found in most modelers, such as support for non-manifold topology and alternative representations, including facet-based and 'virtual' geometry. CGM is designed to be extensible, allowing applications to derive application-specific capabilities from the topological entities defined in CGM. The CUBIT Mesh Generation Toolkit has been modified to work directly with CGM. CGM is also designed to simplify the implementation of other solid-model-based or alternative representations of geometry. Ports to SolidWorks and Pro/Engineer are underway. CGM is also being used as the foundation for parallel mesh generation and for geometry support in several advanced finite element analysis codes.
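As a purely illustrative sketch (not CGM's actual API), the following shows the shape of a modeler-independent geometry interface: application code talks to a common abstraction, while engine-specific handles stay behind it. All class and method names here are hypothetical.

```python
# Illustrative sketch (not CGM's actual API): a modeler-independent geometry
# interface that translates generic calls into engine-specific ones while the
# application works only against the common abstraction.
# All class and method names here are hypothetical.
from abc import ABC, abstractmethod

class Body:
    """Wraps an engine-specific representation behind a common handle."""
    def __init__(self, engine, native_handle):
        self.engine = engine
        self.native_handle = native_handle

class GeometryEngine(ABC):
    """Common interface hiding syntactic and topological differences between modelers."""
    @abstractmethod
    def create_box(self, dx: float, dy: float, dz: float) -> Body: ...
    @abstractmethod
    def faces_of(self, body: Body) -> list: ...

class AcisEngine(GeometryEngine):            # hypothetical backend
    def create_box(self, dx, dy, dz):
        return Body(self, ("acis-box", dx, dy, dz))
    def faces_of(self, body):
        return [f"acis-face-{i}" for i in range(6)]

def count_faces(engine: GeometryEngine) -> int:
    """Application code stays identical no matter which engine is plugged in."""
    box = engine.create_box(1.0, 2.0, 3.0)
    return len(engine.faces_of(box))

print(count_faces(AcisEngine()))             # -> 6
```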

12.
While conceptual modeling is strongly related to the final quality of the software product, conceptual modeling itself remains a challenging activity. In particular, modelers must ensure that conceptual models properly formalize their intended conceptualization of a domain. This paper proposes an approach to facilitate the validation of conceptual models defined in OntoUML by transforming these models into specifications in the logic-based language Alloy and using its analyzer to generate instances of the model and counter-examples to assertions. By allowing the observation of sequences of snapshots of model instances, the dynamics of object creation, classification, association, and destruction are revealed. This confronts modelers with the implications of their modeling choices and allows them to uncover mistakes or gain confidence in the quality of their conceptual models.

13.
The main objective of this technical note is to derive a simple necessary and sufficient condition for a linear fractional transformation (LFT) perturbed model set to be consistent with frequency-domain plant input-output data. Only discrete-time models and unstructured modeling errors are dealt with. Compared with the available results, in which the eigenvalues of a matrix are involved, this condition is related only to the Euclidean norms of two vectors. Moreover, these vectors depend linearly on the measurement errors. Some of its applications to model set validation are briefly discussed. Based on this condition, an almost analytic solution is established for model set validation under a deterministic framework when the measurement errors are energy bounded. Numerical simulations show that this consistency condition can lead to a significant reduction in computation cost.

14.
Cluster validation, the process of evaluating the performance of clustering algorithms under varying input conditions, is a major issue in the cluster analysis of data mining. Many existing validity indices address clustering results for low-dimensional data. In high-dimensional data, however, many of the dimensions are irrelevant, and clusters usually exist only in some projected subspaces spanned by different combinations of dimensions. This paper presents a solution to the problem of cluster validation for projective clustering. We propose two new measurements for the intracluster compactness and intercluster separation of projected clusters. Based on these measurements and conventional indices, three new cluster validity indices are presented. Combined with a fuzzy projective clustering algorithm, the new indices are used to determine the number of projected clusters in high-dimensional data. The suitability of our proposal has been demonstrated through an empirical study using synthetic and real-world datasets.
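The following sketch illustrates the general idea behind such measurements with generic stand-ins: a compactness measure evaluated only on a cluster's relevant dimensions and a separation measure evaluated on the dimensions the clusters share. These are illustrative measures, not the indices proposed in the paper.

```python
# Minimal sketch: generic intracluster compactness and intercluster separation
# for projected clusters, where each cluster is evaluated only on its own
# relevant dimensions. These are illustrative measures, not the paper's indices.
import numpy as np

def compactness(points, dims):
    """Mean distance to the cluster centroid, restricted to the cluster's subspace."""
    sub = points[:, dims]
    centroid = sub.mean(axis=0)
    return np.mean(np.linalg.norm(sub - centroid, axis=1))

def separation(points_a, dims_a, points_b, dims_b):
    """Centroid distance measured on the dimensions shared by the two clusters
    (falling back to the union of dimensions when they share none)."""
    shared = sorted(set(dims_a) & set(dims_b)) or sorted(set(dims_a) | set(dims_b))
    ca = points_a[:, shared].mean(axis=0)
    cb = points_b[:, shared].mean(axis=0)
    return np.linalg.norm(ca - cb)

rng = np.random.default_rng(3)
c1 = rng.normal(0, 0.5, size=(50, 10)); c1[:, :3] += 5    # cluster living in dims 0-2
c2 = rng.normal(0, 0.5, size=(50, 10)); c2[:, 3:6] -= 5   # cluster living in dims 3-5
print(compactness(c1, [0, 1, 2]), separation(c1, [0, 1, 2], c2, [3, 4, 5]))
```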

15.
Agent-based modeling (ABM) techniques for studying human-technical systems face two important challenges. First, agent behavioral rules are often ad hoc, making it difficult to assess the implications of these models within the larger theoretical context. Second, the lack of relevant empirical data precludes many models from being appropriately initialized and validated, limiting their value for exploring emergent properties or for policy evaluation. To address these issues, we present a theoretically based and empirically driven agent-based model of technology adoption, with an application to residential solar photovoltaics (PV). Using household-level resolution for demographic, attitudinal, social network, and environmental variables, the integrated ABM framework we develop is applied to real-world data covering 2004–2013 for a residential solar PV program at the city scale. Two applications of the model, focusing on rebate program design, are also presented.
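As a rough illustration of what a theory-grounded household adoption rule could look like, the sketch below combines attitude, peer adoption, and financial payoff into a threshold decision. The weights, variables, and rule are assumptions for illustration, not the paper's calibrated model.

```python
# Minimal sketch of a theory-grounded adoption rule for one household agent:
# adopt when a weighted combination of attitude, peer adoption, and financial
# payoff crosses a threshold. Weights, variables, and the rule itself are
# illustrative assumptions, not the paper's calibrated model.
from dataclasses import dataclass

@dataclass
class Household:
    attitude: float        # 0..1, pro-solar attitude (e.g., from survey data)
    peers_adopted: float   # fraction of social-network neighbors who adopted
    payback_score: float   # 0..1, normalized financial attractiveness (incl. rebate)
    adopted: bool = False

    def step(self, w_att=0.4, w_peer=0.35, w_pay=0.25, threshold=0.6):
        if not self.adopted:
            utility = (w_att * self.attitude + w_peer * self.peers_adopted
                       + w_pay * self.payback_score)
            self.adopted = utility > threshold
        return self.adopted

h = Household(attitude=0.8, peers_adopted=0.4, payback_score=0.7)
print(h.step())   # -> True (utility 0.635 exceeds the 0.6 threshold)
```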

16.
Organizations that adopt process modeling often maintain several co-existing models of the same business process. These models target different abstraction levels and stakeholder perspectives. Maintaining consistency among these models has become a major challenge for such organizations. Although several academic works have discussed this challenge, little empirical investigation exists into how people perform process model consistency management in practice. This paper addresses this gap by presenting an in-depth empirical study of a business-driven engineering process deployed at a large company in the banking sector. We analyzed more than 70 business process models developed by the company, including their change history, with over 1,000 change requests. We also interviewed 9 business and IT practitioners and surveyed 23 such practitioners to understand concrete difficulties in consistency management, the rationales for the specification-to-implementation refinements found in the models, the strategies that practitioners use to detect and fix inconsistencies, and how tools could help with these tasks. Our contribution is a set of eight empirical findings, some of which confirm or contradict previous works on process model consistency management found in the literature. The findings provide empirical evidence of (1) how business process models are created and maintained, including a set of recurrent patterns used to refine business-level process specifications into IT-level models; (2) what types of inconsistencies occur, how they are introduced, and what problems they cause; and (3) what stakeholders expect from tools that support consistency management.

17.
Business process modeling is heavily applied in practice, but important quality issues have not been addressed thoroughly by research. A notorious problem is the low level of modeling competence that many casual modelers in process documentation projects have. Existing approaches towards model quality might be of benefit, but they suffer from at least one of the following problems. On the one hand, frameworks like SEQUAL and the Guidelines of Modeling are too abstract to be applicable for novices and non-experts in practice. On the other hand, there are collections of pragmatic hints that lack a sound research foundation. In this paper, we analyze existing research on relationships between model structure on the one hand and error probability and understanding on the other hand. As a synthesis we propose a set of seven process modeling guidelines (7PMG). Each of these guidelines builds on strong empirical insights, yet they are formulated to be intuitive to practitioners. Furthermore, we analyze how the guidelines are prioritized by industry experts. In this regard, the seven guidelines have the potential to serve as an important tool of knowledge transfer from academia into modeling practice.

18.
There are increasing calls to audit decision-support models used for environmental policy to ensure that they correspond with the reality facing policy makers. Modelers can establish correspondence by providing empirical evidence of real-world behavior that their models skillfully simulate. Since real-world behavior, especially in environmental systems, is often complex, credibly modeling the underlying dynamics is essential. We present a pre-modeling diagnostic framework based on Nonlinear Time Series (NLTS) methods for reconstructing real-world environmental dynamics from observed data. The framework is illustrated with a case study of saltwater intrusion into coastal wetlands in Everglades National Park, Florida, USA. We propose that environmental modelers test for systematic dynamic behavior in observed data before resorting to conventional stochastic exploratory approaches that cannot detect this valuable information. The reconstructed data dynamics can be used, along with other expert information, as a rigorous benchmark to guide the specification and testing of environmental decision-support models so that they correspond with real-world behavior.
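One standard NLTS building block that such a diagnostic framework might rely on is time-delay (Takens) embedding of an observed series; the sketch below shows the reconstruction step, with the delay and embedding dimension as placeholders rather than values selected by the paper's procedure.

```python
# Minimal sketch of one standard NLTS step, time-delay (Takens) embedding of an
# observed series; the delay and embedding dimension here are placeholders, not
# values chosen by the paper's diagnostic procedure.
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Return the (n_points, dim) matrix of delay vectors [x_t, x_{t+tau}, ...]."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: reconstruct a state space from a noisy periodic signal.
t = np.linspace(0, 40 * np.pi, 2000)
series = np.sin(t) + 0.05 * np.random.default_rng(4).normal(size=t.size)
embedded = delay_embed(series, dim=3, tau=20)
print(embedded.shape)   # -> (1960, 3)
```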

19.
This paper seeks to develop and validate a measurement scale of perceived quality in the online media (e-SQ-Media), and to explore the influence of perceived quality on satisfaction and loyalty in the online media. First, an explanation of the main attributes of the concepts examined is provided, with special attention paid to the multidimensional nature of the variables and the relationships between them. This is followed by an examination of the validation process for the measuring instruments. The scale validation process suggested that the quality of service in online media is defined by a construct composed of four dimensions: efficiency; system availability; reliability and privacy; and interaction. The model, validated by means of structural equations, provides empirical evidence of the positive link between the quality dimensions proposed in the model and the constructs of satisfaction and loyalty. Most relevant studies of quality in online media have focused on the identification of a set of indicators without taking into account aspects relating to perceived quality. The fusion of the two areas of knowledge (online media and e-service quality) may lead to the creation of a scale that takes advantage of the most beneficial features of each area of study, with the aim of obtaining as balanced and convergent a model as possible.

20.
The development and validation of fault-tolerant computers for critical real-time applications are currently both costly and time consuming. Often, the underlying technology is out of date by the time the computers are ready for deployment. Obsolescence can become a chronic problem when the systems in which these computers are embedded have lifetimes of several decades. This paper gives an overview of the work carried out in a project that tackles the issues of cost and rapid obsolescence by defining a generic fault-tolerant computer architecture based essentially on commercial off-the-shelf (COTS) components (both processor hardware boards and real-time operating systems). The architecture uses a limited number of specific, but generic, hardware and software components to implement an architecture that can be configured along three dimensions: redundant channels, redundant lanes, and integrity levels. The two dimensions of physical redundancy allow the definition of a wide variety of instances with different fault tolerance strategies. The integrity-level dimension allows application components of different levels of criticality to coexist in the same instance. The paper describes the main concepts of the architecture, the supporting environments for development and validation, and the prototypes currently being implemented.
