18 related publications found.
1.
One of the more tedious and complex tasks during the specification of conceptual schemas (CSs) is modeling the operations that define the system behavior. This paper aims to simplify this task by providing a method that automatically generates a set of basic operations that complement the static aspects of the CS and suffice to perform all typical life-cycle create/update/delete changes on the population of the elements of the CS. Our method guarantees that the generated operations are executable, i.e. their executions produce a consistent state with respect to the most typical structural constraints that can be defined in CSs (e.g. multiplicity constraints). In particular, our method takes as input a CS expressed as a Unified Modeling Language (UML) class diagram (optionally defined using a profile to enrich the specification of associations) and generates an extended version of the CS that includes all the operations necessary to start operating the system. If desired, these basic operations can later be used as building blocks for creating more complex ones. We show the formalization and implementation of our method by means of model-to-model transformations. Our approach is particularly relevant in the context of Model-Driven Development approaches.
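To make the idea of "executable basic operations" concrete, the sketch below shows what a generated create/delete pair might look like for a class with a mandatory association end; the Employee/Department classes, the worksIn association and the operation names are hypothetical illustrations, not output of the paper's transformation.

```python
# Hypothetical illustration: a generated create/delete pair that keeps the
# population consistent with a mandatory multiplicity (each Employee is linked
# to exactly one Department at every observable state).

class Department:
    def __init__(self, name: str):
        self.name = name
        self.employees = []            # 0..* end of the worksIn association

class Employee:
    def __init__(self, name: str, department: "Department"):
        # The generated constructor takes the mandatory association end as a
        # parameter, so the 1 multiplicity of worksIn is never violated.
        self.name = name
        self.department = department
        department.employees.append(self)

def create_employee(name: str, department: Department) -> "Employee":
    """Generated basic operation: create an Employee together with its mandatory link."""
    return Employee(name, department)

def delete_employee(employee: "Employee") -> None:
    """Generated basic operation: delete an Employee and remove its links."""
    employee.department.employees.remove(employee)
    employee.department = None

if __name__ == "__main__":
    rnd = Department("R&D")
    alice = create_employee("Alice", rnd)   # consistent immediately after creation
    delete_employee(alice)
```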
2.
Anders Haug Lars Hvam Niels Henrik Mortensen 《Computers in Industry》2010,61(5):409-418
The use of product configurators has produced a range of benefits for several companies, such as minimizing the use of resources and shortening the lead times in product specification processes. When developing a product configurator, two kinds of models are often created, namely analysis models and design models. Since the task of describing product knowledge in analysis models involves domain experts, the analysis language has to be easily understandable in order to avoid the need for extensive training. For this task, the so-called Product Variant Master (PVM) diagramming technique is often applied. With regard to the design model, the requirements for the language focus more on a formalized and rich language, which is why class diagrams are often applied. To avoid the use of different modelling languages in the analysis and design phase, this paper proposes the layout technique ‘Vertically Aligned Class Diagrams’ (VACDs), which incorporate the usability of PVMs into class diagrams. To validate the usefulness of the VACD technique, the paper compares VACDs to PVMs and class diagrams in a utility analysis and a usability experiment. These investigations strongly indicate that VACDs maintain to a great extent the combined advantages of PVMs and normally drawn class diagrams. Thus, the use of VACDs in configurator projects has the potential to increase efficiency, improve communication and reduce errors.
3.
Live Sequence Charts (LSC) extend Message Sequence Charts (MSC), mainly by distinguishing possible from necessary behavior. They thus enable the specification of rich multi-modal scenario-based properties, such as mandatory, possible and forbidden scenarios. The sequence diagrams of UML 2.0 enrich those of previous versions of UML by two new operators, assert and negate, for specifying required and forbidden behaviors, which appear to have been inspired by LSC. The UML 2.0 semantics of sequence diagrams, however, being based on pairs of valid and invalid sets of traces, is inadequate, and prevents the new operators from being used effectively.
We propose an extension of, and a different semantics for, this UML language, called Modal Sequence Diagrams (MSD), based on the universal/existential modal semantics of LSC. In particular, in MSD assert and negate are really modalities, not operators. We define MSD as a UML 2.0 profile, thus paving the way to apply formal verification, synthesis, and scenario-based execution techniques from LSC to the mainstream UML standard.
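As a rough sketch of the universal/existential distinction MSD inherits from LSC (the notation below is ours, not the paper's formalization): a universal chart must be satisfied by every run of the system, while an existential chart must be satisfied by at least one run.

```latex
% Sketch only (our notation): modal interpretation of a chart L over a system S.
\begin{align*}
  S \models L^{\mathrm{universal}}   &\iff \forall \pi \in \mathrm{runs}(S):\ \pi \models L \\
  S \models L^{\mathrm{existential}} &\iff \exists \pi \in \mathrm{runs}(S):\ \pi \models L
\end{align*}
```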
Preliminary version appeared in SCESM '06: Proc. of the 2006 Int. Workshop on Scenarios and State Machines, Shanghai, China (May 2006) [15]. This research was supported by the Israel Science Foundation (grant No. 287/02-1), and by The John von Neumann Minerva Center for the Development of Reactive Systems at the Weizmann Institute of Science.
4.
In Model-Driven Development (MDD), detection of model defects is necessary for correct model transformations. Formal verification tools and techniques can to some extent verify models. However, scalability is a serious issue in relation to verification of complex UML/OCL class diagrams. We have proposed a model slicing technique that slices the original model into submodels to address the scalability issue. A submodel can be detected as unsatisfiable if there are no valid values for one or more attributes of an object in the diagram or if the submodel provides inconsistent conditions on the number of objects of a given type. In this paper, we propose a novel feedback technique through model slicing that detects unsatisfiable submodels and their integrity constraints among the complex hierarchy of an entire UML/OCL class diagram. The software developers can therefore focus their revision efforts on the incorrect submodels while ignoring the rest of the model. Copyright © 2013 John Wiley & Sons, Ltd.
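The "no valid values for an attribute" case can be pictured with a toy check; the contradictory constraint and the interval test below are our illustration, not the paper's slicing or feedback algorithm.

```python
# Toy illustration: an attribute whose collected bound constraints leave an
# empty domain makes any submodel containing it unsatisfiable.

def attribute_satisfiable(lower_bounds: list, upper_bounds: list) -> bool:
    """True if some value v satisfies v > max(lower_bounds) and v < min(upper_bounds)."""
    lo = max(lower_bounds) if lower_bounds else float("-inf")
    hi = min(upper_bounds) if upper_bounds else float("inf")
    return lo < hi

# An invariant like "self.age > 10 and self.age < 5", collected as bounds:
print(attribute_satisfiable(lower_bounds=[10], upper_bounds=[5]))    # False -> unsatisfiable slice
print(attribute_satisfiable(lower_bounds=[0], upper_bounds=[120]))   # True  -> satisfiable
```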
5.
Marcela Genero Esperanza Manso Aaron Visaggio Gerardo Canfora Mario Piattini 《Empirical Software Engineering》2007,12(5):517-549
The usefulness of measures for the analysis and design of object-oriented (OO) software is increasingly being recognized in the field of software engineering research. In particular, recognition of the need for early indicators of external quality attributes is increasing. We investigate through experimentation whether a collection of UML class diagram measures could be good predictors of two main subcharacteristics of the maintainability of class diagrams: understandability and modifiability. Results obtained from a controlled experiment and a replication support the idea that useful prediction models for class diagram understandability and modifiability can be built on the basis of early measures, in particular, measures that capture structural complexity through associations and generalizations. Moreover, these measures seem to be correlated with the subjective perception of the subjects about the complexity of the diagrams. This fact shows, to some extent, that the objective measures capture the same aspects as the subjective ones. However, despite our encouraging findings, further empirical studies, especially using data taken from real projects performed in industrial settings, are needed. Such further studies will yield a comprehensive body of knowledge and experience about building prediction models for understandability and modifiability.
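The kind of prediction model described here can be sketched as an ordinary least-squares fit of an understandability indicator against simple structural-complexity counts; the measure names, the linear form and the toy numbers below are invented for illustration and are not the authors' data or models.

```python
# Illustrative only: relate structural-complexity measures of class diagrams
# (number of associations, number of generalizations) to an understandability
# indicator such as comprehension time, via ordinary least squares.
# All numbers below are toy values, not experimental data.
import numpy as np

X = np.array([[3, 1], [5, 2], [8, 4], [12, 6], [15, 7]], dtype=float)  # [NAssoc, NGen] per diagram
y = np.array([40, 55, 80, 110, 130], dtype=float)                      # comprehension time (s)

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares fit
print("intercept, b_NAssoc, b_NGen:", coef)

new_diagram = np.array([1.0, 10.0, 3.0])         # intercept term, NAssoc=10, NGen=3
print("predicted comprehension time:", new_diagram @ coef)
```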
Marcela Genero is an Associate Professor in the Department of Information Systems and Technologies at the University of Castilla-La Mancha, Ciudad Real, Spain. She received her MSc degree in Computer Science from the University of South, Argentina, in 1989, and her PhD from the University of Castilla-La Mancha, Ciudad Real, Spain, in 2002. Her research interests include empirical software engineering, software metrics, conceptual data model quality, database quality, quality in product lines, and quality in MDD. She has published in prestigious journals (Journal of Software Maintenance and Evolution: Research and Practice, L’Objet, Data and Knowledge Engineering, Journal of Object Technology, Journal of Research and Practice in Information Technology) and conferences (CAISE, E/R, MODELS/UML, ISESE, OOIS, SEKE, etc.). She edited the books of Mario Piattini and Coral Calero titled “Data and Information Quality” (Kluwer, 2001) and “Metrics for Software Conceptual Models” (Imperial College, 2005). She is a member of ISERN.
M. Esperanza Manso is an Associate Professor in the Department of Computer Languages and Systems at the University of Valladolid, Valladolid, Spain. She received her MSc degree in Mathematics from the University of Valladolid and is currently working towards her PhD. Her main research interests are software maintenance, reengineering and reuse experimentation. She has authored several papers in conferences (OOIS, CAISE, METRICS, ISESE, etc.) and book chapters.
Corrado Aaron Visaggio is an Assistant Professor of Database and Software Testing at the University of Sannio, Italy. He obtained his PhD in Software Engineering at the University of Sannio and works as a researcher at the Research Centre on Software Technology in Benevento, Italy. His research interests include empirical software engineering, software security, and software process models. He serves on the Editorial Board of the e-Informatica Journal.
Gerardo Canfora is a Full Professor of Computer Science at the Faculty of Engineering and the Director of the Research Centre on Software Technology (RCOST) at the University of Sannio in Benevento, Italy. He serves on the program committees of a number of international conferences. He was a program co-chair of the 1997 International Workshop on Program Comprehension, the 2001 International Conference on Software Maintenance, the 2003 European Conference on Software Maintenance and Reengineering, and the 2005 International Workshop on Principles of Software Evolution. He was the General Chair of the 2003 European Conference on Software Maintenance and Reengineering and the 2006 Working Conference on Reverse Engineering. Currently, he is a program co-chair of the 2007 International Conference on Software Maintenance. His research interests include software maintenance and reverse engineering, service-oriented software engineering, and experimental software engineering. He was an associate editor of IEEE Transactions on Software Engineering and currently serves on the Editorial Board of the Journal of Software Maintenance and Evolution. He is a member of the IEEE Computer Society.
Mario Piattini holds MSc and PhD degrees in Computer Science from the Technical University of Madrid and is a Certified Information Systems Auditor by ISACA (Information Systems Audit and Control Association). He is a Full Professor in the Department of Information Systems and Technologies at the University of Castilla-La Mancha, Ciudad Real, Spain. He is the author of several books and papers on databases, software engineering and information systems, and he leads the ALARCOS research group at the University of Castilla-La Mancha.
6.
Assessing the influence of stereotypes on the comprehension of UML sequence diagrams: A family of experiments
José A. Cruz-Lemus Marcela Genero Danilo Caivano Silvia Abrahão Emilio Insfrán José A. Carsí 《Information and Software Technology》2011,53(12):1391-1403
Context: The conventional wisdom states that stereotypes are used to clarify or extend the meaning of model elements and consequently should be helpful in comprehending the diagram semantics.
Objective: The main goal of this work is to present a family of experiments that we have carried out to investigate whether the use of stereotypes improves the comprehension of UML sequence diagrams.
Method: The family of experiments consists of an experiment and two replications carried out with 78, 29 and 36 undergraduate Computer Science students, respectively. The comprehension of UML sequence diagrams with and without stereotypes was analyzed from three different perspectives borrowed from the Cognitive Theory of Multimedia Learning (CTML): semantic comprehension, retention and transfer. In addition, we carried out a meta-analysis study to integrate the different data samples.
Results: The statistical analysis and meta-analysis of the data obtained from each experiment separately indicate that the use of the proposed stereotypes helps improve the comprehension of the diagrams, especially when the subjects are not familiar with the domain.
Conclusions: The set of stereotypes presented in this work seems to be helpful for a better comprehension of UML sequence diagrams, especially for domains that are not well known. Although further research is necessary to strengthen these results, introducing these stereotypes in both academia and industry could be an interesting practice for checking the validity of the results.
7.
Context: The COSMIC functional size measurement method on UML diagrams has been investigated as a means to estimate software effort early in the software development life cycle. Like other functional size measurement methods, the COSMIC method takes into account the data movements in, for example, UML sequence diagrams, but does not consider the data manipulations in the control structure. This paper explores software sizing at a finer level of granularity by taking into account the structural aspect of a sequence diagram in order to quantify its structural size. These functional and structural sizes can then be used as distinct independent variables to improve effort estimation models.
Objective: The objective is to design an improved measurement of the size of UML sequence diagrams by taking into account the data manipulations represented by the structure of the sequence diagram, which will be referred to as their structural size.
Method: While the design of COSMIC defines the functional size of a functional process at a high level of granularity (i.e. the data movements), the structural size of a sequence diagram is defined at a finer level of granularity: the size of the flow graph of its control structure described through the alt, opt and loop constructs. This new measurement method was designed by following the process recommended in Software Metrics and Software Metrology (Abran, 2010).
Results: The size of sequence diagrams can now be measured from two perspectives, both functional and structural, and at different levels of granularity with distinct measurement units.
Conclusion: It is now feasible to measure the size of functional requirements at two levels of granularity: at an abstract level, the software functional size can be measured in terms of COSMIC Function Point (CFP) units; and at a detailed level, the software structural size can be measured in terms of Control Structure Manipulation (CSM) units. These measures represent complementary aspects of software size and can be used as distinct independent variables to improve effort estimation models.
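A rough sketch of what counting the control structure of a sequence diagram could look like; the nested-fragment representation and the one-unit-per-construct rule are our simplification, not the CSM measurement rules defined in the paper.

```python
# Illustrative only: count alt/opt/loop combined fragments in a toy, nested
# representation of a UML sequence diagram to approximate a "structural size".

def structural_size(fragment: dict) -> int:
    """Count control-structure constructs (alt, opt, loop) in a nested fragment tree."""
    size = 1 if fragment.get("kind") in {"alt", "opt", "loop"} else 0
    for child in fragment.get("children", []):
        size += structural_size(child)
    return size

diagram = {                       # hypothetical sequence diagram
    "kind": "sd",
    "children": [
        {"kind": "message", "children": []},
        {"kind": "alt", "children": [
            {"kind": "message", "children": []},
            {"kind": "loop", "children": [{"kind": "message", "children": []}]},
        ]},
        {"kind": "opt", "children": [{"kind": "message", "children": []}]},
    ],
}
print(structural_size(diagram))   # 3 constructs: one alt, one loop, one opt
```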
8.
Context: The Unified Modeling Language (UML) has become the de facto standard for software modeling. UML models are often used to visualize, understand, and communicate the structure and behavior of a system. UML activity diagrams (ADs) are often used to elaborate and visualize individual use cases. Due to their higher level of abstraction and process-oriented perspective, UML ADs are also highly suitable for model-based test generation. In the last two decades, different researchers have used UML ADs for test generation. Despite the growing use of UML ADs for model-based testing, there are currently no comprehensive and unbiased studies on the topic.
Objective: To present a comprehensive and unbiased overview of the state of the art on model-based testing using UML ADs.
Method: We review and structure the current body of knowledge on model-based testing using UML ADs by performing a systematic mapping study using well-known guidelines. We pose nine research questions, outline our selection criteria, and develop a classification scheme.
Results: The results comprise 41 primary studies analyzed against nine research questions. We also highlight the current trends and research gaps in model-based testing using UML ADs and discuss some shortcomings for researchers and practitioners working in this area. The results show that the existing approaches on model-based testing using UML ADs tend to rely on intermediate formats and formalisms for model verification and test generation, employ a multitude of graph-based coverage criteria, and use graph search algorithms.
Conclusion: We present a comprehensive overview of the existing approaches on model-based testing using UML ADs. We conclude that (1) UML ADs are not being used for non-functional testing, (2) only a few approaches have been validated against realistic, industrial case studies, (3) most approaches target very restricted application domains, and (4) there is currently a clear lack of holistic approaches for model-based testing using UML ADs.
9.
Packages are important high-level organizational units for large object-oriented systems. Package-level metrics characterize the attributes of packages such as size, complexity, and coupling. There is a need for empirical evidence to support the collection of these metrics and their use as early indicators of some important external software quality attributes. In this paper, three suites of package-level metrics (Martin, MOOD and CK) are evaluated and compared empirically in predicting the number of pre-release faults and the number of post-release faults in packages. Eclipse, one of the largest open source systems, is used as a case study. The results indicate that the prediction models based on the Martin suite are more accurate than those based on the MOOD and CK suites across releases of Eclipse.
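For reference, the Martin suite is built on a handful of well-known package-level formulas (instability, abstractness and distance from the main sequence); the sketch below restates those standard definitions with a hypothetical package, and is unrelated to the study's Eclipse data.

```python
# Standard Martin package metrics (Robert C. Martin):
#   instability   I = Ce / (Ca + Ce)
#   abstractness  A = abstract classes / total classes
#   distance      D = |A + I - 1|   (normalized distance from the main sequence)

def instability(ca: int, ce: int) -> float:
    return ce / (ca + ce) if (ca + ce) else 0.0

def abstractness(abstract_classes: int, total_classes: int) -> float:
    return abstract_classes / total_classes if total_classes else 0.0

def main_sequence_distance(a: float, i: float) -> float:
    return abs(a + i - 1.0)

# Hypothetical package: 4 incoming and 6 outgoing dependencies, 2 of 10 classes abstract.
i = instability(ca=4, ce=6)
a = abstractness(abstract_classes=2, total_classes=10)
print(i, a, main_sequence_distance(a, i))
```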
10.
11.
Context: UML and XML are two of the most commonly used languages in software engineering processes. One of the most critical of these processes is model evolution and maintenance. More specifically, when an XML schema is modified, the changes should be propagated to the corresponding XML documents, which must conform to the new, modified schema.
Objective: The goal of this paper is to provide an evolution framework by which the XML schema and documents are incrementally updated according to the changes in the conceptual model (expressed as a UML class model). In this framework, we include the transformation and evolution of UML profiles specified in UML class models, because they are widely used to capture domain-specific semantics.
Method: We have followed a metamodeling approach which allowed us to achieve a language-independent framework, not tied to the specific case of UML-XML. In addition, our proposal considers a traceability setting as a key aspect of the transformation process, which allows changes to be propagated from UML class models to both XML schemas and documents.
Results: As a general framework, we propose a Generic Evolution Architecture (GEA) for the model-driven engineering context. Within this architecture, and for the particular case of the UML-to-XML setting, our contribution is a UML-to-XML framework that, to our knowledge, is the only approach that incorporates the following four characteristics. Firstly, the evolution tasks are carried out in a conceptual model. Secondly, our approach includes the transformation to XML of UML profiles. Thirdly, the proposal allows stereotyped UML class models to be evolved, propagating changes to XML schemas and documents in such a way that the different elements are kept in sync. Finally, we propose a traceability setting that enables evolution tasks to be performed seamlessly.
Conclusions: Generic frameworks such as that proposed in this paper help to reduce the work overload experienced by software engineers in keeping different software artifacts synchronized.
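The traceability idea can be pictured with a toy change-propagation step: renaming a UML attribute and pushing the rename to the XML documents that trace to it. The trace-map structure, the rename operation and the assumption that element tags mirror attribute names are ours, not part of GEA.

```python
# Toy illustration: propagate a UML class-model change (renaming an attribute)
# to XML documents through a simple traceability map.
import xml.etree.ElementTree as ET

trace = {("Book", "title"): "title"}     # UML attribute -> XML element tag derived from it

def rename_attribute(owner: str, old: str, new: str, documents: list) -> None:
    """Rename a UML attribute and propagate the rename to the traced XML elements."""
    old_tag = trace.pop((owner, old))
    trace[(owner, new)] = new            # assume element tags mirror attribute names
    for doc in documents:
        for elem in list(doc.iter(old_tag)):
            elem.tag = new               # keep the documents in sync with the model

doc = ET.fromstring("<book><title>Refactoring</title></book>")
rename_attribute("Book", "title", "name", [doc])
print(ET.tostring(doc, encoding="unicode"))  # <book><name>Refactoring</name></book>
```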
12.
Pedro Sánchez Manuel Jiménez Francisca Rosique Bárbara Álvarez Andrés Iborra 《Journal of Systems and Software》2011,84(6):1008-1021
This article presents an integrated framework for the development of home automation systems following the model-driven approach. By executing model transformations the environment allows developers to generate executable code for specific platforms. The tools presented in this work help developers to model home automation systems by means of a domain specific language which is later transformed into code for home automation specific platforms. These transformations have been defined by means of graph grammars and template engines extended with traceability capabilities. Our framework also allows the models to be reused for different applications since a catalogue of requirements is provided. This framework enables the development of home automation applications with techniques for improving the quality of both the process and the models obtained. In order to evaluate the benefits of the approach, we conducted a survey among developers that used the framework. The analysis of the outcome of this survey shows which conditions should be fulfilled in order to increase reusability.
13.
A functional size measurement method for object-oriented conceptual schemas: design and evaluation issues
Functional Size Measurement (FSM) methods are intended to measure the size of software by quantifying the functional user requirements of the software. The capability to accurately quantify the size of software in an early stage of the development lifecycle is critical to software project managers for evaluating risks, developing project estimates and having early project indicators. In this paper, we present OO-Method Function Points (OOmFP), which is a new FSM method for object-oriented systems that is based on measuring conceptual schemas. OOmFP is presented following the steps of a process model for software measurement. Using this process model, we present the design of the measurement method, its application in a case study, and the analysis of different evaluation types that can be carried out to validate the method and to verify its application and results.
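At its most basic, a functional size measure of a conceptual schema boils down to counting schema elements and weighting the counts; the element kinds and weights below are invented for illustration and are not OOmFP's actual measurement rules.

```python
# Illustrative only: a function-point-style size obtained by weighting counts of
# conceptual-schema elements. Weights are invented; OOmFP defines its own rules.

ILLUSTRATIVE_WEIGHTS = {"class": 4, "attribute": 1, "association": 2, "service": 3}

def schema_size(counts: dict, weights: dict = ILLUSTRATIVE_WEIGHTS) -> int:
    """Weighted sum of schema element counts."""
    return sum(weights[kind] * n for kind, n in counts.items())

# Hypothetical schema: 5 classes, 23 attributes, 6 associations, 12 services.
print(schema_size({"class": 5, "attribute": 23, "association": 6, "service": 12}))
```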
14.
Theodore Andronikos Panayiotis Theodoropoulos George Papakonstantinou 《Journal of Systems and Software》2008,81(8):1389-1405
This paper describes Cronus, a platform for parallelizing general nested loops. General nested loops contain complex loop bodies (assignments, conditionals, repetitions) and exhibit uniform loop-carried dependencies. The novelty of Cronus is twofold: (1) it determines the optimal scheduling hyperplane using the QuickHull algorithm, which is more efficient than previously used methods, and (2) it implements a simple and efficient dynamic rule (successive dynamic scheduling) for the runtime scheduling of the loop iterations along the optimal hyperplane. This scheduling policy enhances data locality and improves the makespan. Cronus provides an efficient runtime library, specifically designed for communication minimization, that performs better than more generic systems, such as Berkeley UPC. Its performance was evaluated through extensive testing. Three representative case studies are examined: the Floyd-Steinberg dithering algorithm, the Transitive Closure algorithm, and the FSBM motion estimation algorithm. The experimental results corroborate the efficiency of the parallel code. The tests show speedups ranging from 1.18 (out of an ideal 4) to 12.29 (out of an ideal 16) on distributed systems and from 3.60 (out of 4) to 15.79 (out of 16) on shared-memory systems. Cronus outperforms UPC by 5-95% depending on the test case.
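The QuickHull step can be pictured with an off-the-shelf implementation: SciPy's ConvexHull wraps the Qhull library, so computing the convex hull of a set of (hypothetical) uniform dependence vectors is one call. How Cronus actually derives the optimal scheduling hyperplane from that hull is described in the paper; our reading that the hull is taken over the dependence vectors is an assumption.

```python
# Toy illustration: run Quickhull (via SciPy's Qhull wrapper) on a set of
# uniform dependence vectors of a nested loop. The vectors below are made up.
import numpy as np
from scipy.spatial import ConvexHull

deps = np.array([[1, 0], [0, 1], [1, 1], [2, 1], [1, 2]], dtype=float)  # hypothetical

hull = ConvexHull(deps)
print("hull vertices (extreme dependence vectors):")
print(hull.points[hull.vertices])
print("facet equations a*x + b*y + c <= 0:")
print(hull.equations)
```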
15.
In this work, we propose a novel hybrid fuzzy-sliding mode observer designed in such a manner that it can be used to estimate various parameters by simply using the related process model, without redesigning the structure of the whole observer. The performance and effectiveness of this hybrid observer are shown through numerical simulation based on a case study involving an ethylene polymerization process, in which the ethylene and butene concentrations in the reactor as well as the melt flow index are estimated. It can be concluded that the proposed hybrid observer provides fast estimation with a high rate of accuracy even in the presence of disturbances and noise in the model. This hybrid observer is also compared with the sliding mode, extended Luenberger and proportional sliding mode observers to highlight its effectiveness and advantages over these observers.
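For readers unfamiliar with the sliding-mode ingredient, the sketch below simulates a plain first-order sliding-mode observer for a scalar linear plant; the plant, gains and Euler discretization are generic textbook choices and are not the hybrid fuzzy-sliding observer or the polymerization model used in the paper.

```python
# Illustrative only: a first-order sliding-mode observer for the scalar plant
#   x' = a*x + b*u,  y = x
# with observer dynamics
#   x_hat' = a*x_hat + b*u + L*sign(y - x_hat).
import numpy as np

a, b, L = -0.5, 1.0, 2.0       # plant parameter, input gain, observer switching gain
dt, steps = 0.01, 1000

x, x_hat = 1.0, 0.0            # true state and observer state start apart
for k in range(steps):
    u = np.sin(0.01 * k)       # arbitrary known input
    y = x                      # measured output
    x     += dt * (a * x + b * u)
    x_hat += dt * (a * x_hat + b * u + L * np.sign(y - x_hat))  # switching correction

print("final estimation error:", abs(x - x_hat))  # small: error settles into a chattering band
```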
16.
Alfredo Capozucca Alexander Romanovsky 《Journal of Systems and Software》2009,82(2):207-228
This paper presents ways of implementing dependable distributed applications designed using the Coordinated Atomic Action (CAA) paradigm. CAAs provide a coherent set of concepts adapted to fault-tolerant distributed system design that includes structured transactions, distribution, cooperation, competition, and forward and backward error recovery mechanisms triggered by exceptions. DRIP (Dependable Remote Interacting Processes) is an efficient Java implementation framework which provides support for implementing Dependable Multiparty Interactions (DMI). As DMIs have a softer exception handling semantics compared with the CAA semantics, a CAA design can be implemented using the DRIP framework. A new framework called CAA-DRIP allows programmers to exclusively implement the semantics of CAAs using the same terminology and concepts at the design and implementation levels. The new framework not only simplifies the implementation phase, but also reduces the final system size, as it requires fewer instances to create a CAA at runtime. The paper analyses both implementation frameworks in great detail, drawing a systematic comparison of the two. The CAA behaviour is described in terms of Statecharts to better understand the differences between the two frameworks. Based on the results of the comparison, we use one of the frameworks to implement a case study belonging to the e-health domain.
17.
Sequential diagnostic strategy (SDS) is widely used in engineering systems for fault isolation. In order to find source faults efficiently, an optimized SDS selects the most useful tests and schedules them in an optimized sequence. In this paper, a multiple-objective mathematical model for the SDS optimization problem in large-scale engineering systems is established, and correspondingly, a quantum-inspired genetic algorithm (QGA) specially targeted at this SDS optimization problem is developed. The QGA uses the probability-amplitude form of quantum bits to encode each possible diagnostic strategy extracted from the fault-test dependency matrix, and then goes through an evolutionary process to find the optimal strategy considering the dual objectives of expected testing cost and number of contributing tests. Crossover and mutation operations are combined with the quantum encoding in this algorithm to expand the diversity of the population within a small population size and to increase the possibility of obtaining the global optimum. A real-world case of a control moment gyro system is used to verify the effectiveness of the algorithm, and a comparative study with two conventional intelligent optimization algorithms proposed for this problem, particle swarm optimization (PSO) and a genetic algorithm, is presented to reveal its advantages.
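The quantum-bit encoding and rotation update that QGAs are generally built on can be sketched as follows; the rotation angle, the toy fitness and the observation step are generic QGA ingredients (the paper's algorithm additionally combines crossover and mutation with the encoding and optimizes testing cost and test count), so treat this as an assumption-laden illustration.

```python
# Illustrative only: generic quantum-inspired GA ingredients. Each gene is a
# qubit parameterized by an angle theta, with alpha = cos(theta), beta = sin(theta)
# and alpha^2 + beta^2 = 1. "Observing" a qubit yields bit 1 with probability
# beta^2, and a rotation gate nudges the population toward the best individual
# observed so far. The fitness here is a toy one-max objective.
import numpy as np

rng = np.random.default_rng(0)
n_genes, pop_size = 8, 10
delta = 0.05 * np.pi                              # rotation-angle step (assumed value)
theta = np.full((pop_size, n_genes), np.pi / 4)   # start with alpha = beta = 1/sqrt(2)

def observe(theta: np.ndarray) -> np.ndarray:
    """Collapse each qubit to 0/1 with P(1) = sin(theta)^2."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def fitness(bits: np.ndarray) -> np.ndarray:
    return bits.sum(axis=1)                       # toy objective: maximize the number of 1s

best = None
for _ in range(50):
    pop = observe(theta)
    scores = fitness(pop)
    if best is None or scores.max() > fitness(best[None, :])[0]:
        best = pop[scores.argmax()].copy()
    # Rotate every qubit toward the corresponding bit of the best individual.
    theta += delta * np.where(best == 1, 1.0, -1.0)
    theta = np.clip(theta, 0.01, np.pi / 2 - 0.01)

print("best bit string:", best, "fitness:", int(fitness(best[None, :])[0]))
```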
18.
Alessandro Jatoba Amauri M. da Cunha Mario C. R. Vidal Catherine M. Burns Paulo V. R. de Carvalho 《Human Factors and Ergonomics in Manufacturing & Service Industries》2019,29(1):63-77
This paper presents a case study on the use of human factors and ergonomics to enhance requirements specifications for complex sociotechnical system support tools, by improving the understanding of human performance within the business domain and by indicating high-value candidate requirements for information technology support. The work uses methods based on cognitive engineering to build representations of the business domain, highlighting workers’ needs and contributing to the improvement of software requirements specifications in the healthcare domain. As the human factors discipline fits between the human sciences and technology design, we believe that its concepts can be combined with software engineering to improve the understanding of how people work, enabling the design of better information technology.