Similar articles
1.

Context

Specification matching techniques are crucial for effective retrieval processes. Despite the prevalence of object-oriented methodologies, little attention has been given to matching Unified Modeling Language (UML) specifications.

Objective

This paper presents a two-stage framework for matching two UML specifications and quantifying the results based on the systematic integration of their structural and behavioral similarities in order to identify the candidate component set for reuse.

Method

The first stage in the framework is an evaluation of the similarities between UML class diagrams using the Structure-Mapping Engine (SME), a simulation of the analogical reasoning approach known as the structure-mapping theory. The second stage, performed on the components identified in the first stage, is based on a graph-similarity scoring algorithm in which UML class diagrams and sequence diagrams are transformed into an SME representation and a Message-Object-Order Graph (MOOG). The effectiveness of the proposed framework was evaluated using a case study.
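As a rough illustration of the kind of graph-similarity scoring the second stage relies on, the sketch below iteratively propagates node-to-node similarity between two small directed graphs standing in for message-order graphs. The scheme, the graphs, and the node names are assumptions for illustration only, not the paper's MOOG-based algorithm.

```python
# Minimal sketch of a neighbour-matching graph-similarity score; illustrative
# only, not the paper's exact algorithm. Graphs are dicts: node -> set of
# successor nodes, and every successor must also appear as a key.

def similarity_scores(g1, g2, iterations=10):
    """Iteratively propagate node-to-node similarity between two digraphs."""
    nodes1, nodes2 = list(g1), list(g2)
    sim = {(a, b): 1.0 for a in nodes1 for b in nodes2}  # optimistic start
    for _ in range(iterations):
        new_sim = {}
        for a in nodes1:
            for b in nodes2:
                succ_a, succ_b = g1[a], g2[b]
                if not succ_a and not succ_b:
                    new_sim[(a, b)] = 1.0
                    continue
                # Greedy matching of successors by current similarity.
                total, used = 0.0, set()
                for x in succ_a:
                    best, best_y = 0.0, None
                    for y in succ_b - used:
                        if sim[(x, y)] > best:
                            best, best_y = sim[(x, y)], y
                    if best_y is not None:
                        used.add(best_y)
                        total += best
                new_sim[(a, b)] = total / max(len(succ_a), len(succ_b))
        sim = new_sim
    return sim

def graph_score(g1, g2):
    """Overall score: average of the best match for each node of g1."""
    sim = similarity_scores(g1, g2)
    return sum(max(sim[(a, b)] for b in g2) for a in g1) / len(g1)

# Hypothetical message-order graphs for two sequence diagrams.
moog_query = {"login": {"validate"}, "validate": {"fetch"}, "fetch": set()}
moog_candidate = {"signin": {"check"}, "check": {"load"}, "load": set()}
print(round(graph_score(moog_query, moog_candidate), 3))
```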

Results

The experimental results showed a reduction in potential mismatches and an overall high precision and recall.

Conclusion

It is concluded that the two-stage framework is capable of more precise matching than single-stage matching frameworks. Moreover, the two-stage framework can be used within a reuse process without requiring extra information to retrieve the components described in UML.

2.
Due to the increase in XML-based applications, XML schema design has become an important task. One approach is to use conceptual schemas as a basis for generating XML documents that comply with the consensual information of specific domains. However, converting conceptual schemas to XML schemas is not a straightforward process, and poor design decisions can lead to inefficient query processing over the generated XML documents. This paper presents a conversion approach that considers the data and query workload estimated for an XML application in order to generate an XML schema from a conceptual schema. Workload information is used to produce XML schemas that respond well to the main queries of the application. We evaluate our approach through a case study carried out on a native XML database. The experimental results demonstrate that the XML schemas generated by our methodology yield better query performance than those of related approaches.
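To make the underlying design decision concrete, the sketch below chooses between nesting a related entity inside its parent element or referencing it by key, based on an estimated query workload, and emits a small XSD fragment. The entity names, frequencies, and threshold are hypothetical and do not reproduce the paper's actual conversion rules.

```python
# Illustrative sketch (not the paper's conversion rules): decide nesting vs.
# key-based referencing from an estimated workload, then emit an XSD fragment.

def xml_design(traversal_freq, child_update_freq, threshold=0.5):
    """Nest the child when the relationship is read far more often than updated."""
    if traversal_freq >= threshold and traversal_freq > child_update_freq:
        return "nest"
    return "reference"

def xsd_fragment(parent, child, decision):
    if decision == "nest":
        child_decl = f'<xs:element name="{child}" maxOccurs="unbounded"/>'
    else:
        child_decl = f'<xs:element name="{child}Ref" type="xs:IDREF" maxOccurs="unbounded"/>'
    return (f'<xs:element name="{parent}">\n'
            f'  <xs:complexType><xs:sequence>\n'
            f'    {child_decl}\n'
            f'  </xs:sequence></xs:complexType>\n'
            f'</xs:element>')

# Example: orders are usually queried together with their items.
decision = xml_design(traversal_freq=0.8, child_update_freq=0.1)
print(xsd_fragment("order", "item", decision))
```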

3.

Context

The constant changes in today's business requirements demand continuous database revisions. Hence, database structures, like software applications, deteriorate over time and thus require refactoring in order to extend their life span. Although unit tests support changes to application programs and refactoring, there is currently a lack of testing strategies for database schema evolution.

Objective

This work examines the challenges of database schema evolution and explores the possibility of using various testing strategies to assist with schema evolution. Specifically, it proposes a novel unit-test approach for application code that accesses databases, with the objective of proactively evaluating that code against the altered database.

Method

The approach was validated through the implementation of a testing framework in conjunction with a sample application and a relatively simple database schema. Although the database schema in this study was simple, the study was nevertheless able to demonstrate the advantages of the proposed approach.
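A minimal sketch of the idea, assuming SQLite and the standard unittest module, is shown below: each SQL statement used by the application is compiled against the evolved schema, and statements that no longer fit the schema are reported. The schema and statements are hypothetical; this is not the paper's actual testing framework.

```python
# A minimal sketch of proactively checking application SQL against an evolved
# schema; the schema and statements below are hypothetical examples.
import sqlite3
import unittest

EVOLVED_SCHEMA = """
CREATE TABLE customer (id INTEGER PRIMARY KEY, full_name TEXT, email TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
"""

# SQL statements as they appear in the application code under test.
APPLICATION_STATEMENTS = [
    "SELECT full_name, email FROM customer WHERE id = ?",
    "SELECT name FROM customer WHERE id = ?",          # stale: column renamed
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
]

class SchemaEvolutionTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.executescript(EVOLVED_SCHEMA)

    def test_statements_still_compile(self):
        failures = []
        for sql in APPLICATION_STATEMENTS:
            try:
                # EXPLAIN forces the statement to be compiled without running it.
                self.conn.execute("EXPLAIN " + sql, ("x",) * sql.count("?"))
            except sqlite3.OperationalError as err:
                failures.append(f"{sql!r}: {err}")
        self.assertEqual(failures, [], "\n".join(failures))

if __name__ == "__main__":
    unittest.main()
```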

Results

After changes in the database schema, the proposed approach found all SELECT statements, as well as the majority of other statements, requiring modifications in the application code. Due to its effectiveness with SELECT statements, the proposed approach is expected to be even more successful with data warehouse applications, where SELECT statements are dominant.

Conclusion

The unit-test approach for code that accesses databases has proven successful in evaluating the application code against the evolved database. In particular, the approach is simple and straightforward to implement, which makes it easily adoptable in practice.

4.

Context

Data warehouse conceptual design is based on the metaphor of the cube, which can be derived through either requirement-driven or data-driven methodologies. Each methodology has its own advantages. The first allows designers to obtain a conceptual schema very close to user needs, but the schema may not be supported by the data actually available. The second, in contrast, ensures traceability and consistency with the data sources—indeed, it guarantees that the data needed for analytical processing are present—but does not protect against missing business user needs. To address this issue, the need to define hybrid methodologies for conceptual design has emerged in recent years.

Objective

The objective of this paper is to define a hybrid methodology based on different multidimensional models, in order to combine the advantages of each.

Method

The proposed methodology applies the requirement-driven strategy first and the data-driven strategy second, possibly altering functional dependencies on UML multidimensional schemas that have been reconciled with the data sources.

Results

As a case study, we illustrate how our methodology can be applied in a university environment. Furthermore, we quantitatively evaluate the benefits of the methodology by comparing it with some popular conventional methodologies.

Conclusion

We highlight how the hybrid methodology improves conceptual schema quality. Finally, we outline our ongoing work on introducing automatic design techniques into the methodology on the basis of logic programming.

5.
Towards an ontology-based retrieval of UML Class Diagrams

Context

Software reuse has always been an important concern for software companies seeking to increase their productivity and the quality of their products, but code reuse is not the only answer. Nowadays, reuse techniques also cover software designs and even software specifications. This research therefore focuses on software design, specifically on UML Class Diagrams, and applies semantic technology to facilitate the retrieval process for effective reuse.

Objective

This research proposes an ontology-based retrieval technique that uses semantic similarity to support an effective retrieval process for UML Class Diagrams. Since UML Class Diagrams are a de facto standard in the design stage of a software development process, a good technique is needed to reuse them, i.e. to reuse at the design stage rather than only at the coding stage.

Method

An application ontology modeled using UML specifications was designed to compare UML Class Diagram element types. To measure their similarity, a survey was conducted among UML experts. Query expansion was improved by a domain ontology supporting the retrieval phase. The computation of minimal distances in the ontologies was solved using a shortest-path algorithm.
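The distance computation can be illustrated with a small sketch: a breadth-first search finds the shortest path between two concepts in a toy ontology graph, and the distance is mapped to a similarity score. The concepts, edges, and scoring formula are assumptions for illustration, not the paper's application ontology or its survey-derived weights.

```python
# A minimal sketch, assuming a toy ontology graph, of shortest-path-based
# semantic similarity between two concepts.
from collections import deque

ONTOLOGY = {  # undirected "is-related-to" edges between UML element types
    "Class": {"Interface", "Attribute", "Operation"},
    "Interface": {"Class", "Operation"},
    "Attribute": {"Class"},
    "Operation": {"Class", "Interface", "Parameter"},
    "Parameter": {"Operation"},
}

def shortest_distance(graph, source, target):
    """Breadth-first search; returns the number of edges on a shortest path."""
    if source == target:
        return 0
    seen, queue = {source}, deque([(source, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # not connected

def semantic_similarity(graph, a, b):
    """Map distance to (0, 1]: identical concepts score 1, distant ones less."""
    dist = shortest_distance(graph, a, b)
    return 0.0 if dist is None else 1.0 / (1.0 + dist)

print(semantic_similarity(ONTOLOGY, "Attribute", "Parameter"))  # 1/(1+3) = 0.25
```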

Results

The case study shows the importance of the domain ontology in the UML Class Diagram retrieval process, as well as the importance of an element-type expansion method such as the application ontology. By analyzing the results, a correlation between query complexity and the retrieved elements was identified. Finally, a positive Return on Investment (ROI) was estimated using Poulin's model.

Conclusion

Because software reuse should not be limited to the coding stage, approaches to reuse at the design stage, i.e. UML Class Diagram reuse, must be developed. This work proposes a technique for UML Class Diagram retrieval, which is an important step towards such reuse. Semantic technology combined with information retrieval improves the retrieval results.

6.

Context

Software development now faces more challenges than ever before due to its intrinsically high complexity and the increasing demands of the quick-service-ready paradigm.

Objective

Although industry calls on developers for higher-quality software systems, software engineering methodologies and standards provide insufficient guidance for the rapid development of quality business software.

Method

In this work, we discuss the advantages of pattern-based software development. We verify the benefits using a pattern-based software framework called OS2F and a corresponding system design architecture intended for the rapid development of web applications.

Results

The objective of the framework/architecture is that, through software patterns, developers should be able to separate the work of system development from the business rules, so as to reduce the problems caused by a developer's lack of business experience.

Conclusion

Through a suitable pattern-based software framework, the quality of the product can be enhanced, software development time and cost decreased, and the robustness of software evolution improved.

7.
8.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a basis for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to identify predictors of the yearly cumulative code churn of software projects, on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.
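As an illustration of how such a model can be built, the sketch below fits an ordinary least-squares churn estimator from a mix of code and organisational metrics using NumPy. The metric values are hypothetical and are not the dataset extracted from the eight projects.

```python
# A minimal sketch, assuming NumPy, of fitting a churn-estimation model; the
# features and values below are hypothetical, not the paper's dataset.
import numpy as np

# One row per project-year: [kSLOC, #classes, #committers, team turnover]
features = np.array([
    [120,  850, 14, 0.10],
    [ 45,  310,  6, 0.05],
    [200, 1400, 25, 0.20],
    [ 80,  560, 10, 0.12],
    [150,  990, 18, 0.08],
])
# Observed yearly cumulative code churn (changed LOC, in thousands).
churn = np.array([34.0, 9.5, 71.0, 22.0, 38.5])

# Ordinary least squares with an intercept column.
X = np.hstack([features, np.ones((features.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(X, churn, rcond=None)

new_project = np.array([100, 700, 12, 0.15, 1.0])  # same feature order + intercept
print("estimated churn (kLOC):", round(float(new_project @ coeffs), 1))
```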

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.

9.

Context

Staff turnover in organizations is an important issue that should be taken into account mainly for two reasons:
1. Employees carry an organization's knowledge in their heads and take it with them wherever they go.
2. Knowledge accessibility is limited to the amount of knowledge employees want to share.

Objective

The aim of this work is to provide a set of guidelines to develop knowledge-based Process Asset Libraries (PAL) to store software engineering best practices, implemented as a wiki.

Method

Fieldwork was carried out in a 2-year training course in agile development. This was validated in two phases (with and without PAL), which were subdivided into two stages: Training and Project.

Results

The study demonstrates that, on the one hand, the learning process can be facilitated using PAL to transfer software process knowledge, and on the other hand, products were developed by junior software engineers with a greater degree of independence.

Conclusion

PAL, as a knowledge repository, helps software engineers to learn about development processes and improves the use of agile processes.

10.
We investigate the typechecking problem for XML transformations: statically verifying that every answer to a transformation conforms to a given output schema, for inputs satisfying a given input schema. As typechecking quickly turns undecidable for query languages capable of testing equality of data values, we return to the limited framework where we abstract XML documents as labeled ordered trees. We focus on simple top-down recursive transformations motivated by XSLT and structural recursion on trees. We parameterize the problem by several restrictions on the transformations (deleting, non-deleting, bounded width) and consider both tree automata and DTDs as input and output schemas. The complexity of the typechecking problems in this scenario ranges from PTIME to EXPTIME.
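The setting can be illustrated with a small sketch: a top-down structural recursion over labeled ordered trees that relabels and deletes nodes, followed by a naive conformance check against a toy schema mapping each label to its allowed child labels. The rules and labels are hypothetical, and the check is a runtime test on one tree, not the static typechecking algorithm studied in the paper.

```python
# A minimal sketch of a top-down structural recursion on labeled ordered trees,
# plus a naive conformance check against a toy DTD-like schema.

def transform(label, children):
    """Top-down transformation: rename 'item' to 'entry', delete 'comment'."""
    out_children = []
    for child_label, child_children in children:
        out_children.extend(transform(child_label, child_children))
    if label == "comment":          # deleting rule: drop the node and its subtree
        return []
    if label == "item":             # relabelling rule
        return [("entry", out_children)]
    return [(label, out_children)]

# Toy output schema: each label maps to the set of labels allowed as children.
OUTPUT_SCHEMA = {"list": {"entry"}, "entry": set()}

def conforms(label, children, schema):
    allowed = schema.get(label, set())
    return all(cl in allowed and conforms(cl, cc, schema)
               for cl, cc in children)

doc = ("list", [("item", []), ("comment", []), ("item", [])])
(out_label, out_children), = transform(*doc)
print(out_label, out_children)                           # list with two 'entry' nodes
print(conforms(out_label, out_children, OUTPUT_SCHEMA))  # True for this input
```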

11.

Context

During development, managers, analysts and designers often need to know whether enough requirements analysis work has been done and whether or not it is safe to proceed to the design stage.

Objective

This paper describes a new, simple and practical method for assessing our confidence in a set of requirements.

Method

We identified four confidence factors and used a goal-oriented framework with a simple ordinal scale to develop a method for assessing confidence. We illustrate the method and show how it has been applied to a real systems development project.

Results

We show how assessing confidence in the requirements could have revealed problems in this project earlier and so saved both time and money.

Conclusion

Our meta-level assessment of requirements provides a practical and pragmatic method that can prove useful to managers, analysts and designers who need to know when sufficient requirements analysis has been performed.

12.
13.

Context

An optimal software development process is regarded as being dependent on the situational characteristics of individual software development settings. Such characteristics include the nature of the application(s) under development, team size, requirements volatility and personnel experience. However, no comprehensive reference framework of the situational factors affecting the software development process is presently available.

Objective

The absence of such a comprehensive reference framework of the situational factors affecting the software development process is problematic not just because it inhibits our ability to optimise the software development process, but perhaps more importantly, because it potentially undermines our capacity to ascertain the key constraints and characteristics of a software development setting.

Method

To address this deficiency, we have consolidated a substantial body of related research into an initial reference framework of the situational factors affecting the software development process. To support the data consolidation, we have applied rigorous data coding techniques from Grounded Theory and we believe that the resulting framework represents an important contribution to the software engineering field of knowledge.

Results

The resulting reference framework of situational factors consists of eight classifications and 44 factors that inform the software process. We believe that the situational factor reference framework presented herein represents a sound initial reference framework for the key situational elements affecting the software process definition.

Conclusion

In addition to providing a useful reference listing for the research community and for committees engaged in the development of standards, the reference framework also provides support for practitioners who are challenged with defining and maintaining software development processes. Furthermore, this framework can be used to develop a profile of the situational characteristics of a software development setting, which in turn provides a sound foundation for software development process definition and optimisation.

14.

Context

In the long run, features of a software product line (SPL) evolve with respect to changes in stakeholder requirements and system contexts. Neither domain engineering nor requirements engineering handles such co-evolution of requirements and contexts explicitly, making it especially hard to reason about the impact of co-changes in complex scenarios.

Objective

In this paper, we propose a problem-oriented and value-based analysis method for variability evolution analysis. The method takes into account both kinds of changes (requirements and contexts) during the life of an evolving software product line.

Method

The proposed method extends the core requirements engineering ontology with notions for representing variability-intensive problem decomposition and evolution. On the basis of problem orientation, the analysis method identifies candidate changes, detects the features they influence, and evaluates their contributions to the value of the SPL.

Results and Conclusion

The process of applying the analysis method is illustrated using a concrete case study of an evolving enterprise software system, which has confirmed that tracing back to requirements and contextual changes is an effective way to understand the evolution of variability in the software product line.

15.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).
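For readers unfamiliar with how such an interval is obtained, the sketch below computes a difference in means and a Welch 95% confidence interval from per-subject development times using NumPy and SciPy. The hours are hypothetical and are not the study's raw data.

```python
# A worked sketch of a mean difference with a Welch 95% confidence interval;
# the development times below are hypothetical.
import numpy as np
from scipy import stats

mpi_hours = np.array([12.5, 9.0, 14.0, 11.0, 10.5, 13.0])
xmtc_hours = np.array([6.0, 7.5, 5.0, 6.5, 8.0, 5.5])

diff = mpi_hours.mean() - xmtc_hours.mean()
v1 = mpi_hours.var(ddof=1) / len(mpi_hours)
v2 = xmtc_hours.var(ddof=1) / len(xmtc_hours)
se = np.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom for unequal variances.
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(mpi_hours) - 1) + v2 ** 2 / (len(xmtc_hours) - 1))
t_crit = stats.t.ppf(0.975, df)

print(f"mean difference: {diff:.1f} h, "
      f"95% CI: {diff - t_crit * se:.1f} to {diff + t_crit * se:.1f} h")
print("two-sided p-value:", stats.ttest_ind(mpi_hours, xmtc_hours, equal_var=False).pvalue)
```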

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies that examine different types of problems and different levels of programmer experience are necessary.

16.
Improving the applicability of object-oriented class cohesion metrics

Context

Class cohesion is an important object-oriented quality attribute. It refers to the degree of relatedness between the methods and attributes of a class. Several metrics have been proposed to measure the extent to which the class members are related. Most of these metrics have undefined values for a relatively high percentage of classes, which limits their applicability. The classes that have undefined values lack methods, attributes, or parameter types, or they include only a single method.

Objective

We improve the applicability of the class cohesion metrics by defining their values for such special classes. In addition, we theoretically and empirically validate the improved metrics.

Method

We theoretically examine whether the defined values satisfy the key cohesion properties. In addition, we empirically validate the metrics before and after the improvements to test whether the defined values improve the ability of the metrics to evaluate class cohesion. We also explore the correlation between the metrics and the presence of faulty classes to indirectly determine the strength or weakness of the metrics in indicating class quality.
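The idea of assigning values to otherwise undefined cases can be illustrated with one simple attribute-sharing cohesion measure; the convention chosen for the special cases below is an assumption for illustration and is not the paper's metric set or its validated values.

```python
# A minimal sketch of an attribute-sharing cohesion measure with values
# explicitly defined for the special cases (no attributes, fewer than two
# methods); illustrative only.
from itertools import combinations

def cohesion(methods):
    """methods: dict mapping method name -> set of attributes it accesses."""
    names = list(methods)
    attrs = set().union(*methods.values()) if methods else set()
    # Special cases that many published metrics leave undefined:
    if len(names) <= 1 or not attrs:
        # A single-method or attribute-free class is treated as fully cohesive
        # here purely for illustration; other conventions are possible.
        return 1.0
    pairs = list(combinations(names, 2))
    sharing = sum(1 for a, b in pairs if methods[a] & methods[b])
    return sharing / len(pairs)

print(cohesion({"getX": {"x"}, "setX": {"x"}, "reset": {"x", "y"}}))  # 1.0
print(cohesion({"getX": {"x"}, "getY": {"y"}}))                       # 0.0
print(cohesion({"helper": set()}))                                    # defined: 1.0
```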

Results

The results show that our assigned values for the undefined cases do not violate the key cohesion properties and considerably improve the ability of the metrics to explain the presence of faulty classes and may therefore improve their ability to indicate the quality of the class design.

Conclusions

Having the class cohesion metrics defined for all possible cases improves the applicability of the metrics and potentially increases their precision in indicating class quality.

17.
XSLT processing is widely used to transform XML documents into target XML documents that conform to the output schema of the applied XSLT stylesheet. Output schemas of XSLT stylesheets can be used for static analysis of the stylesheet, for automatically detecting the stylesheet that produced given target XML documents, or for reasoning about the output schema without access to the target XML documents themselves. In this paper, we develop an approach for automatically determining the output schema of an XSLT stylesheet, and we describe several application scenarios for output schemas. The experimental evaluation shows that our prototype can determine the output schemas of nearly all typical XSLT stylesheets, and that using output schemas improves precision in several application scenarios compared to not using them.
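A very rough approximation of the idea can be sketched by scanning a stylesheet for the element names it can emit, i.e. literal result elements and statically named xsl:element instructions; this ignores templates, modes, and content models, and is not the paper's analysis. The stylesheet below is a hypothetical example.

```python
# A rough illustrative sketch: approximate the output vocabulary of an XSLT
# stylesheet by collecting literal result elements and static xsl:element names.
import xml.etree.ElementTree as ET

XSL_NS = "http://www.w3.org/1999/XSL/Transform"

STYLESHEET = """<xsl:stylesheet version="1.0" xmlns:xsl="{ns}">
  <xsl:template match="/catalog">
    <books><xsl:apply-templates select="item"/></books>
  </xsl:template>
  <xsl:template match="item">
    <book><xsl:element name="title"><xsl:value-of select="name"/></xsl:element></book>
  </xsl:template>
</xsl:stylesheet>""".format(ns=XSL_NS)

def output_element_names(stylesheet_text):
    root = ET.fromstring(stylesheet_text)
    names = set()
    for elem in root.iter():
        if elem.tag.startswith("{" + XSL_NS + "}"):
            # xsl:element with a static name also contributes an output element.
            if elem.tag.endswith("}element") and "name" in elem.attrib:
                names.add(elem.attrib["name"])
        else:
            names.add(elem.tag)  # literal result element
    return sorted(names)

print(output_element_names(STYLESHEET))  # ['book', 'books', 'title']
```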

18.
19.
Modeling process-related RBAC models with extended UML activity models

Context

Business processes are an important source for the engineering of customized software systems and are constantly gaining attention in the area of software engineering as well as in the area of information and system security. While the need to integrate processes and role-based access control (RBAC) models has been repeatedly identified in research and practice, standard process modeling languages do not provide corresponding language elements.

Objective

In this paper, we are concerned with the definition of an integrated approach for modeling processes and process-related RBAC models - including roles, role hierarchies, statically and dynamically mutually exclusive tasks, and binding-of-duty constraints on tasks.

Method

We specify a formal metamodel for process-related RBAC models. Based on this formal model, we define a domain-specific extension for a standard modeling language.

Results

Our formal metamodel is generic and can be used to extend arbitrary process modeling languages. To demonstrate our approach, we present a corresponding extension for UML2 activity models, called Business Activities. Moreover, we implemented a library and runtime engine that can manage Business Activity runtime models and enforce the different policies and constraints in a software system.
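A toy sketch of how two of the named constraint types might be checked when the tasks of one process instance are allocated to subjects is given below; the roles, tasks, and rules are hypothetical, and the code is not the Business Activities library itself.

```python
# A toy sketch of checking role authorization, dynamic mutual exclusion, and
# binding of duty for one process instance; all names and rules are hypothetical.

ROLE_ASSIGNMENTS = {"alice": {"clerk"}, "bob": {"manager"}}
TASK_ROLES = {"prepare_payment": {"clerk"}, "approve_payment": {"manager"},
              "record_payment": {"clerk"}}

# Dynamically mutually exclusive: never the same subject within one instance.
DYNAMIC_MUTUAL_EXCLUSION = [("prepare_payment", "approve_payment")]
# Binding of duty: always the same subject within one instance.
BINDING_OF_DUTY = [("prepare_payment", "record_payment")]

def check_allocation(allocation):
    """allocation: dict task -> subject for one instance; returns violations."""
    violations = []
    for task, subject in allocation.items():
        if not (ROLE_ASSIGNMENTS.get(subject, set()) & TASK_ROLES[task]):
            violations.append(f"{subject} holds no role permitted for {task}")
    for t1, t2 in DYNAMIC_MUTUAL_EXCLUSION:
        if t1 in allocation and allocation.get(t1) == allocation.get(t2):
            violations.append(f"mutual exclusion violated: {t1} / {t2}")
    for t1, t2 in BINDING_OF_DUTY:
        if t1 in allocation and t2 in allocation and allocation[t1] != allocation[t2]:
            violations.append(f"binding of duty violated: {t1} / {t2}")
    return violations

print(check_allocation({"prepare_payment": "alice", "approve_payment": "bob",
                        "record_payment": "alice"}))   # [] -> no violations
print(check_allocation({"prepare_payment": "alice",
                        "approve_payment": "alice"}))  # role + exclusion violations
```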

Conclusion

The definition of process-related RBAC models at the modeling level is an important prerequisite for the thorough implementation and enforcement of corresponding policies and constraints in a software system. We identified the need for modeling support of process-related RBAC models from our experience in real-world role engineering projects and case studies. The Business Activities approach presented in this paper has been successfully applied in role engineering projects.

20.

Context

In recent years, many software companies have considered Software Process Improvement (SPI) as essential for successful software development. These companies have also shown special interest in IT Service Management (ITSM). SPI standards have evolved to incorporate ITSM best practices.

Objective

This paper presents a systematic literature review of ITSM Process Improvement initiatives based on the ISO/IEC 15504 standard for process assessment and improvement.

Method

A systematic literature review based on the guidelines proposed by Kitchenham and the review protocol template developed by Biolchini et al. is performed.

Results

Twenty-eight relevant studies related to ITSM Process Improvement have been found. From the analysis of these studies, nine different ITSM Process Improvement initiatives have been detected. Seven of these initiatives use ISO/IEC 15504 conformant process assessment methods.

Conclusion

Over the last decade, in order to satisfy the ongoing demand from mature software development companies for assessing and improving ITSM processes, different models that use the measurement framework of ISO/IEC 15504 have been developed. However, it is still necessary to define a method, with the necessary guidelines, for implementing both software development processes and ITSM processes with reduced effort, especially because some processes in the two categories overlap.
