Similar Literature
1.
Improving the applicability of object-oriented class cohesion metrics

Context

Class cohesion is an important object-oriented quality attribute. It refers to the degree of relatedness between the methods and attributes of a class. Several metrics have been proposed to measure the extent to which the class members are related. Most of these metrics have undefined values for a relatively high percentage of classes, which limits their applicability. The classes that have undefined values lack methods, attributes, or parameter types, or they include only a single method.
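As a rough illustration of the undefined-value problem (not taken from the paper), the following Python sketch computes an LCOM1-style cohesion value and shows how a class with fewer than two methods leaves the metric undefined; the metric choice and data layout are assumptions for illustration only.

```python
from itertools import combinations

def lcom1(methods_attrs):
    """LCOM1-style cohesion: count method pairs that share no attribute.

    `methods_attrs` maps each method name to the set of attributes it uses.
    The value is undefined (None here) for classes with fewer than two
    methods -- the kind of special case the paper assigns values to.
    """
    methods = list(methods_attrs.values())
    if len(methods) < 2:            # single-method or method-less class
        return None                 # metric is undefined
    pairs = combinations(methods, 2)
    return sum(1 for a, b in pairs if not (a & b))

# Example: a two-method class whose methods share one attribute
print(lcom1({"getX": {"x"}, "setX": {"x"}}))   # -> 0 (cohesive)
print(lcom1({"only_method": {"x"}}))           # -> None (undefined case)
```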

Objective

We improve the applicability of the class cohesion metrics by defining their values for such special classes. In addition, we theoretically and empirically validate the improved metrics.

Method

We theoretically examine whether the defined values satisfy the key cohesion properties. In addition, we empirically validate the metrics before and after the improvements to test whether the defined values improve the ability of the metrics to evaluate class cohesion. We also explore the correlation between the metrics and the presence of faulty classes to indirectly determine the strength or weakness of the metrics in indicating class quality.

Results

The results show that our assigned values for the undefined cases do not violate the key cohesion properties and considerably improve the ability of the metrics to explain the presence of faulty classes and may therefore improve their ability to indicate the quality of the class design.

Conclusions

Having the class cohesion metrics defined for all possible cases improves the applicability of the metrics and potentially increases their precision in indicating class quality.

2.

Context

Component identification, the process of evolving legacy systems into finely organized component-based software systems, is a critical part of software reengineering. Currently, many component identification approaches have been developed based on agglomerative hierarchical clustering algorithms. However, there is a lack of thorough investigation into which algorithm is appropriate for component identification.

Objective

This paper focuses on analyzing agglomerative hierarchical clustering algorithms in software reengineering, and then identifying their respective strengths and weaknesses in order to apply them effectively for future practical applications.

Method

A series of experiments were conducted for 18 clustering strategies combined according to various similarity measures, weighting schemes and linkage methods. Eleven subject systems with different application domains and source code sizes were used in the experiments. The component identification results are evaluated by the proposed size, coupling and cohesion criteria.
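A minimal sketch of one such clustering strategy, assuming hypothetical entity-feature data and a SciPy-based pipeline (the paper's actual measures, weighting schemes and subject systems are not reproduced here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical feature matrix: one row per program entity (e.g. a class),
# one column per resource it uses (binary "uses" relation).
features = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
])

# One strategy might pair a Jaccard similarity with complete linkage;
# other combinations swap these two choices.
distances = pdist(features, metric="jaccard")            # similarity measure
tree = linkage(distances, method="complete")              # linkage method
components = fcluster(tree, t=2, criterion="maxclust")    # cut into 2 components
print(components)   # cluster label per entity, e.g. [1 1 2 2]
```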

Results

The experimental results suggested that the employed similarity measures, weighting schemes and linkage methods have varying effects on component identification with respect to the proposed size, coupling and cohesion criteria; consequently, the hierarchical clustering algorithms produced quite different clustering results.

Conclusions

According to the experimental results, it can be concluded that it is difficult to produce perfectly satisfactory results for a given clustering algorithm. Nevertheless, these algorithms demonstrated varied capabilities to identify components with respect to the proposed size, coupling and cohesion criteria.

3.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a base for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.
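The model-per-metric-subset setup can be sketched as follows, using made-up data and a plain linear regression; the paper's actual metrics, learners, and validation procedure are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical per-project training data: columns are illustrative metrics only.
code_metrics = np.array([[120, 0.4], [300, 0.7], [80, 0.2], [210, 0.5]])  # e.g. size, coupling
org_metrics  = np.array([[3, 12], [7, 40], [2, 5], [5, 25]])              # e.g. #authors, #commits
churn        = np.array([1500, 5200, 400, 2900])                          # yearly cumulative churn

# Build one model per metric subset and one combined model, mirroring the
# study design of comparing their predictive power.
subsets = {
    "code": code_metrics,
    "org": org_metrics,
    "combined": np.hstack([code_metrics, org_metrics]),
}
for name, X in subsets.items():
    model = LinearRegression().fit(X, churn)
    print(name, round(model.score(X, churn), 3))   # in-sample R^2, for illustration only
```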

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.

4.
A methodology to assess the impact of design patterns on software quality

Context

Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, results on the effect of design patterns on software quality have been controversial.

Aims

This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.

Method

The paper introduces a theoretical/analytical methodology to compare sets of “canonical” solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. The study is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as a function of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
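A toy illustration of the analytical idea, with entirely hypothetical equations (they are not the paper's formulas): each quality attribute is written as a function of the number of classes, and scanning that number exposes the threshold at which the pattern overtakes the alternative design.

```python
# Hypothetical attribute equations, written only to illustrate the analytical
# comparison; the paper's actual equations are not reproduced here.
def complexity_pattern(n):      # e.g. a pattern that spreads logic over n small classes
    return 2 * n

def complexity_alternative(n):  # e.g. a single class whose complexity grows faster
    return n * n

# Scanning n exposes the threshold beyond which the pattern is preferable.
for n in range(1, 6):
    better = complexity_pattern(n) <= complexity_alternative(n)
    print(f"n={n}: pattern preferable? {better}")
```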

Results

The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality, while they might weaken others.

Conclusions

In conclusion, the proposed methodology is applicable for comparing patterns and alternative designs, and it highlights thresholds beyond which a design pattern becomes more or less beneficial than the alternative design. More specifically, the identification of such thresholds can be very useful for decision making during system design and refactoring.

5.

Context

Modern software engineering demands that professionals and researchers proactively and collectively explore and experiment with viable mechanisms for uncovering degenerative bugs, security holes, and possible deviations at the earliest stage. To address this need, we introduce a novel methodology for estimating the defect proneness of class structures in object-oriented (OO) software systems at the design stage.

Objective

The objective of this work is to develop an estimation model that provides a significant assessment of the defect proneness of object-oriented software packages at the design phase of the SDLC. This framework enhances the efficiency of the SDLC through design-quality improvement.

Method

This involves a data-driven methodology based on an empirical study of the relationship between design parameters and defect proneness. In the first phase, a mapping between the design metrics and the normal occurrence pattern of defects is carried out. This is represented as a set of nonlinear multifunctional regression equations that reflect the influence of individual design metrics on defect proneness. The defect proneness estimation model is then generated by a weighted linear combination of these multifunctional regression equations. The weighting coefficients are evaluated through the GQM (Goal Question Metric) paradigm.
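A compact sketch of that model form, with hand-picked weights and illustrative nonlinear terms (the paper derives the actual terms empirically and the weights via GQM):

```python
import numpy as np

# Hypothetical illustration of the model form described above: each design
# metric contributes a fitted nonlinear term, and the terms are combined with
# weights (here fixed by hand purely for illustration).
def f_wmc(wmc):        # e.g. defect contribution of weighted methods per class
    return 0.02 * wmc ** 1.5

def f_cbo(cbo):        # e.g. contribution of coupling between objects
    return 0.10 * np.log1p(cbo)

weights = {"wmc": 0.6, "cbo": 0.4}

def defect_proneness(wmc, cbo):
    return weights["wmc"] * f_wmc(wmc) + weights["cbo"] * f_cbo(cbo)

print(round(defect_proneness(wmc=25, cbo=8), 3))
```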

Results

Model evaluation and validation were carried out with a selected set of cases and found to be promising. The current study successfully dealt with three projects, and it opens up the opportunity to extend the approach to a wide range of projects across industries.

Conclusion

Defect proneness estimation at the design stage provides effective feedback to the design architect, enabling them to identify and reduce the number of defects in the affected modules. This results in a considerable improvement in software design, leading to cost-effective products.

6.

Context

Automated static analysis (ASA) identifies potential source code anomalies early in the software development lifecycle that could lead to field failures. Excessive alert generation and a large proportion of unimportant or incorrect alerts (unactionable alerts) may cause developers to reject the use of ASA. Techniques that identify anomalies important enough for developers to fix (actionable alerts) may increase the usefulness of ASA in practice.

Objective

The goal of this work is to synthesize available research results to inform evidence-based selection of actionable alert identification techniques (AAIT).

Method

Relevant studies about AAITs were gathered via a systematic literature review.

Results

We selected 21 peer-reviewed studies of AAITs. The techniques use alert type selection; contextual information; data fusion; graph theory; machine learning; mathematical and statistical models; or dynamic detection to classify and prioritize actionable alerts. All of the AAITs are evaluated via an example with a variety of evaluation metrics.

Conclusion

The selected studies support (with varying strength) the premise that the effective use of ASA is improved by supplementing ASA with an AAIT. Seven of the 21 selected studies reported the precision of the proposed AAITs. The two studies with the highest precision built models using the subject program’s history. Precision measures how well a technique identifies true actionable alerts out of all predicted actionable alerts. Precision does not measure the number of actionable alerts missed by an AAIT or how well an AAIT identifies unactionable alerts. Inconsistent use of evaluation metrics, subject programs, and ASAs in the selected studies precludes meta-analysis and prevents the current results from informing evidence-based selection of an AAIT. We propose building an actionable alert identification benchmark for the comparison and evaluation of AAITs from the literature on a standard set of subjects, utilizing a common set of evaluation metrics.
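For concreteness, the precision notion used above can be stated with made-up counts:

```python
# Precision as used above: true actionable alerts among all alerts an AAIT
# predicts to be actionable. The counts below are made up for illustration.
predicted_actionable = 40   # alerts the technique flags as worth fixing
true_actionable      = 30   # of those, how many turn out to be actionable
precision = true_actionable / predicted_actionable
print(precision)  # 0.75 -- says nothing about actionable alerts the AAIT missed
```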

7.

Context

Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance.

Objective

Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort.

Design, setting, and subjects

One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC.
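For reference, the computational kernel the subjects implemented, sparse-matrix dense-vector multiplication, can be sketched sequentially (without MPI or XMTC) as follows; the CSR layout shown is an assumption, not a description of the study materials.

```python
# Minimal sequential sketch of sparse-matrix dense-vector multiplication,
# with the matrix stored in compressed sparse row (CSR) form.
def spmv_csr(values, col_idx, row_ptr, x):
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(y)):
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# 2x3 matrix [[5, 0, 2], [0, 3, 0]] times vector [1, 1, 1]
print(spmv_csr([5.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0]))  # [7.0, 3.0]
```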

Main outcome measures

Development time, program correctness.

Results

Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16).

Conclusions

XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary to examine different types of problems and different levels of programmer experience.

8.

Context

In order to ensure high quality of a process model repository, refactoring operations can be applied to correct anti-patterns, such as overlap of process models, inconsistent labeling of activities and overly complex models. However, if a process model collection is created and maintained by different people over a longer period of time, manual detection of such refactoring opportunities becomes difficult, simply due to the number of processes in the repository. Consequently, there is a need for techniques to detect refactoring opportunities automatically.

Objective

This paper proposes a technique for automatically detecting refactoring opportunities.

Method

We developed the technique based on metrics that can be used to measure the consistency of activity labels as well as the extent to which processes overlap and the type of overlap that they have. We evaluated it by applying it to two large process model repositories.
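One ingredient of such metrics, a pairwise similarity over activity labels, might be sketched as follows; the similarity function and threshold are illustrative assumptions rather than the paper's definitions.

```python
from difflib import SequenceMatcher

# Illustrative sketch: flag pairs of similar-but-not-identical activity labels
# as candidates for a labelling-consistency refactoring.
def label_similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

labels = ["Check invoice", "Invoice checking", "Approve order"]
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        sim = label_similarity(labels[i], labels[j])
        if 0.4 < sim < 1.0:          # threshold chosen for illustration only
            print(labels[i], "<->", labels[j], round(sim, 2))
```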

Results

The evaluation shows that the technique can be used to pinpoint the approximate location of three types of refactoring opportunities with high precision and recall and of one type of refactoring opportunity with high recall, but low precision.

Conclusion

We conclude that the technique presented in this paper can be used in practice to automatically detect a number of anti-patterns that can be corrected by refactoring.

9.

Context

Staff turnover in organizations is an important issue that should be taken into account mainly for two reasons:
1. Employees carry an organization’s knowledge in their heads and take it with them wherever they go.
2. Knowledge accessibility is limited to the amount of knowledge employees want to share.

Objective

The aim of this work is to provide a set of guidelines to develop knowledge-based Process Asset Libraries (PAL) to store software engineering best practices, implemented as a wiki.

Method

Fieldwork was carried out in a 2-year training course in agile development. This was validated in two phases (with and without PAL), which were subdivided into two stages: Training and Project.

Results

The study demonstrates that, on the one hand, the learning process can be facilitated using PAL to transfer software process knowledge, and on the other hand, products were developed by junior software engineers with a greater degree of independence.

Conclusion

PAL, as a knowledge repository, helps software engineers to learn about development processes and improves the use of agile processes.

10.

Context

Agile software development with its emphasis on producing working code through frequent releases, extensive client interactions and iterative development has emerged as an alternative to traditional plan-based software development methods. While a number of case studies have provided insights into the use and consequences of agile, few empirical studies have examined the factors that drive the adoption and use of agile.

Objective

We draw on intention-based theories and a dialectic perspective to identify factors driving the use of agile practices among adopters of this software development methodology.

Method

Data for the study was gathered through an anonymous online survey of software development professionals. We requested participation from members of a selected list of online discussion groups, and received 98 responses.

Results

Our analyses reveal that subjective norm and training play a significant role in influencing software developers’ use of agile processes and methods, while perceived benefits and perceived limitations are not primary drivers of agile use among adopters. Interestingly, perceived benefit emerges as a significant predictor of agile use only if adopters face hindrances to their agile practices.

Conclusion

We conclude that research in the adoption of software development innovations should examine the effects of both enabling and detracting factors and the interactions between them. Since training, subjective norm, and the interplay between perceived benefits and perceived hindrances appear to be key factors influencing the adoption of agile methods, researchers can focus on how to (a) perform training on agile methods more effectively, (b) facilitate the dialog between developers and managers about perceived benefits and hindrances, and (c) capitalize on subjective norm to publicize the benefits of agile methods within an organization. Further, when managing the transition to new software development methods, we recommend that practitioners adapt their strategies and tactics contingent on the extent of perceived hindrances to the change.

11.
12.

Context

Testing is an essential part of the development life-cycle of any software product. While most phases of data warehouse design have received considerable attention in the literature, not much has been written about data warehouse testing.

Objective

In this paper we propose a comprehensive approach to testing data warehouse systems. Its main features are earliness with respect to the life-cycle, modularity, tight coupling with design, scalability, and measurability through proper metrics.

Method

We introduce a number of specific testing activities, we classify them in terms of what is tested and how it is tested, and we show how they can be framed within a prototype-based methodology. We apply our approach to a real case study for a large retail company.

Results

The case study we faced, based on an iterative prototype-based medium-size project, confirmed the validity of our approach. In particular, the main benefits were obtained in terms of project transparency, coordination of the development team, and organization of design activities.

Conclusion

Though some general-purpose testing techniques can be applied to data warehouse projects, the effectiveness of testing can be largely improved by applying specifically-devised techniques and metrics.

13.

Context

Pointer analysis is an important building block of optimizing compilers and program analyzers for C language. Various methods with precision and performance trade-offs have been proposed. Among them, cycle elimination has been successfully used to improve the scalability of context-insensitive pointer analyses without losing any precision.

Objective

In this article, we present a new method on context-sensitive pointer analysis with an effective application of cycle elimination.

Method

To obtain similar benefits of cycle elimination for context-sensitive analysis, we propose a novel constraint-based formulation that uses sets of contexts as annotations. Our method is not based on binary decision diagram (BDD). Instead, we directly use invocation graphs to represent context sets and apply a hash-consing technique to deal with the exponential blow-up of contexts.
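A minimal sketch of hash-consing, the technique named above, assuming a purely illustrative representation of contexts (the actual invocation-graph encoding is not reproduced here):

```python
# Hash-consing sketch: identical context sets are interned once and shared,
# so the analysis can store and compare contexts by object identity instead
# of by value. The tuple-based "context" below is illustrative only.
_interned = {}

def intern_context_set(contexts):
    key = frozenset(contexts)
    return _interned.setdefault(key, key)

a = intern_context_set([("main", "foo"), ("main", "bar")])
b = intern_context_set([("main", "bar"), ("main", "foo")])
print(a is b)   # True -- one shared object, O(1) equality via identity
```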

Result

Experimental results on C programs ranging from 20,000 to 290,000 lines show that applying cycle elimination to our new formulation results in a 4.5× speedup over the previous BDD-based approach.

Conclusion

We showed that cycle elimination is an effective method for improving the scalability of context-sensitive pointer analysis.

14.

Context

Mutation testing is a fault-injection-based technique to help testers generate test cases for detecting specific and predetermined types of faults.

Objective

Before mutation testing can be effectively applied to embedded systems, traditional mutation testing needs to be modified. To inject a fault into an embedded system without causing any system failure or hardware damage is a challenging task as it requires some knowledge of the underlying layers such as the kernel and the corresponding hardware.

Method

We propose a set of mutation operators for embedded systems using kernel-based software and hardware fault simulation. These operators are designed for software developers so that they can use the mutation technique to test the entire system after the software is integrated with the kernel and hardware devices.
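As a generic illustration of what a mutation operator does (not one of the paper's kernel- or hardware-level operators), the following sketch rewrites `==` into `!=` in a source fragment to produce a mutant; it assumes Python 3.9+ for `ast.unparse`.

```python
import ast

# Generic mutation operator: flip equality comparisons, producing a mutant
# that the test suite should be able to detect ("kill").
class FlipEquality(ast.NodeTransformer):
    def visit_Compare(self, node):
        self.generic_visit(node)
        node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op
                    for op in node.ops]
        return node

source = "status = sensor_value == EXPECTED"
tree = FlipEquality().visit(ast.parse(source))
print(ast.unparse(tree))   # status = sensor_value != EXPECTED
```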

Results

A case study on a programmable logic controller for a digital reactor protection system in a nuclear power plant is conducted. Our results suggest that the proposed mutation operators are useful for fault-injection and this is evidenced by the fact that faults not injected by us were discovered in the subject software as a result of the case study.

Conclusion

We conclude that our mutation operators are useful for integration testing of an embedded system.

15.

Context

Comparing and contrasting evidence from multiple studies is necessary to build knowledge and reach conclusions about the empirical support for a phenomenon. Therefore, research synthesis is at the center of the scientific enterprise in the software engineering discipline.

Objective

The objective of this article is to contribute to a better understanding of the challenges in synthesizing software engineering research and their implications for the progress of research and practice.

Method

A tertiary study of journal articles and full proceedings papers from the inception of evidence-based software engineering was performed to assess the types and methods of research synthesis in systematic reviews in software engineering.

Results

As many as half of the 49 reviews included in the study did not contain any synthesis. Of the studies that did contain synthesis, two thirds performed a narrative or a thematic synthesis. Only a few studies adequately demonstrated a robust, academic approach to research synthesis.

Conclusion

We concluded that, despite the focus on systematic reviews, there is limited attention paid to research synthesis in software engineering. This trend needs to change and a repertoire of synthesis methods needs to be an integral part of systematic reviews to increase their significance and utility for research and practice.

16.

Objective

To determine personal and workplace factors associated with quad bike loss of control events (LCEs) on New Zealand farms.

Methods

Rural community databases were used to sample 130 farmers and farm employees (workers). Fieldwork and a survey investigated the prevalence of LCEs, farm type, farm terrain, personal measures, and vehicle driving exposures.

Results

Seventy-nine workers (61%) described a total of 200 LCEs. Increased driver height, increased body mass, non-flat farm terrain, increased driving speed and distance, and greater whole-body vibration exposure were significantly associated with LCEs.

Conclusions

Taller and heavier drivers of quad bikes should be particularly vigilant for risk of an LCE. Vehicle speed, distance driven and choice of driving routes over difficult terrain are potentially modifiable factors which have behavioural components and should be considered as management strategies for reducing risk of on-farm quad bike LCEs.

Relevance to industry

Quad bike accidents are a considerable problem in agriculture. This research has identified a number of physical and driving factors that should be considered in the management strategies for reducing risk of on-farm quad bike accidents.

17.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture design and application contexts, which makes architecture goal definition and architecture design non-trivial steps rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures. The framework is based on state of the art results from literature and practice. We validate our framework and its quality as analytical tool by applying it for the analysis of 24 reference architectures. The conclusions from our analysis are compared to the opinions of experts on these reference architectures documented in literature and dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and of five types of reference architectures that are formed by combining specific values from the multi-dimensional classification space. Reference architectures that can be classified in one of these types have better chances to become a success. The validation of our framework confirms its quality as a tool for the analysis of the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. The application of the tool improves a reference architecture's chances of success.

18.
19.

Background

Many papers are published on the topic of software metrics but it is difficult to assess the current status of metrics research.

Aim

This paper aims to identify trends in influential software metrics papers and assess the possibility of using secondary studies to integrate research results.

Method

Search facilities in the SCOPUS tool were used to identify the most cited papers in the years 2000-2005 inclusive. Less cited papers were also selected from 2005. The selected papers were classified according to factors such as main topic, goal and type (empirical, theoretical or mixed). Papers classified as “Evaluation studies” were assessed to investigate the extent to which their results could be synthesized.

Results

Compared with less cited papers, the most cited papers were more frequently journal papers and empirical validation or data analysis studies. However, there were problems with some empirical validation studies. For example, they sometimes attempted to evaluate theoretically invalid metrics and failed to appreciate the importance of the context in which data are collected.

Conclusions

This paper, together with other similar papers, confirms that there is a large body of research related to software metrics. However, software metrics researchers may need to refine their empirical methodology before they can answer useful empirical questions.

20.

Context

Software productivity measurement is essential in order to control and improve the performance of software development, for example by identifying role models (e.g. projects, individuals, tasks) when comparing productivity data. Prediction is also relevant for determining whether corrective actions are needed, and for discovering which alternative improvement action would yield the best results.

Objective

In this study we identify studies for software productivity prediction and measurement. Based on the identified studies we first create a classification scheme and map the studies into the scheme (systematic map). Thereafter, a detailed analysis and synthesis of the studies is conducted.

Method

Systematic mapping and systematic review were used as the research methods for systematically identifying and aggregating the evidence on productivity measurement and prediction approaches.

Results

In total, 38 studies were identified, resulting in a classification scheme for empirical research on software productivity. The mapping allowed us to identify the rigor of the evidence with respect to the different productivity approaches. In the detailed analysis, the results were tabulated and synthesized to provide recommendations to practitioners.

Conclusion

Risks with simple ratio-based measurement approaches were shown. In response to these problems, data envelopment analysis seems to be a strong approach for capturing multivariate productivity measures, and it allows the identification of reference projects to which inefficient projects should be compared. Regarding simulation, no general prediction model can be identified. Simulation and statistical process control are promising methods for software productivity prediction. Overall, further evidence is needed to make stronger claims and recommendations. In particular, the discussion of validity threats should become standard, and models need to be compared with each other.
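To make the first point concrete, a simple ratio-based measure can be sketched with made-up numbers; a multivariate approach such as data envelopment analysis would instead weigh several inputs and outputs before naming reference projects.

```python
# Made-up project data: a single-output / single-input productivity ratio
# hides quality, complexity, and other inputs -- the risk noted above.
projects = {
    "A": {"size_loc": 12000, "effort_ph": 800},   # effort in person-hours
    "B": {"size_loc": 9000,  "effort_ph": 450},
}
for name, p in projects.items():
    ratio = p["size_loc"] / p["effort_ph"]
    print(f"{name}: {ratio:.1f} LOC per person-hour")
```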
