Similar Documents
20 similar documents found.
1.

Context

Code smells are manifestations of design flaws that can degrade code maintainability. So far, no research has investigated whether these indicators are useful for conducting system-level maintainability evaluations.

Aim

The research in this paper investigates the potential of code smells to reflect system-level indicators of maintainability.

Method

We evaluated four medium-sized Java systems using code smells and compared the results against previous evaluations on the same systems based on expert judgment and the Chidamber and Kemerer suite of metrics. The systems were maintained over a period of up to 4 weeks. During maintenance, effort (person-hours) and number of defects were measured to validate the different evaluation approaches.

Results

Most code smells are strongly influenced by size; consequently, code smells are not good indicators for comparing the maintainability of systems that differ greatly in size. Also, in the comparison of the different evaluation approaches, expert judgment was found to be the most accurate and flexible, since it accounted for effects of the system's size and complexity and could adapt to different maintenance scenarios.

Conclusion

Code smell approaches show promise as indicators of the need for maintenance in a way that other purely metric-based approaches lack.

2.

Context

Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a base for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.

Objective

This paper applies machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.

Method

The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.

Results

The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.

Conclusion

The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.
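
For illustration only, the following Python sketch shows the general shape of such a comparison: fit the same regression model on code metrics alone, organisational metrics alone, and the combined set, and compare cross-validated accuracy. All feature names and values are made up; the study's actual metrics, models, and evaluation setup are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# made-up feature matrices for a set of project-years; only the idea of
# comparing the two metric families and their combination mirrors the study
rng = np.random.default_rng(3)
code_metrics = rng.random((40, 6))      # e.g. object-oriented and XML metrics
org_metrics = rng.random((40, 3))       # e.g. number of authors, commit counts
yearly_churn = rng.random(40) * 10_000  # lines added plus deleted per year

# with random data the scores are meaningless; the loop only shows the setup
for name, X in [("code metrics only", code_metrics),
                ("organisational metrics only", org_metrics),
                ("combined", np.hstack([code_metrics, org_metrics]))]:
    r2 = cross_val_score(LinearRegression(), X, yearly_churn, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```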

3.
An empirical study of predicting software faults with case-based reasoning
The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system are also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression.

Taghi M. Khoshgoftaar is a professor of the Department of Computer Science and Engineering, Florida Atlantic University and the Director of the Empirical Software Engineering Laboratory. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence, computer performance evaluation, data mining, and statistical modeling. He has published more than 200 refereed papers in these areas. He has been a principal investigator and project leader in a number of projects with industry, government, and other research-sponsoring agencies. He is a member of the Association for Computing Machinery, the IEEE Computer Society, and the IEEE Reliability Society. He served as the general chair of the 1999 International Symposium on Software Reliability Engineering (ISSRE'99) and the general chair of the 2001 International Conference on Engineering of Computer Based Systems. He has also served on technical program committees of various international conferences, symposia, and workshops. He has served as North American editor of the Software Quality Journal, and is on the editorial boards of the journals Empirical Software Engineering, Software Quality, and Fuzzy Systems.

Naeem Seliya received the M.S. degree in Computer Science from Florida Atlantic University, Boca Raton, FL, USA, in 2001. He is currently a Ph.D. candidate in the Department of Computer Science and Engineering at Florida Atlantic University. His research interests include software engineering, computational intelligence, data mining, software measurement, software reliability and quality engineering, software architecture, computer data security, and network intrusion detection. He is a student member of the IEEE Computer Society and the Association for Computing Machinery.
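
As a rough sketch of the kind of CBR predictor described above, the Python snippet below retrieves the k nearest previously developed modules under a Mahalanobis-distance similarity function and combines their fault counts with inverse-distance weights, the configuration reported to perform best. The metric values, fault counts, and choice of k are hypothetical, and the paper's exact similarity functions and solution algorithms may differ in detail.

```python
import numpy as np

def mahalanobis_distances(x, cases, cov_inv):
    """Mahalanobis distance from a new module x to every historical case."""
    diffs = cases - x
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs))

def predict_faults(x, case_metrics, case_faults, k=3, eps=1e-9):
    """Estimate the fault count of module x from its k nearest historical
    cases, weighting each neighbour by the inverse of its distance."""
    cov_inv = np.linalg.pinv(np.cov(case_metrics, rowvar=False))
    d = mahalanobis_distances(x, case_metrics, cov_inv)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + eps)        # inverse-distance weighting
    return float(np.dot(weights, case_faults[nearest]) / weights.sum())

# hypothetical history: six modules described by three metrics, plus fault counts
rng = np.random.default_rng(0)
case_metrics = rng.random((6, 3)) * 100
case_faults = np.array([0, 2, 1, 5, 3, 0])
new_module = rng.random(3) * 100
print(predict_faults(new_module, case_metrics, case_faults))
```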

4.
Context

Software defect prediction plays a crucial role in estimating the most defect-prone components of software, and a large number of studies have pursued improving prediction accuracy within a project or across projects. However, the rules for making an appropriate decision between within- and cross-project defect prediction when available historical data are insufficient remain unclear.

Objective

The objective of this work is to validate the feasibility of a predictor built with a simplified metric set for software defect prediction in different scenarios, and to investigate practical guidelines for the choice of training data, classifier, and metric subset for a given project.

Method

First, based on six typical classifiers, three types of predictors that differ in the size of their metric sets were constructed in three scenarios. Then, we validated with statistical methods that the predictor based on the Top-k metrics achieves acceptable performance. Finally, we attempted to minimize the Top-k metric subset by removing redundant metrics, and we tested the stability of such a minimum metric subset with one-way ANOVA tests.

Results

The study was conducted on 34 releases of 10 open-source projects available in the PROMISE repository. The findings indicate that predictors built with either the Top-k metrics or the minimum metric subset provide acceptable results compared with benchmark predictors. The guideline for choosing a suitable simplified metric set in different scenarios is presented in Table 12.

Conclusion

The experimental results indicate that (1) the choice of training data for defect prediction should depend on the specific requirement of accuracy; (2) a predictor built with a simplified metric set works well and is very useful when only limited resources are available; (3) simple classifiers (e.g., Naïve Bayes) also tend to perform well when using a simplified metric set for defect prediction; and (4) in several cases, the minimum metric subset can be identified to facilitate general defect prediction with an acceptable loss of prediction precision in practice.
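
The study's exact Top-k selection procedure is not reproduced here; the sketch below merely illustrates the general idea of ranking metrics, keeping only the top k, and training a simple classifier such as Naïve Bayes, using scikit-learn on made-up data.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# made-up data: rows are modules, columns are static code metrics,
# y marks modules that turned out to be defective
rng = np.random.default_rng(1)
X = rng.random((200, 20))
y = (rng.random(200) < 0.2).astype(int)

# keep only the Top-k metrics ranked by mutual information with the label,
# then train a simple Naive Bayes classifier on the reduced metric set
k = 5
model = make_pipeline(SelectKBest(mutual_info_classif, k=k), GaussianNB())
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(f"mean F1 with Top-{k} metrics: {scores.mean():.2f}")
```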

5.
This paper presents a case history of Mentor Graphics using a set of quality metrics to track development progress for a recent major software release. It provides background on how Mentor Graphics originally began using software metrics to measure product quality, how this became accepted, and how these metrics later fell out of favour. To restore these metrics to effective use, process changes were required for setting quality and metric targets, and for the way the metrics are used for tracking development progress. With these process changes in place, and the addition of a new metric, the case history demonstrates that the metric set could be used effectively to indicate problems in this release and help manage changes to the plan for completion of the release. The lessons learned in this case history are presented, along with subsequent data that further validates these metrics.

6.
A design pattern is a general, reusable solution to commonly recurring problems in software projects. Bad smells are symptoms in the source code that possibly indicate the presence of a structural problem requiring code refactoring. Although design patterns and bad smells are different concepts, the literature has shown that they may be related and co-occur during the evolution of a software system. This paper presents an empirical study that investigates co-occurrences of design patterns and bad smells and identifies the main factors that contribute to the emergence of the relationship between them. We carried out a case study with five Java systems to: (1) investigate whether the use of design patterns reduces bad smell occurrence, (2) identify co-occurrences of design patterns and bad smells, and (3) identify situations that contribute to the emergence of co-occurrences. As the main result, we found that applying design patterns does not necessarily prevent bad smell occurrences. The results also show that some design patterns, such as composite, factory method, and singleton, are intrinsically modular and might be useful in creating high-quality systems. However, other design patterns, such as adapter-command, proxy, and state-strategy, showed a high frequency of co-occurrence with bad smells; therefore, they require attention in their implementation. Finally, via manual inspection of the components with co-occurrences, we found that the identified co-occurrences appeared due to poor planning and inadequate application of design patterns.

7.
Software is typically improved and modified in small increments (we refer to each of these increments as a modification record—MR). MRs are usually stored in a configuration management or version control system and can be retrieved for analysis. In this study we retrieved the MRs from several mature open source software projects. We then concentrated our analysis on those MRs that fix defects and provided heuristics to automatically classify them. We used the information in the MRs to visualize which files are changed at the same time, and which people tend to modify certain files. We argue that these visualizations can be used to understand the development stage a project is in at a given time (whether new features are being added or defects are being fixed), the level of modularization of a project, and how developers might interact with each other and with the source code of a system.
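
The abstract does not spell out the classification heuristics, so the snippet below is only a hypothetical keyword-based example of how defect-fixing MRs might be separated from other changes; real studies typically refine such rules with issue-tracker references and manual validation.

```python
import re

# hypothetical keyword heuristic: an MR whose log message mentions a defect
# is treated as a fix; the original study's heuristics are more involved
FIX_PATTERN = re.compile(r"\b(fix(e[sd])?|bug|defect|fault|patch)\b", re.IGNORECASE)

def is_defect_fix(mr_message: str) -> bool:
    """Return True if the MR log message looks like a defect fix."""
    return bool(FIX_PATTERN.search(mr_message))

sample_mrs = [
    "Fix null pointer dereference in parser",
    "Add support for UTF-8 file names",
    "bug #1042: off-by-one in buffer length check",
]
for message in sample_mrs:
    print(is_defect_fix(message), "-", message)
```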

8.
The design and analysis of the structure of software systems have typically been based on purely qualitative grounds. In this paper we report on our positive experience with a set of quantitative measures of software structure. These metrics, based on the number of possible paths of information flow through a given component, were used to evaluate the design and implementation of a software system (the UNIX operating system kernel) which exhibits the interconnectivity of components typical of large-scale software systems. Several examples are presented which show the power of this technique in locating a variety of both design and implementation defects. Suggested repairs, which agree with the commonly accepted principles of structured design and programming, are presented. The effect of these alterations on the structure of the system and the quantitative measurements of that structure lead to a convincing validation of the utility of information flow metrics.
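
The paper's own path-count formulation is not given in the abstract; as an assumed stand-in, the sketch below uses the widely cited Henry-Kafura style information-flow complexity, length × (fan-in × fan-out)², applied to hypothetical procedure data.

```python
def information_flow_complexity(length: int, fan_in: int, fan_out: int) -> int:
    """Henry-Kafura style score: procedures with many incoming and outgoing
    information paths are penalised quadratically."""
    return length * (fan_in * fan_out) ** 2

# hypothetical kernel procedures: (source lines, fan-in, fan-out)
procedures = {
    "sched": (220, 12, 9),
    "namei": (180, 7, 4),
    "getblk": (90, 3, 2),
}
for name, (loc, fan_in, fan_out) in procedures.items():
    print(name, information_flow_complexity(loc, fan_in, fan_out))
```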

9.
In this paper, we present an in-depth empirical study of a new metric, change dispersion, that measures the extent to which changes are scattered throughout the code of a software system. Intuitively, highly dispersed changes, that is, changes scattered throughout many software entities (such as files, classes, methods, and variables), should require more maintenance effort than changes that affect only a few entities. In our research we investigate change dispersion on the code-base of a number of subject systems as a whole, and separately on each system's cloned and non-cloned code. Our central objective is to determine whether cloned code negatively affects software evolution and maintenance. The granularity of our focus is at the method level.

Our experimental results on 16 open source subject systems written in four different programming languages (Java, C, C#, and Python), involving two clone detection tools (CCFinderX and NiCad) and considering three major types of clones (Type 1: exact, Type 2: dissimilar naming, and Type 3: some dissimilar code), suggest that change dispersion has a positive and statistically significant correlation with the change-proneness (or instability) of source code. Cloned code, especially in Java and C systems, often exhibits a higher change dispersion than non-cloned code. Also, changes to Type 3 clones are more dispersed than changes to Type 1 and Type 2 clones. According to our analysis, a primary cause of high change dispersion in cloned code is that clones from the same clone class often require corresponding changes to ensure they remain consistent.
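
The precise definition of change dispersion is not given in the abstract; the sketch below uses an assumed simplification, the fraction of a system's methods touched by one change, purely to illustrate what a dispersion-style measure computes.

```python
# hypothetical commits, each listing the methods it touched
commits = [
    {"A.foo", "A.bar", "B.baz", "C.qux"},   # widely scattered change
    {"A.foo"},                              # local change
    {"B.baz", "B.quux"},
]

TOTAL_METHODS = 50  # assumed size of the system, in methods

def dispersion(changed_methods: set, total_methods: int) -> float:
    """Fraction of the system's methods touched by a single change; a crude
    stand-in for the paper's dispersion metric, whose exact definition differs."""
    return len(changed_methods) / total_methods

for commit in commits:
    print(sorted(commit), round(dispersion(commit, TOTAL_METHODS), 3))
```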

10.
Many problem factors in the software development phase affect the maintainability of the delivered software systems. Therefore, understanding software development problem factors can help not only to reduce the incidence of project failure but also to safeguard software maintainability. This study focuses on those software development problem factors which may possibly affect software maintainability. Twenty-five problem factors were classified into five dimensions; a questionnaire was designed and 137 software projects were surveyed. A K-means cluster analysis was performed to classify the projects into three groups of low, medium and high maintainability projects. For projects with a higher level of severity of problem factors, the influence on software maintainability was more pronounced. The influence of software process improvement (SPI) on project problems and the associated software maintainability was also examined in this study. Results suggest that SPI can help reduce the severity of documentation quality and process management problems, but is only likely to enhance software maintainability to a medium level. Finally, the top 10 higher-severity software development problem factors were identified, and implications were discussed.
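
A minimal sketch of the clustering step, assuming scikit-learn and made-up severity ratings: projects are described by their average problem-factor severity per dimension and grouped into three clusters that would then be interpreted as low, medium, and high maintainability. The dimensions, rating scale, and data are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical survey matrix: one row per project, one column per problem
# dimension, values are average severity ratings (1 = none ... 5 = severe)
rng = np.random.default_rng(2)
severity = rng.integers(1, 6, size=(137, 5)).astype(float)

# group the 137 projects into three clusters, to be interpreted afterwards
# as low, medium, and high maintainability
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(severity)
for label in range(3):
    members = severity[km.labels_ == label]
    print(f"cluster {label}: {len(members)} projects, "
          f"mean severity {members.mean():.2f}")
```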

11.
12.
The use of capture-recapture to estimate the residual faults in a software artifact has evolved into a promising method. However, the assumptions needed to make the estimates are not completely fulfilled in software development, leading to an underestimation of the residual fault content. Therefore, this paper proposes a method employing a filtering technique with an experience factor to improve the estimate of the residual faults. An experimental study of the capture-recapture method with this correction has been conducted. It is concluded that the correction method improves the capture-recapture estimate of the number of residual defects in the inspected document.
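
The paper's filtering technique and experience factor are not described in the abstract and are not reproduced here; the sketch below shows only the basic two-inspector Lincoln-Petersen capture-recapture estimate that such corrections build on, with hypothetical fault sets.

```python
def lincoln_petersen(found_by_a: set, found_by_b: set) -> float:
    """Two-inspector capture-recapture estimate of the total fault content:
    N = n_a * n_b / (number of faults found by both inspectors)."""
    overlap = len(found_by_a & found_by_b)
    if overlap == 0:
        raise ValueError("estimator undefined when the inspectors share no faults")
    return len(found_by_a) * len(found_by_b) / overlap

# hypothetical inspection: fault IDs reported by two reviewers of one document
faults_a = {1, 2, 3, 5, 8}
faults_b = {2, 3, 5, 9, 10, 11}
total_estimate = lincoln_petersen(faults_a, faults_b)
residual_estimate = total_estimate - len(faults_a | faults_b)  # still undetected
print(total_estimate, residual_estimate)
```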

13.
Data collection, both automatic and manual, lies at the heart of all empirical studies. The quality of data collected from software informs decisions on maintenance, testing and wider issues such as the need for system re-engineering. While automatic data collection is preferable of the two, there are numerous occasions when manual data collection is unavoidable. Yet, very little evidence exists to assess the error-proneness of the latter. Herein, we investigate how manual data collection for Java software compares with its automatic counterpart for the same data. We investigate three hypotheses relating to the difference between automated and manual data collection. Five Java systems were used to support our investigation. Results showed that, as expected, manual data collection was error-prone, but to nowhere near the extent we had initially envisaged. Key indicators of mistakes in manual data collection were found to be poor developer coding style, poor adherence to sound OO coding principles, and the existence of relatively large classes in some systems. Some interesting results were found relating to the collection of public class features and the types of error made during manual data collection. The study thus offers an insight into some of the typical problems associated with collecting data manually; more significantly, it highlights the effect that poorly written systems have on the quality of visually extracted data.

14.
There has been much recent interest in synthesis algorithms that generate finite state machines from scenarios of intended system behavior. One of the uses of such algorithms is in the transition from requirements scenarios to design. Despite much theoretical work on the nature of these algorithms, there has been very little work on applying the algorithms to practical applications. In this paper, we apply the Whittle & Schumann synthesis algorithm [32] to a component of an air traffic advisory system under development at NASA Ames Research Center. We not only apply the algorithm to generate state machine designs from scenarios but also show how to generate code from the generated state machines using existing commercial code generation tools. The results demonstrate the possibility of generating application code directly from scenarios of system behavior.

15.
The amount of resources allocated for software quality improvements is often not enough to achieve the desired software quality. Software quality classification models that yield a risk-based quality estimation of program modules, such as fault-prone (fp) and not fault-prone (nfp), are useful as software quality assurance techniques. Their usefulness is largely dependent on whether enough resources are available for inspecting the fp modules. Since a given development project has its own budget and time limitations, a resource-based software quality improvement seems more appropriate for achieving its quality goals. A classification model should provide quality improvement guidance so as to maximize resource utilization. We present a procedure for building software quality classification models from the limited-resources perspective. The essence of the procedure is the use of our recently proposed Modified Expected Cost of Misclassification (MECM) measure for developing resource-oriented software quality classification models. The measure penalizes a model, in terms of costs of misclassifications, if the model predicts more fp modules than can be inspected with the allotted resources. Our analysis is presented in the context of our Rule-Based Classification Modeling (RBCM) technique. An empirical case study of a large-scale software system demonstrates the promising results of using the MECM measure to select an appropriate resource-based rule-based classification model.

Taghi M. Khoshgoftaar is a professor of the Department of Computer Science and Engineering, Florida Atlantic University and the Director of the graduate programs and research. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence applications, computer security, computer performance evaluation, data mining, machine learning, statistical modeling, and intelligent data analysis. He has published more than 300 refereed papers in these areas. He is a member of the IEEE, IEEE Computer Society, and IEEE Reliability Society. He was the general chair of the IEEE International Conference on Tools with Artificial Intelligence 2005.

Naeem Seliya is an Assistant Professor of Computer and Information Science at the University of Michigan - Dearborn. He received his Ph.D. in Computer Engineering from Florida Atlantic University, Boca Raton, FL, USA in 2005. His research interests include software engineering, data mining and machine learning, application and data security, bioinformatics and computational intelligence. He is a member of IEEE and ACM.
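
The exact MECM formula is not given in the abstract, so the snippet below is only an assumed illustration of the underlying idea: score a classification model by its misclassification costs and add a penalty when it flags more fault-prone modules than the allotted inspection resources can cover. The costs, budget, and data are hypothetical.

```python
def resource_aware_cost(pred, actual, c_fp=1.0, c_fn=10.0,
                        inspect_budget=None, penalty=5.0):
    """Misclassification-cost score for a fault-proneness model; flagging
    more modules than the team can inspect is charged an extra penalty.
    This mimics the spirit of MECM only, not its exact formula."""
    fp = sum(p == 1 and a == 0 for p, a in zip(pred, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(pred, actual))
    cost = c_fp * fp + c_fn * fn
    if inspect_budget is not None:
        cost += penalty * max(0, sum(pred) - inspect_budget)
    return cost / len(pred)

# hypothetical predictions for ten modules, with resources to inspect only three
pred   = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
actual = [1, 0, 0, 1, 0, 0, 0, 1, 1, 0]
print(resource_aware_cost(pred, actual, inspect_budget=3))
```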

16.
Motivation, although difficult to quantify, is considered to be the single largest factor in developer productivity; there are also suggestions that low motivation is an important factor in software development project failure. We investigate factors that motivate software engineering teams using survey data collected from software engineering practitioners based in Australia, Chile, USA and Vietnam. We also investigate the relationship between team motivation and project outcome, identifying whether the country in which software engineering practitioners are based affects this relationship. Analysis of 333 questionnaires indicates that failed projects are associated with low team motivation. We found a set of six common team motivational factors that appear to be culturally independent (project manager has good communication with project staff, project risks reassessed, controlled and managed during the project, customer has confidence in the project manager and the development team, the working environment is good, the team works well together, and the software engineer had a pleasant experience). We also found unique groupings of team motivational factors for each of the countries investigated. This indicates that there are cultural differences that project managers need to consider when working in a global environment.

17.
18.
Context

Building a quality software product in the shortest possible time to satisfy the global market demand gives an enterprise a competitive advantage. However, uncertainties and risks exist at every stage of a software development project, and these can have an extremely high influence on the success of the final software product. Early risk management practice is effective in managing such risks and contributes to project success.

Objective

Despite existing risk management approaches, a detailed guideline that explains where to integrate risk management activities into the project is still missing, and little effort has been directed towards evaluating the overall impact of a risk management method. We present a Goal-driven Software Development Risk Management Model (GSRM), its explicit integration into the requirements engineering phase, and the results of an empirical investigation of applying GSRM in a project.

Method

We combine the case study method with action research so that the results from the case study directly contribute to managing the studied project risks and to identifying ways to improve the proposed methodology. The data are collected from multiple sources and analysed both qualitatively and quantitatively.

Results

When risk factors are beyond the control of the project manager and project environment, it is difficult to control these risks. The project scope affects all the dimensions of risk. GSRM is a reasonable risk management method that can be employed in an industrial context. The study results have been compared against other study results in order to generalise findings and identify contextual factors.

Conclusion

A formal early-stage risk management practice provides early warning of the problems that exist in a project, and it contributes to overall project success. It is not necessary to always treat budget and schedule constraints as the top priority; issues such as requirements, change management, and user satisfaction can influence these constraints.

19.
Reuse of software assets in application development has held promise but faced challenges. In addressing these challenges, research has focused on organizational- and project-level factors while neglecting grass-roots adoption of reusable assets. Our research investigated factors associated with individual software developers' intention to reuse software assets and integrated them into TAM. Towards that end, 13 project managers were interviewed and 207 software developers were surveyed in India. Results revealed that technological-level factors (infrastructure) and individual-level factors (reuse-related experience and self-efficacy) were the major determinants. Implications are discussed.

20.
Evolving diverse ensembles using genetic programming has recently been proposed for classification problems with unbalanced data. Population diversity is crucial for evolving effective algorithms. Multilevel selection strategies that involve additional colonization and migration operations have shown better performance in some applications. Therefore, in this paper, we analyse the performance of evolving diverse ensembles using genetic programming for software defect prediction with unbalanced data under different selection strategies. We use colonization and migration operators along with three ensemble selection strategies for the multi-objective evolutionary algorithm. We compare the performance of the operators on software defect prediction datasets with varying levels of data imbalance. Moreover, to generalize the results, gain a broader view and understand the underlying effects, we replicated the same experiments on UCI datasets, which are often used in the evolutionary computing community. The use of multilevel selection strategies provides reliable results with relatively fast convergence speeds and outperforms the other evolutionary algorithms that are often used in this research area and investigated in this paper. This paper also presents a promising ensemble strategy based on a simple convex-hull approach and, at the same time, raises the question of whether an ensemble strategy based on the whole population should also be investigated.
