Found 20 related documents.
1.
This paper presents the results of an empirical study on the subjective evaluation of code smells that identify poorly evolvable structures in software. We propose the use of the term software evolvability to describe the ease of further developing a piece of software, and outline the research area from four different viewpoints. Furthermore, we describe the differences between human evaluations and automatic program analysis based on software evolvability metrics. The empirical component is based on a case study in a Finnish software product company, in which we studied two topics. First, we looked at the effect of the evaluator when subjectively evaluating the existence of smells in code modules. We found that the use of smells for code evaluation purposes can be difficult due to conflicting perceptions of different evaluators. However, the demographics of the evaluators partly explain the variation. Second, we applied selected source code metrics for identifying four smells and compared these results to the subjective evaluations. The metrics based on automatic program analysis and the human-based smell evaluations did not fully correlate. Based upon our results, we suggest that organizations make decisions regarding software evolvability improvement based on a combination of subjective evaluations and code metrics. Due to the limitations of the study, we also recognize the need for conducting more refined studies and experiments in the area of software evolvability.
2.
In a typical COBOL program, the data division makes up 50% of the lines of code. Automatic type inference can help developers understand the large collections of variable declarations contained therein by showing how variables are related based on their actual usage. The most problematic aspect of type inference is pollution: the phenomenon that types become too large and contain variables that intuitively should not belong to the same type. The aim of the paper is to provide empirical evidence for the hypothesis that the use of subtyping is an effective way of dealing with pollution. The main results include a tool set to carry out type inference experiments, a suite of metrics characterizing type inference outcomes, and the experimental observation that only one instance of pollution occurs in the case study conducted.
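The core of usage-based type inference can be illustrated with a union-find sketch: variables are merged into one inferred type whenever they are assigned or compared, and overly large types signal pollution. All variable names and the pollution threshold below are hypothetical, and the paper's subtyping refinement is not reproduced here.

```python
# Minimal sketch of usage-based type inference: variables used together in an
# assignment or comparison end up in the same inferred type. Illustrative only.
from collections import defaultdict


class TypeInference:
    def __init__(self):
        self.parent = {}

    def _find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def relate(self, a, b):
        """Record that a and b are used together (assignment/comparison)."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def types(self):
        groups = defaultdict(set)
        for v in self.parent:
            groups[self._find(v)].add(v)
        return list(groups.values())


ti = TypeInference()
ti.relate("CUST-ID", "ORDER-CUST-ID")   # MOVE CUST-ID TO ORDER-CUST-ID
ti.relate("ORDER-CUST-ID", "TMP-ID")    # IF ORDER-CUST-ID = TMP-ID ...
ti.relate("AMOUNT", "TOTAL")            # unrelated group

POLLUTION_THRESHOLD = 20  # hypothetical: flag suspiciously large types
for t in ti.types():
    if len(t) > POLLUTION_THRESHOLD:
        print("possible pollution:", sorted(t))
```

Without subtyping, every `relate` call merges types symmetrically, which is exactly how pollution spreads; the paper's contribution is to replace some of these merges with directed subtype edges.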
3.
Context
Source code revision control systems contain vast amounts of data that can be exploited for various purposes. For example, the data can be used as a basis for estimating future code maintenance effort in order to plan software maintenance activities. Previous work has extensively studied the use of metrics extracted from object-oriented source code to estimate future coding effort. In comparison, the use of other types of metrics for this purpose has received significantly less attention.
Objective
This paper applies machine learning techniques to unveil predictors of the yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems.
Method
The study is based on a collection of object-oriented code metrics, XML code metrics, and organisational metrics. Several models are constructed with different subsets of these metrics. The predictive power of these models is analysed based on a dataset extracted from eight open-source projects.
Results
The study shows that a code churn estimation model built purely with organisational metrics is superior to one built purely with code metrics. However, a combined model provides the highest predictive power.
Conclusion
The results suggest that code metrics in general, and XML metrics in particular, are complementary to organisational metrics for the purpose of estimating code churn.
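The model comparison described can be sketched as follows; the data, metric names, and the choice of a random forest learner are hypothetical stand-ins for the study's setup, not a reproduction of it.

```python
# Illustrative comparison of churn models built from different metric subsets.
# Data and learner choice are synthetic stand-ins for the study's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
code_metrics = rng.normal(size=(n, 3))   # e.g., WMC, CBO, LOC per module
org_metrics = rng.normal(size=(n, 2))    # e.g., author count, team turnover
churn = (0.5 * org_metrics[:, 0] + 0.2 * code_metrics[:, 2]
         + rng.normal(scale=0.3, size=n))

for name, X in [("code only", code_metrics),
                ("organisational only", org_metrics),
                ("combined", np.hstack([code_metrics, org_metrics]))]:
    score = cross_val_score(RandomForestRegressor(random_state=0), X, churn,
                            cv=5, scoring="r2").mean()
    print(f"{name:20s} R^2 = {score:.2f}")
```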
4.
There has been much recent interest in synthesis algorithms that generate finite state machines from scenarios of intended system behavior. One of the uses of such algorithms is in the transition from requirements scenarios to design. Despite much theoretical work on the nature of these algorithms, there has been very little work on applying them to practical applications. In this paper, we apply the Whittle & Schumann synthesis algorithm [32] to a component of an air traffic advisory system under development at NASA Ames Research Center. We not only apply the algorithm to generate state machine designs from scenarios but also show how to generate code from the generated state machines using existing commercial code generation tools. The results demonstrate the possibility of generating application code directly from scenarios of system behavior.
5.
Debugging deployed systems is an arduous and time-consuming task. It is often difficult to generate traces from deployed systems due to the disturbance and overhead that trace collection may cause on a system in operation. Many organizations also do not keep historical traces of failures. On the other hand, earlier techniques focusing on fault diagnosis in deployed systems require a collection of passing–failing traces, in-house reproduction of faults, or a historical collection of failed traces. In this paper, we investigate an alternative solution: how artificial faults, generated using software mutation in a test environment, can be used to diagnose actual faults in deployed software systems. The use of traces of artificial faults can provide relief when it is not feasible to collect different kinds of traces from deployed systems. Using artificial and actual faults, we also investigate the similarity of function call traces of different faults in functions. To achieve our goal, we use decision trees to build a model of traces generated from mutants and test it on faulty traces generated from actual programs. The application of our approach to various real-world programs shows that mutants can indeed be used to diagnose faulty functions in the original code with approximately 60–100% accuracy on reviewing 10% or less of the code, whereas contemporary techniques using pass–fail traces show poor results in the context of software maintenance. Our results also show that different faults in closely related functions occur with similar function call traces. The use of mutation in fault diagnosis shows promising results, but the experiments also show the challenges related to using mutants.
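The classification step might look roughly like this; a minimal sketch assuming each trace is reduced to a fixed vector of per-function call counts, with entirely synthetic data and an invented feature encoding.

```python
# Sketch: train a decision tree on traces from mutants (artificial faults) and
# apply it to a trace from a real failure. Features and data are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
functions = ["parse", "validate", "store", "render"]

# Rows: traces from mutants; columns: call counts per function.
mutant_traces = rng.integers(0, 20, size=(80, len(functions)))
faulty_function = rng.integers(0, len(functions), size=80)  # label per trace

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(mutant_traces, faulty_function)

# A trace from an actual deployed failure, encoded the same way.
actual_trace = np.array([[3, 17, 0, 5]])
proba = clf.predict_proba(actual_trace)[0]
ranked = clf.classes_[np.argsort(proba)[::-1]]
print("inspect in this order:", [functions[i] for i in ranked])
```

Ranking functions by predicted probability mirrors the abstract's framing: the developer reviews the top of the list first, so accuracy is reported against the fraction of code reviewed.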
6.
David Binkley, Mark Harman, Kiarash Mahdavi. Journal of Systems and Software, 2008, 81(12): 2287-2298
Programs express domain-level concepts in their source code. It might be expected that such concepts would have a degree of semantic cohesion. This cohesion ought to manifest itself in the dependence between statements that all contribute to the computation of the same concept. This paper addresses a set of research questions that capture this informal observation. It presents the results of experiments on 10 programs that explore the relationship between domain-level concepts and dependence in source code. The results show that code associated with concepts has a greater degree of coherence, with tighter dependence. This finding has positive implications for the analysis of concepts, as it provides an approach to decompose a program into smaller executable units, each of which captures the behaviour of the program with respect to a domain-level concept.
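One way to operationalise "tighter dependence" is to compare the density of dependence edges among a concept's statements against random statement sets of the same size. The dependence graph and concept labels below are invented, and this direct-edge density is only one of several plausible measures.

```python
# Sketch: dependence density inside a concept's statements vs. a random
# baseline. Graph and concept labels are made up for illustration.
import itertools
import random

# Statement-level dependence graph: statement -> statements it depends on.
depends = {1: set(), 2: {1}, 3: {2}, 4: {1}, 5: {4}, 6: set(), 7: {6}, 8: {3}}
concept = {1, 2, 3, 8}  # statements tagged with one domain-level concept


def density(stmts):
    pairs = list(itertools.combinations(stmts, 2))
    linked = sum(1 for a, b in pairs if b in depends[a] or a in depends[b])
    return linked / len(pairs) if pairs else 0.0


random.seed(0)
baseline = [density(random.sample(sorted(depends), len(concept)))
            for _ in range(1000)]
print(f"concept density : {density(concept):.2f}")
print(f"random baseline : {sum(baseline) / len(baseline):.2f}")
```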
7.
Otávio Augusto Lazzarini Lemos, Sushil Bajracharya, Cristina Lopes. Information and Software Technology, 2011, 53(4): 294-306
Context
Software developers spend considerable effort implementing auxiliary functionality used by the main features of a system (e.g., compressing/decompressing files, encrypting/decrypting data, scaling/rotating images). With the increasing amount of open source code available on the Internet, time and effort can be saved by reusing these utilities through informal practices of code search and reuse. However, when this type of reuse is performed in an ad hoc manner, it can be tedious and error-prone: code results have to be manually inspected and integrated into the workspace.
Objective
In this paper we introduce and evaluate the use of test cases as an interface for automating code search and reuse. We call our approach Test-Driven Code Search (TDCS). Test cases serve two purposes: (1) they define the behavior of the desired functionality to be searched; and (2) they test the matching results for suitability in the local context. We also describe CodeGenie, an Eclipse plugin we have developed that performs TDCS using a code search engine called Sourcerer.
Method
Our evaluation consists of two studies: an applicability study with 34 different features that were searched using CodeGenie, and a performance study comparing CodeGenie, Google Code Search, and a manual approach.
Results
Both studies present evidence of the applicability and good performance of TDCS in the reuse of auxiliary functionality.
Conclusion
This paper presents an approach to source code search and its application to the reuse of auxiliary functionality. Our exploratory evaluation shows promising results, which motivates the use and further investigation of TDCS.
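The essence of the test-as-interface idea can be sketched as follows. The candidate functions below are local stand-ins for snippets a code search engine such as Sourcerer would return; all names are invented.

```python
# Sketch of Test-Driven Code Search: one test case both specifies the wanted
# behaviour and filters candidate search results.
def candidate_a(xs):          # wrong: drops duplicates
    return sorted(set(xs))


def candidate_b(xs):          # satisfies the test below
    return sorted(xs)


def test(fn):
    """The 'interface': desired behaviour, expressed as a test case."""
    try:
        return fn([3, 1, 2, 1]) == [1, 1, 2, 3]
    except Exception:
        return False  # a crashing candidate simply fails the search


candidates = {"candidate_a": candidate_a, "candidate_b": candidate_b}
matches = [name for name, fn in candidates.items() if test(fn)]
print("reusable results:", matches)  # -> ['candidate_b']
```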
8.
Context
Code smells are manifestations of design flaws that can degrade code maintainability. So far, no research has investigated whether these indicators are useful for conducting system-level maintainability evaluations.
Aim
The research in this paper investigates the potential of code smells to reflect system-level indicators of maintainability.
Method
We evaluated four medium-sized Java systems using code smells and compared the results against previous evaluations of the same systems based on expert judgment and the Chidamber and Kemerer suite of metrics. The systems were maintained over a period of up to 4 weeks. During maintenance, effort (person-hours) and number of defects were measured to validate the different evaluation approaches.
Results
Most code smells are strongly influenced by size; consequently, code smells are not good indicators for comparing the maintainability of systems that differ greatly in size. Also, from the comparison of the different evaluation approaches, expert judgment was found to be the most accurate and flexible, since it considered effects due to the system's size and complexity and could adapt to different maintenance scenarios.
Conclusion
Code smell approaches show promise as indicators of the need for maintenance in a way that other purely metric-based approaches lack.
9.
The majority of Free and Open Source Software (FOSS) developers are mobile and often use different identities in the projects or communities they participate in. These characteristics pose challenges for researchers studying the presence and contributions of developers across multiple repositories. In this paper, we present a methodology, employ various statistical measures, and leverage Bayesian networks to study the patterns of contribution of 502 developers in both Version Control System (VCS) and mailing list repositories in 20 GNOME projects. Our findings show that only a small percentage of developers contribute to both repositories, and that this cohort makes more commits than it posts messages to mailing lists. The implications of these findings for understanding the patterns of contribution in FOSS projects and for the quality of the final product are discussed.
10.
Source code documentation often contains summaries of source code written by authors. Recently, automatic source code summarization tools have emerged that generate summaries without requiring author intervention. These summaries are designed to help readers understand the high-level concepts of the source code. Unfortunately, there is no agreed-upon understanding of what makes up a "good summary." This paper presents an empirical study examining summaries of source code written by authors, readers, and automatic source code summarization tools. The study examines the textual similarity between source code and summaries of source code using Short Text Semantic Similarity metrics. We found that readers use source code in their summaries more than authors do. Additionally, this study finds that the accuracy of a human-written summary can be estimated from the textual similarity of that summary to the source code.
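A crude version of the similarity measurement, using a plain bag-of-words cosine rather than the Short Text Semantic Similarity metrics the study actually uses; the identifiers and summaries are invented.

```python
# Sketch: textual similarity between a summary and source code via bag-of-words
# cosine. Illustrates the comparison being made, not the study's STSS metrics.
import math
import re
from collections import Counter


def terms(text):
    # split camelCase identifiers into lowercase terms before counting
    text = re.sub(r"([a-z])([A-Z])", r"\1 \2", text)
    return Counter(w.lower() for w in re.findall(r"[A-Za-z]+", text))


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


code = "int binarySearch(int[] sorted_values, int key) { ... }"
author_summary = "Searches for a key."
reader_summary = "Binary search over sorted values for a given key."

for label, s in [("author", author_summary), ("reader", reader_summary)]:
    print(f"{label} summary vs code: {cosine(terms(s), terms(code)):.2f}")
```

On this toy input the reader's summary scores higher because it reuses source code terms, which is the pattern the study reports.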
11.
12.
Resource oriented selection of rule-based classification models: An empirical case study
The amount of resources allocated for software quality improvements is often not enough to achieve the desired software quality. Software quality classification models that yield a risk-based quality estimation of program modules, such as fault-prone (fp) and not fault-prone (nfp), are useful as software quality assurance techniques. Their usefulness is largely dependent on whether enough resources are available for inspecting the fp modules. Since a given development project has its own budget and time limitations, a resource-based software quality improvement approach seems more appropriate for achieving its quality goals. A classification model should provide quality improvement guidance so as to maximize resource utilization.
We present a procedure for building software quality classification models from the limited-resources perspective. The essence of the procedure is the use of our recently proposed Modified Expected Cost of Misclassification (MECM) measure for developing resource-oriented software quality classification models. The measure penalizes a model, in terms of costs of misclassification, if the model predicts more fp modules than can be inspected with the allotted resources. Our analysis is presented in the context of our Rule-Based Classification Modeling (RBCM) technique. An empirical case study of a large-scale software system demonstrates the promising results of using the MECM measure to select an appropriate resource-based rule-based classification model.
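The selection criterion can be illustrated with a toy cost function: expected misclassification cost plus a penalty when the predicted fp count exceeds the inspection budget. This is a guess at the flavour of the measure, not the published MECM formula, and all costs and counts are invented.

```python
# Sketch: choose among candidate rule sets using a misclassification cost with
# a budget-overflow penalty. An illustrative stand-in for MECM, not the
# published formula.
C_FP = 1.0     # cost of inspecting a module wrongly flagged fault-prone
C_FN = 10.0    # cost of missing a truly fault-prone module
BUDGET = 40    # modules the team can afford to inspect
PENALTY = 5.0  # hypothetical per-module penalty beyond the budget

# (false positives, false negatives, predicted fp count) per candidate model
models = {"rule set A": (12, 9, 35), "rule set B": (30, 3, 70),
          "rule set C": (18, 6, 44)}


def cost(fp, fn, predicted_fp):
    base = C_FP * fp + C_FN * fn
    overflow = max(0, predicted_fp - BUDGET)
    return base + PENALTY * overflow


for m, stats in models.items():
    print(f"{m}: cost = {cost(*stats):.1f}")
print("selected:", min(models, key=lambda m: cost(*models[m])))
```

Note how rule set B, despite the fewest missed faults, is rejected because it flags far more modules than the budget allows.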
Taghi M. Khoshgoftaar is a professor in the Department of Computer Science and Engineering at Florida Atlantic University and the Director of the graduate programs and research. His research interests are in software engineering, software metrics, software reliability and quality engineering, computational intelligence applications, computer security, computer performance evaluation, data mining, machine learning, statistical modeling, and intelligent data analysis. He has published more than 300 refereed papers in these areas. He is a member of the IEEE, IEEE Computer Society, and IEEE Reliability Society. He was the general chair of the IEEE International Conference on Tools with Artificial Intelligence 2005.
Naeem Seliya is an Assistant Professor of Computer and Information Science at the University of Michigan-Dearborn. He received his Ph.D. in Computer Engineering from Florida Atlantic University, Boca Raton, FL, USA in 2005. His research interests include software engineering, data mining and machine learning, application and data security, bioinformatics, and computational intelligence. He is a member of IEEE and ACM.
13.
The research reported in this paper addresses the application of artificial intelligence to the automation of the engineering design process, in particular the structural design process. The application involves the development of a knowledge-based expert system (KBES). The resulting KBES, called Expert-Seisd, is developed in Common Lisp on an IBM PS/II system. In this paper, examples demonstrating the implementation of various aspects of Expert-Seisd as applied to structural design are presented. These include the user interface, database, knowledge base, inference engine, and knowledge acquisition.
14.
E. Nasseri, M. Shepperd. Journal of Systems and Software, 2010, 83(2): 303-315
Inheritance is a fundamental feature of the Object-Oriented (OO) paradigm. It is used to promote extensibility and reuse in OO systems. Understanding how systems evolve, and specifically trends in the movement and re-location of classes in OO hierarchies, can help us understand and predict future maintenance effort. In this paper, we explore how and where new classes were added, as well as where existing classes were deleted or moved across inheritance hierarchies, in multiple versions of four Java systems. We observed, first, that in one of the studied systems the same set of classes was continuously moved across the inheritance hierarchy. Second, in the same system, the most frequent changes were restricted to just one sub-part of the overall system. Third, a maximum of three levels may be a threshold when using inheritance in a system; beyond this level very little activity was observed, supporting earlier theories that, beyond three levels, complexity becomes overwhelming. We also found evidence of 'collapsing' hierarchies to bring classes up to shallower levels. Finally, we found that larger classes and highly coupled classes were moved more frequently than smaller and less coupled classes. Statistical evidence supported the view that larger classes and highly coupled classes were less cohesive than smaller and less coupled classes, and were thus more suitable candidates for being moved within a hierarchy.
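Measuring where classes sit in a hierarchy reduces to computing each class's depth of inheritance. A minimal sketch with an invented hierarchy, flagging classes beyond the three-level threshold the study observed:

```python
# Sketch: compute depth of inheritance (DIT) for each class from a
# class -> superclass map, flagging depths beyond three levels.
# The hierarchy is invented.
hierarchy = {"Object": None, "Component": "Object", "Container": "Component",
             "Panel": "Container", "Window": "Container",
             "Dialog": "Window"}


def depth(cls):
    d = 0
    while hierarchy[cls] is not None:
        cls = hierarchy[cls]
        d += 1
    return d


for cls in sorted(hierarchy):
    d = depth(cls)
    flag = "  <- beyond 3 levels" if d > 3 else ""
    print(f"{cls:10s} DIT = {d}{flag}")
```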
15.
An empirical study of a reverse engineering method for the aggregation relationship based on operation propagation
Dowming Yeh, Pei-chen Sun, William Chu, Chien-Lung Lin, Hongji Yang. Empirical Software Engineering, 2007, 12(6): 575-592
One of the major obstacles in reverse engineering legacy object-oriented systems is the identification of aggregation relationships. An aggregation relationship, also called a whole–part relationship, is a form of association relationship in which an object is considered to be a part of another object. This characteristic is mostly of a semantic nature; therefore, it is difficult to distinguish aggregation from association relationships by implementation mechanism. Most reverse engineering methods for aggregation relationships are based on the lifetime dependence of an object on another object, since many implementations of aggregation relationships result in such dependence. However, the research literature shows that lifetime dependence is not really a primary property of aggregation relationships. A reverse engineering approach is proposed on the basis of a primary characteristic of aggregation relationships: the propagation of operations. To compare the propagation-based method with the lifetime-based method, we apply both methods to ten class libraries, collect their output, and perform statistical analysis to determine the effectiveness of the two methods. The analysis results show that the propagation-based method performs significantly better than the lifetime-based method, and that by combining both methods the complete aggregation relationships can be uncovered for the class libraries in our experiment.
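A rough approximation of what "propagation of operations" looks like in code: a whole class forwards an operation to a part stored in a field. The detector below scans Python source with the standard ast module purely to illustrate the pattern; the paper's method targets other languages and is certainly more elaborate.

```python
# Sketch: flag candidate aggregation (whole-part) relationships by spotting
# operation propagation, i.e. a method forwarding a same-named call to a field.
import ast

SOURCE = """
class Engine:
    def start(self): ...

class Car:
    def __init__(self):
        self.engine = Engine()
    def start(self):            # Car.start propagates to its part
        self.engine.start()
"""

tree = ast.parse(SOURCE)
for cls in [n for n in tree.body if isinstance(n, ast.ClassDef)]:
    for meth in [n for n in cls.body if isinstance(n, ast.FunctionDef)]:
        for call in [n for n in ast.walk(meth) if isinstance(n, ast.Call)]:
            f = call.func
            # match self.<field>.<same-method-name>(...)
            if (isinstance(f, ast.Attribute) and f.attr == meth.name
                    and isinstance(f.value, ast.Attribute)
                    and isinstance(f.value.value, ast.Name)
                    and f.value.value.id == "self"):
                print(f"{cls.name} propagates {meth.name}() "
                      f"to part '{f.value.attr}'")
```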
16.
An empirical study of cycles among classes in Java
Advocates of the design principle "avoid cyclic dependencies among modules" have argued that cycles are detrimental to software quality attributes such as understandability, testability, reusability, buildability, and maintainability, yet folklore suggests such cycles are common in real object-oriented systems. In this paper we present the first significant empirical study of cycles among the classes of 78 open- and closed-source Java applications. We find that, of the applications comprising enough classes to support such a cycle, about 45% have a cycle involving at least 100 classes and around 10% have a cycle involving at least 1,000 classes. We present further empirical evidence to support the contention that these cycles are not due to intrinsic interdependencies between particular classes in a domain. Finally, we attempt to gauge the strength of connection among the classes in a cycle using the concept of a minimum edge feedback set.
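Cycles "involving at least N classes" correspond to strongly connected components of the class dependency graph. A minimal sketch on an invented graph, assuming the networkx library is available:

```python
# Sketch: find class cycles as strongly connected components of a class
# dependency graph. Dependency edges are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("Order", "Customer"), ("Customer", "Account"), ("Account", "Order"),
    ("Order", "Logger"), ("Util", "Logger"),
])

for scc in nx.strongly_connected_components(g):
    if len(scc) > 1:  # a component of size > 1 is a dependency cycle
        print(f"cycle of {len(scc)} classes: {sorted(scc)}")
```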
17.
Taghi M. Khoshgoftaar, Robert M. Szabo, Timothy G. Woodcock. Software Quality Journal, 1994, 3(3): 137-151
In this paper, we report the results of a study conducted on a large commercial software system written in assembly language. Unlike past studies, our data represent the unit test and integration phases and all categories of the maintenance phase: adaptive, perfective, and corrective. The results confirm that faults and change activity are related to software measurements. In addition, we report the relationship between the number of design change requests and software measurements. This new observation has the potential to aid the software engineering management process. Finally, we demonstrate the value of multiple regression models over simple regression models.
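The final claim, that multiple regression outperforms simple regression on this kind of data, can be illustrated in a few lines; the metric names and data below are synthetic.

```python
# Sketch: compare a simple (one-metric) regression against a multiple
# regression for predicting fault counts. Metrics and data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 150
loc = rng.normal(500, 100, n)      # lines of code per module
changes = rng.normal(10, 3, n)     # change requests per module
faults = 0.01 * loc + 0.8 * changes + rng.normal(0, 2, n)

X_simple = loc.reshape(-1, 1)
X_multi = np.column_stack([loc, changes])

simple = LinearRegression().fit(X_simple, faults)
multiple = LinearRegression().fit(X_multi, faults)

print(f"simple R^2  : {simple.score(X_simple, faults):.2f}")
print(f"multiple R^2: {multiple.score(X_multi, faults):.2f}")
```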
18.
Code cloning is one of the active research areas in the software engineering community. Researchers have conducted numerous empirical studies on code cloning and reported that 7% to 23% of the code in a typical software system has been cloned. However, there has been less awareness of code clones in dynamically-typed languages, and most studies are limited to statically-typed languages such as Java, C, and C++. In addition, most previous studies did not consider different application domains, such as standalone projects or web applications. As a result, very little is known about clones in dynamically-typed languages, such as JavaScript, across different application domains. In this paper, we report a large-scale clone detection experiment in a dynamically-typed programming language, JavaScript, for two application domains: web pages and standalone projects. Our experimental results showed that, unlike JavaScript standalone projects, JavaScript web applications have 95% inter-file clones and 91–97% widely scattered clones. We observed that web application developers created clones intentionally, and such clones may not be as risky as claimed in previous studies. Understanding the risks of cloning in web applications requires further study, as cloning may be due to either good or bad intentions. Also, we identified unique development practices, such as including browser-dependent or device-specific code in code clones of JavaScript web applications. This indicates that features of programming languages and technologies affect how developers duplicate code.
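A toy version of the kind of detector used in such studies: normalise identifiers and literals, then hash fixed-size token windows so that matching hashes indicate candidate clones. The window size and crude tokenisation are arbitrary choices here; real detectors use language-aware lexers.

```python
# Sketch: near-miss clone detection by comparing windows of normalised tokens.
import re


def tokens(src):
    raw = re.findall(r"[A-Za-z_]\w*|\d+|\S", src)
    # normalise identifiers and numbers so renamed clones still match
    return ["ID" if re.match(r"[A-Za-z_]", t) else
            "NUM" if t.isdigit() else t for t in raw]


def windows(src, k=8):
    ts = tokens(src)
    return {tuple(ts[i:i + k]) for i in range(len(ts) - k + 1)}


file_a = "function total(xs){var s=0;for(var x of xs){s+=x;}return s;}"
file_b = "function sum(vals){var acc=0;for(var v of vals){acc+=v;}return acc;}"

shared = windows(file_a) & windows(file_b)
print(f"{len(shared)} shared token windows -> candidate clone pair")
```

Because identifiers are normalised away, the two renamed-but-identical JavaScript functions above share all their token windows, which is the behaviour a type-2 clone detector aims for.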
19.
An empirical analysis of information retrieval based concept location techniques in software comprehension
Brendan Cleary, Chris Exton, Jim Buckley, Michael English. Empirical Software Engineering, 2009, 14(1): 93-130
Concept location, the problem of associating human-oriented concepts with their counterpart solution-domain concepts, is a fundamental problem that lies at the heart of software comprehension. Recent research has attempted to alleviate the impact of the concept location problem through the application of methods drawn from the information retrieval (IR) community. Here we present a new approach based on a complementary IR method that also has a sound basis in cognitive theory. We compare our approach to related work through an experiment and present our conclusions. This research adapts and expands upon existing language modelling frameworks in IR for use in concept location in software systems. In doing so it is novel in that it leverages implicit information available in system documentation. Surprisingly, empirical evaluation of this approach showed little performance benefit overall, and several possible explanations for this finding are offered.
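The language-modelling family of IR methods ranks documents (here, source files) by the likelihood of generating the concept query. A minimal query-likelihood sketch with Jelinek-Mercer smoothing, over an invented corpus; the paper's actual model is more sophisticated.

```python
# Sketch: rank source files for a concept query with a query-likelihood
# language model (Jelinek-Mercer smoothing). Corpus and query are invented.
import math
from collections import Counter

files = {
    "Cart.java": "add item cart total price checkout cart",
    "Auth.java": "login user password session token verify",
    "Invoice.java": "invoice total price tax item print",
}
query = "checkout total price".split()
LAMBDA = 0.7  # weight on the document model vs. the collection model

doc_tf = {f: Counter(text.split()) for f, text in files.items()}
coll_tf = sum(doc_tf.values(), Counter())
coll_len = sum(coll_tf.values())


def score(f):
    tf, dlen = doc_tf[f], sum(doc_tf[f].values())
    s = 0.0
    for w in query:
        p = LAMBDA * tf[w] / dlen + (1 - LAMBDA) * coll_tf[w] / coll_len
        s += math.log(p) if p > 0 else float("-inf")
    return s


for f in sorted(files, key=score, reverse=True):
    print(f"{f:14s} log-likelihood = {score(f):.2f}")
```

Smoothing with the collection model keeps files that miss one query term from scoring negative infinity, which matters for short concept queries.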
20.
Jingyue Li, Finn Olav Bjørnson, Reidar Conradi, Vigdis B. Kampenes. Empirical Software Engineering, 2006, 11(3): 433-461
More and more software projects use Commercial-Off-The-Shelf (COTS) components. Although previous studies have proposed specific COTS-based development processes, there are few empirical studies that investigate how to use and customize COTS-based development processes for different project contexts. This paper describes an exploratory study of the state of the practice of COTS-based development processes. Sixteen software projects in Norwegian IT companies were studied through structured interviews. The results show that COTS-specific activities can be successfully incorporated into most traditional development processes (such as waterfall or prototyping), given proper guidelines to reduce risks and provide specific assistance. We have identified four COTS-specific activities (the build vs. buy decision, COTS component selection, learning and understanding COTS components, and COTS component integration) and one new role, that of a knowledge keeper. We have also found a special COTS component selection activity for unfamiliar components, combining Internet searches with hands-on trials. The process guidelines are expressed as scenarios, problems encountered, and examples of good practice. They can be used to customize actual development processes, for example by deciding in which lifecycle phase to place the new activities. Such customization crucially depends on the project context, such as previous familiarity with possible COTS components and the flexibility of requirements.