Similar Documents
20 similar documents found.
1.
Context: Object-oriented software undergoes continuous changes, which are often made without consideration of the software's overall structure and design rationale. Hence, over time, the design quality of the software degrades, causing software aging or software decay. Refactoring offers a means of restructuring software design to improve maintainability. In practice, the effort that can be invested in refactoring is limited; the problem therefore calls for a method of identifying cost-effective refactorings that efficiently improve maintainability. The cost-effectiveness of applied refactorings can be expressed as the maintainability improvement gained per unit of invested refactoring effort (cost): the more cost-effective the applied refactorings, the greater the improvement in maintainability. Several studies support the argument that changes are more likely to occur in the pieces of code most frequently exercised by users; applying refactorings to these parts would therefore improve maintainability quickly. For this reason, dynamic information is needed to identify the entities involved in given scenarios/functions of a system, and refactoring candidates need to be extracted from within these entities. Objective: This paper provides an automated approach to identifying cost-effective refactorings using dynamic information in object-oriented software. Method: To perform cost-effective refactoring, refactoring candidates are extracted in a way that reduces dependencies; these dependencies are what we refer to as dynamic information. A dynamic profiling technique is used to obtain the dependencies of entities based on dynamic method calls. Based on those dynamic dependencies, refactoring-candidate extraction rules are defined and a maintainability evaluation function is established. Refactoring candidates are then extracted and assessed using the defined rules and the evaluation function, respectively. The best refactoring (i.e., the one that most improves maintainability) is selected from among the candidates, candidate extraction and assessment are re-performed to select the next refactoring, and the identification process is iterated until no further refactoring candidates that improve maintainability are found. Results: We evaluate the proposed approach on three open-source projects. The first set of results shows that dynamic information is helpful in identifying cost-effective refactorings that quickly improve maintainability, and that considering dynamic information in addition to static information provides even more opportunities to identify cost-effective refactorings. The second set of results shows that dynamic information is helpful in extracting refactoring candidates in the classes where real changes occurred; in addition, the results support the contention that using dynamic information helps to extract refactoring candidates from highly ranked, frequently changed classes. Conclusion: The proposed approach helps to identify cost-effective refactorings and supports an automated refactoring identification process.
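As an illustration of the iterative selection loop described in this abstract, the following minimal Python sketch greedily picks the most cost-effective candidate until none improves maintainability. The Candidate fields (a precomputed maintainability gain and an effort cost) and the shrinking candidate pool are assumptions for illustration, not the paper's actual extraction rules or evaluation function.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Candidate:
    name: str
    gain: float   # estimated maintainability improvement (assumed precomputed)
    cost: float   # estimated refactoring effort

def pick_best(candidates: List[Candidate]) -> Optional[Candidate]:
    """Most cost-effective candidate that still improves maintainability."""
    improving = [c for c in candidates if c.gain > 0]
    if not improving:
        return None
    return max(improving, key=lambda c: c.gain / c.cost)

def iterative_selection(pool: List[Candidate]) -> List[Candidate]:
    """Greedy loop from the abstract: pick the best candidate, 'apply' it,
    and repeat until no remaining candidate improves maintainability."""
    pool, applied = list(pool), []
    while (best := pick_best(pool)) is not None:
        applied.append(best)
        pool.remove(best)   # stand-in for re-extracting candidates after applying
    return applied

if __name__ == "__main__":
    print(iterative_selection([
        Candidate("Move Method A.m -> B", gain=0.08, cost=2.0),
        Candidate("Extract Class from C", gain=0.12, cost=5.0),
        Candidate("Move Field A.f -> B", gain=-0.01, cost=1.0),
    ]))
```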

2.
Placement of attributes/methods within classes in an object-oriented system is usually guided by conceptual criteria and aided by appropriate metrics. Moving state and behavior between classes can help reduce coupling and increase cohesion, but it is nontrivial to identify where such refactorings should be applied. In this paper, we propose a methodology for the identification of Move Method refactoring opportunities that offers a way to resolve many common Feature Envy bad smells. An algorithm that employs the notion of distance between system entities (attributes/methods) and classes extracts a list of behavior-preserving refactorings based on the examination of a set of preconditions. In practice, a software system may exhibit such problems in many different places. Therefore, our approach measures the effect of all refactoring suggestions based on a novel Entity Placement metric that quantifies how well entities have been placed in system classes. The proposed methodology can be regarded as a semi-automatic approach, since the designer will eventually decide whether a suggested refactoring should be applied or not based on conceptual or other design quality criteria. The evaluation of the proposed approach has been performed considering qualitative, metric, conceptual, and efficiency aspects of the suggested refactorings in a number of open-source projects.
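One common way to make the notion of distance between entities and classes concrete is a Jaccard-style distance between the set of entities a method accesses and the set of members a class owns; a Feature Envy symptom is then a method that lies closer to a foreign class than to its own. The sketch below uses that assumption and is not the paper's exact Entity Placement computation.

```python
def jaccard_distance(a: set, b: set) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0.0 means the two sets fully coincide."""
    union = a | b
    return (1.0 - len(a & b) / len(union)) if union else 0.0

def suggest_move_method(method_accesses: dict, class_members: dict):
    """method_accesses: (owner class, method) -> entities the method accesses
       class_members:   class -> set of its own attributes/methods
       Yields (method, target class) whenever some foreign class is strictly
       closer to the method than its owner (a typical Feature Envy symptom)."""
    for (owner, method), accessed in method_accesses.items():
        home = jaccard_distance(accessed, class_members[owner] - {method})
        foreign = [(jaccard_distance(accessed, members), cls)
                   for cls, members in class_members.items() if cls != owner]
        if foreign:
            dist, target = min(foreign)
            if dist < home:
                yield method, target

suggestions = suggest_move_method(
    {("Order", "printCustomerLabel"): {"name", "address", "zipCode"}},
    {"Order": {"items", "total", "printCustomerLabel"},
     "Customer": {"name", "address", "zipCode"}})
print(list(suggestions))   # [('printCustomerLabel', 'Customer')]
```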

3.
Context: Applying a refactoring operation creates a new set of dependencies in the revised design as well as a new set of further refactoring candidates. Studies of stepwise refactoring recommendation approaches have applied one refactoring at a time, which is inefficient because identifying the best candidate in each iteration of the refactoring identification process is computation-intensive. It is therefore desirable to accurately identify multiple, independent candidates to enhance the efficiency of the refactoring process. Objective: We propose an automated approach to identify multiple refactorings that can be applied simultaneously to maximize the maintainability improvement of software. Our approach attains the same degree of maintainability enhancement as identifying the single best refactoring at a time, but in fewer iterations (at lower computation cost). Method: The concept of the maximal independent set (MIS) enables us to identify multiple refactoring operations that can be applied simultaneously. Each MIS contains a group of refactoring candidates that neither affect (i.e., enable or disable) one another nor influence each other's effect on maintainability. A refactoring-effect delta table quantifies the degree of maintainability improvement of each elementary candidate. In each iteration of the refactoring identification process, the multiple refactorings that best improve maintainability are selected from among the sets of refactoring candidates (MISs). Results: We demonstrate the effectiveness and efficiency of the proposed approach by simulating refactoring operations on several large-scale open-source projects such as jEdit, Columba, and JGit. The results show that our approach improves maintainability by the same degree as, or to a better extent than, the competing method of choosing one refactoring candidate at a time, in a significantly smaller number of iterations. Thus, applying multiple refactorings at a time is both effective and efficient. Conclusion: Our proposed approach helps improve maintainability as well as the productivity of refactoring identification.
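A minimal sketch of the batch-selection step: given a delta table of maintainability improvements and a set of conflicting candidate pairs, a greedy pass builds a maximal independent set, preferring the largest improvements. The conflict representation and the greedy order are illustrative assumptions; the paper's construction of MISs may differ.

```python
def maximal_independent_set(deltas: dict, conflicts: set) -> set:
    """Greedy maximal independent set over refactoring candidates.
       deltas:    candidate -> maintainability improvement (the 'delta table')
       conflicts: frozensets {a, b} of candidates that enable/disable or
                  influence each other and so cannot be applied in one batch."""
    chosen = set()
    for cand in sorted(deltas, key=deltas.get, reverse=True):  # best gain first
        if deltas[cand] <= 0:
            continue
        if all(frozenset((cand, other)) not in conflicts for other in chosen):
            chosen.add(cand)
    return chosen

batch = maximal_independent_set(
    {"R1": 0.10, "R2": 0.07, "R3": 0.05, "R4": -0.02},
    {frozenset(("R1", "R2"))})   # R1 and R2 interfere, so only one of them is kept
print(batch)                     # {'R1', 'R3'}
```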

4.
In the current trend, the Extreme Programming methodology is widely adopted by small and medium-sized projects for dealing with rapidly or indefinitely changing requirements. The test-first strategy and code refactoring are important practices of Extreme Programming for rapid development and quality support. The test-first strategy emphasizes that test cases are designed before system implementation to keep the correctness of artifacts during software development, whereas refactoring is the removal of "bad smell" code to improve quality without changing its semantics. However, the test-first strategy may conflict with code refactoring in the sense that the original test cases may be broken or become inefficient for testing programs that are revised by code refactoring. In general, developers revise the test cases manually since it is not complicated. However, when developers perform a pattern-based refactoring to improve quality, the effort of revising the test cases is much greater than in simple code refactoring. In our observation, a pattern-based refactoring is composed of many simple and atomic code refactorings. If we have the composition relationship and the mapping rules between code refactoring and test case refactoring, we may infer a test case revision guideline for pattern-based refactoring. Based on this idea, in this research we propose a four-phase approach to guide the construction of test case refactorings for design patterns. We also introduce our approach using some well-known design patterns and evaluate its feasibility by means of test coverage.

5.
Exception handling design can improve robustness, which is an important quality attribute of software. However, exception handling design remains one of the least understood and considered parts of software development. In addition, as with most software design problems, even if developers are asked to design exception handling beforehand, it is very difficult to get the design right on the first attempt. Therefore, improving exception handling design after the software is constructed is necessary. This paper applies refactoring to incrementally improve exception handling design. We first establish four exception handling goals to stage the refactoring actions. Next, we introduce exception handling smells that hinder the achievement of the goals and propose exception handling refactorings to eliminate the smells. We suggest that exception handling refactoring is best driven by bug fixing because it provides measurable quality improvement results that explicitly reveal the benefits of refactoring. We conduct a case study with the proposed refactorings on a real-world banking application and provide a cost-effectiveness analysis. The result shows that our approach can effectively improve exception handling design, enhance software robustness, and save maintenance cost. Our approach simplifies the process of applying a large exception handling refactoring by dividing the process into clearly defined intermediate milestones that are easily exercised and verified. The approach can be applied in general software development and in legacy system maintenance.

6.
Software models, defined as code abstractions, are iteratively refined, restructured, and evolved for many reasons, such as reflecting changes in requirements or modifying a design to enhance existing features. To understand the evolution of a model a posteriori, change detection approaches have been proposed for models. The majority of existing approaches succeed in detecting atomic changes. However, composite changes, such as refactorings, are difficult to detect due to the many possible combinations of atomic changes or to changes hidden in intermediate model versions that may no longer be available. Moreover, a multitude of refactoring sequences may be used to describe the same model evolution. In this paper, we propose a multi-objective approach to detect model changes as a sequence of refactorings. Our approach takes as input an exhaustive list of possible types of model refactoring operations, the initial model, and the revised model, and generates as output a list of refactoring applications representing a good compromise between the following two objectives: (i) maximize the similarity between the expected revised model and the model generated by applying the refactoring sequence to the initial model, and (ii) minimize the number of atomic changes used to describe the evolution. Minimizing the number of atomic changes is important because it may be easier for a designer to understand and analyze a sequence of refactorings (composite model changes) rather than an equivalent large list of atomic changes (Weissgerber and Diehl 2006). Due to the huge number of possible refactoring sequences, a metaheuristic search method is used to explore the space of possible solutions. To this end, we use the non-dominated sorting genetic algorithm (NSGA-II) to find the best trade-off between our two objectives. The paper reports on the results of an empirical study of our multi-objective model change detection technique applied to various versions of real-world models taken from open-source projects and one industrial project. We compared our approach to a simple deterministic greedy algorithm, multi-objective particle swarm optimization (MOPSO), an existing mono-objective change detection approach, and two model change detection tools not based on computational search. The statistical test results provide evidence to support the claim that our proposal enables the generation of change detection solutions with correctness higher than 85% on average, using a variety of real-world scenarios.
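To make the two objectives tangible, the sketch below treats a model as a set of facts and a refactoring as a (removed, added, atomic-change-count) triple, and returns the objective vector a search such as NSGA-II would optimize. The model encoding and the similarity proxy are assumptions for illustration only.

```python
def similarity(expected: set, generated: set) -> float:
    """Fraction of expected model facts reproduced -- an assumed, simple
    stand-in for the model comparison used in the paper."""
    return len(expected & generated) / len(expected) if expected else 1.0

def evaluate(initial: set, revised: set, sequence) -> tuple:
    """Objective vector for one candidate refactoring sequence:
    maximize similarity to the revised model, minimize atomic changes."""
    generated, atomic = set(initial), 0
    for removed, added, n_atomic in sequence:   # one refactoring per triple
        generated = (generated - removed) | added
        atomic += n_atomic
    return similarity(revised, generated), atomic

initial = {("ClassA", "owns", "m1"), ("ClassA", "owns", "m2")}
revised = {("ClassB", "owns", "m1"), ("ClassA", "owns", "m2")}
move_m1 = ({("ClassA", "owns", "m1")}, {("ClassB", "owns", "m1")}, 2)
print(evaluate(initial, revised, [move_m1]))    # (1.0, 2)
```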

7.
刘阳  刘秋荣  刘辉 《计算机科学》2015,42(12):105-107
Automatic detection of refactorings in software history is currently a hot research topic in the software refactoring field. Its main purpose is to help programmers or software maintainers understand the history of software evolution, and to make it easier to apply corresponding refactoring operations to client code based on the refactoring history of the service code it depends on. Although researchers have proposed a variety of automated approaches for detecting refactoring history, no method or tool for detecting extract-method refactorings has been reported so far. We therefore propose an approach that automatically detects extract-method refactorings by comparing versions, and we implemented it and validated its effectiveness. Experiments on eight open-source projects show a precision of 65%-90%. In addition, on a small project we obtained the complete set of extract-method refactorings by monitoring the programmers' refactoring operations, and on that basis computed both the recall and the precision of the detection algorithm to be 85%.
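A simple version-comparison heuristic in the spirit of this approach: a newly added method whose statements largely come from code removed from an existing method is flagged as a likely extract-method refactoring. The statement normalization and the overlap threshold below are assumptions, not the paper's detection rules.

```python
def detect_extract_method(old_methods: dict, new_methods: dict, min_overlap=0.8):
    """old_methods / new_methods: method name -> list of normalized statements.
       A new method is reported as likely extracted from an old 'host' method
       when most of its body consists of statements removed from that host."""
    suggestions = []
    for name, body in new_methods.items():
        if name in old_methods or not body:
            continue                              # only consider newly added methods
        for host, old_body in old_methods.items():
            removed = set(old_body) - set(new_methods.get(host, []))
            overlap = len(set(body) & removed) / len(set(body))
            if overlap >= min_overlap:
                suggestions.append((host, name, overlap))
    return suggestions

old = {"process": ["read()", "validate()", "log()", "save()"]}
new = {"process": ["read()", "check()", "save()"],
       "check":   ["validate()", "log()"]}
print(detect_extract_method(old, new))            # [('process', 'check', 1.0)]
```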

8.
Refactoring is a widely accepted technique to improve the structure of object-oriented software. Nevertheless, existing tool support remains restricted to automatically applying refactoring transformations. Deciding what to refactor and which refactoring to apply remains a difficult manual process, due to the many dependencies and interrelationships between relevant refactorings. In this paper, we represent refactorings as graph transformations, and we propose the technique of critical pair analysis to detect the implicit dependencies between refactorings. The results of this analysis can help the developer make an informed decision about which refactoring is most suitable in a given context and why. We report on several experiments we carried out in the AGG graph transformation tool to support our claims.

9.
Refactoring can result in code with improved maintainability and is considered a preventive maintenance activity. Managers of large projects need ways to decide where to apply scarce resources when performing refactoring, and there is a lack of tools for supporting such decisions. We introduce a rank-based, software-measure-driven refactoring decision support approach to assist managers. The approach uses various static measures to develop a weighted rank, ranking the classes or packages that need refactoring. We undertook two case studies to examine the effectiveness of the approach. Specifically, we wanted to see if the decision support tool yielded results similar to those of human analysts/managers, and in less time, so that it can be used to augment human decision making. In the first study, we found that our approach identified classes as needing refactoring that were also identified by humans. In the second study, a hierarchical approach was used to identify packages that had actually been refactored in 15 releases of the open-source project Tomcat. We examined the overlap between the tool's findings and the actual refactoring activities. The tool reached 100% recall at the package level and 86.7% at the class level. Though these studies were limited in size and scope, it appears that this approach is worthy of further examination.
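A minimal sketch of a weighted rank built from static measures: each metric is min-max normalized across classes and combined with a weight, and classes are ranked by the resulting score. The concrete metrics (WMC, CBO, LOC) and weights are assumptions chosen for illustration.

```python
def weighted_rank(measures: dict, weights: dict):
    """measures: class -> {metric: raw value}; weights: metric -> weight.
       Min-max normalize each metric across classes, combine with the weights,
       and rank classes by the resulting score (highest = refactor first)."""
    metrics = list(weights)
    lo = {m: min(v[m] for v in measures.values()) for m in metrics}
    hi = {m: max(v[m] for v in measures.values()) for m in metrics}
    def norm(m, x):
        return 0.0 if hi[m] == lo[m] else (x - lo[m]) / (hi[m] - lo[m])
    scores = {c: sum(weights[m] * norm(m, v[m]) for m in metrics)
              for c, v in measures.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(weighted_rank(
    {"Parser":  {"WMC": 58, "CBO": 21, "LOC": 1200},
     "Config":  {"WMC": 12, "CBO": 4,  "LOC": 300},
     "Session": {"WMC": 35, "CBO": 15, "LOC": 800}},
    {"WMC": 0.4, "CBO": 0.4, "LOC": 0.2}))   # ['Parser', 'Session', 'Config']
```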

10.
Aspect-oriented programming is a new programming paradigm, and aspect-oriented refactoring is currently a hot research topic in aspect-oriented software development. This paper first classifies aspect-oriented refactorings, then introduces a role-based approach to refactoring crosscutting concerns, and finally, on that basis, proposes a template-based framework for aspect-oriented refactoring.

11.
One purpose of software metrics is to measure the quality of programs. The results can be used, for example, to predict maintenance costs or to improve code quality. An emerging view is that if software metrics are going to be used to improve quality, they must help in finding code that should be refactored. Often, refactoring or applying a design pattern is related to the role of the class to be refactored. In client-based metrics, the project gives the class a context: these metrics measure how a class is used by other classes in that context. We present a new client-based metric, LCIC (Lack of Coherence in Clients), which analyses whether the class being measured has a coherent set of roles in the program. Interfaces represent the roles of classes. If a class does not have a coherent set of roles, it should be refactored, or a new interface should be defined for the class. We have implemented a tool for measuring LCIC for Java projects in the Eclipse environment. We calculated LCIC values for classes of several open-source projects. We compare these results with the results of other related metrics and inspect the measured classes to find out what kinds of refactorings are needed. We also analyse the relation of different design patterns and refactorings to our metric. Our experiments reveal the usefulness of client-based metrics for improving the quality of code.
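The sketch below captures the intuition behind a "lack of coherence in clients" measure: if the clients of a class use disjoint subsets of its members, the class serves unrelated roles and is a candidate for splitting or for extracting role interfaces. The formula (fraction of client pairs with no shared members) is a simplified proxy, not the paper's LCIC definition.

```python
from itertools import combinations

def client_incoherence(client_usage: dict) -> float:
    """client_usage: client class -> set of members of the measured class it uses.
       Returns the fraction of client pairs that share no members at all; a high
       value hints at splitting the class or introducing role interfaces."""
    pairs = list(combinations(client_usage.values(), 2))
    if not pairs:
        return 0.0
    disjoint = sum(1 for a, b in pairs if not (a & b))
    return disjoint / len(pairs)

print(client_incoherence({
    "ReportView": {"render", "format"},
    "Exporter":   {"render", "toPdf"},
    "Scheduler":  {"nextRun", "cancel"},   # uses a completely different role
}))   # ~0.67: two of the three client pairs share nothing
```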

12.
Refactoring has been reported as a helpful technique for systematically improving non-functional attributes of software. This paper evaluates the relevance of refactoring for improving usability in web applications. We conducted an experiment with two replications at different locations, with subjects of different profiles. The objects chosen for the experiment were two e-commerce applications that exhibit business processes common in today's web usage. Through the experiment we found that half of the studied refactorings cause a significant improvement in usability. The rest of the refactorings required a post-hoc analysis in which we considered aspects such as user expertise in interacting with web applications or the type of application. We conclude that, when improving quality in use, the success of the refactoring process depends on several factors, including the type of software system, the context, and the users. We have analyzed all these aspects, which developers must consider for better decision support when prioritizing improvements and weighing the effort involved.

13.
Refactorings can be used to improve the structure of software artefacts while preserving the semantics of the encapsulated information. Various types of refactorings have been proposed and implemented for programming languages (e.g., Java or C#). With the advent of model-driven software development (MDSD), a wealth of modelling languages has arisen, and the need to restructure models in a way similar to programs has emerged. Since parts of these modelling languages are often very similar, we consider it beneficial to reuse the core transformation steps of refactorings across languages. In this sense, reusing the abstract transformation steps and the abstract participating elements suggests itself. Previous work in this field indicates that refactorings can be specified generically to foster their reuse. However, existing approaches can only handle certain types of modelling languages and reuse refactorings solely once per language. In this paper, a novel approach based on role models for specifying generic refactorings is presented. Role models are suitable for this problem since they support the declaration of roles which have to be played in a certain context. Applied to generic refactoring, the contexts are the different refactorings and the roles are the participating elements. We discuss how this resolves the limitations of previous works, as well as how specific refactorings can be defined as extensions of generic ones. The approach was implemented in our tool Refactory, based on the Eclipse Modeling Framework (EMF), and evaluated using multiple modelling languages and refactorings. In addition, this paper investigates the recommendation of refactoring specifications. This is motivated by the fact that language designers have many possibilities for enabling refactorings in their modelling languages with regard to the language structures. To overcome this problem and to support language designers in deciding which refactorings to enable, we propose a solution and a prototypical implementation.

14.
Refactoring large systems involves several sources of uncertainty related to the severity levels of the code smells to be corrected and the importance of the classes in which the smells are located. Both the severity and the importance of identified refactoring opportunities (e.g., code smells) are difficult to estimate. In fact, due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. In addition, some code fragments can contain severe quality issues while not playing an important role in the system. To address this problem, we introduce a robust multi-objective model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between three objectives to maximize: quality improvement, and the severity and importance of the refactoring opportunities to be fixed. We evaluated our approach using eight open-source systems and one industrial project, and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in all the experiments, which are based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in quality to those suggested by existing approaches, to provide better prioritization of refactoring opportunities, and to carry an acceptable robustness price.
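One way to picture the robustness aspect: the severity and importance of the corrected smells are uncertain, so a solution can be scored by its worst case over sampled scenarios. The sketch below uses that Monte-Carlo-style assumption; the paper's NSGA-II formulation may model robustness differently.

```python
import random

def robust_objectives(quality_gain: float, fixed_smells: list,
                      scenarios: int = 200, seed: int = 0) -> tuple:
    """Worst-case objective vector for one refactoring solution.
       Each corrected smell carries uncertain 'severity' and 'importance'
       given as (low, high) ranges; we sample scenarios and keep the worst
       totals, so a solution only scores well if it is good in every scenario."""
    rng = random.Random(seed)
    worst_severity = worst_importance = float("inf")
    for _ in range(scenarios):
        severity = sum(rng.uniform(*s["severity"]) for s in fixed_smells)
        importance = sum(rng.uniform(*s["importance"]) for s in fixed_smells)
        worst_severity = min(worst_severity, severity)
        worst_importance = min(worst_importance, importance)
    return quality_gain, worst_severity, worst_importance   # all three maximized

print(robust_objectives(0.15, [
    {"severity": (0.6, 0.9), "importance": (0.3, 0.8)},
    {"severity": (0.2, 0.5), "importance": (0.7, 1.0)},
]))
```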

15.
Refactoring the Physical Design of C++ Programs
Integrating the basic ideas of refactoring with the fundamental techniques of physical design, this paper proposes the concept of physical refactoring. Physical refactoring is the redesign of a software system's physical structure; its goal is to adjust the organizational structure of the software without changing its externally observable behavior, thereby improving development efficiency and maintainability. On this basis, an iterative "identify-refactor-evaluate" process is proposed for carrying out physical refactoring, and commonly used physical refactoring techniques are introduced. A case study shows that physical refactoring can effectively optimize a system's physical structure and enables developers to continuously improve software quality from multiple perspectives.

16.
Search-based software engineering (SBSE) solutions are still not scalable enough to handle high-dimensional objective spaces. The majority of existing work treats software engineering problems from a single- or bi-objective point of view, where the main goal is to maximize or minimize one or two objectives. However, most software engineering problems are naturally complex, and many conflicting objectives need to be optimized. Software refactoring is one of these problems, involving finding a compromise between several quality attributes to improve the quality of the system while preserving its behavior. To this end, we propose a novel representation of the refactoring problem as a many-objective one, where every quality attribute to improve is considered an independent objective to be optimized. In our approach, based on the recent NSGA-III algorithm, the refactoring solutions are evaluated using a set of 8 distinct objectives. We evaluated this approach on one industrial project and seven open-source systems. We compared our findings to several other many-objective techniques (IBEA, MOEA/D, GrEA, and DBEA-Eps), an existing multi-objective approach, a mono-objective technique, and an existing refactoring technique not based on heuristic search. Statistical analysis of our experiments over 31 runs shows the efficiency of our approach.

17.
Extract method is one of the most popular software refactorings. However, little work has been done to investigate or validate the major motivations for such refactorings. Digging into this issue might help researchers to improve tool support for extract method refactorings, e.g., by proposing better tools to recommend refactoring opportunities and to select the fragments to be extracted. To this end, we conducted an interview with 25 developers, and our results suggest that current reuse, decomposition of long methods, clone resolution, and future reuse are the major motivations for extract method refactorings. We also validated the results by analyzing the refactoring history of seven open-source applications. The analysis results suggest that current reuse was the primary motivation for 56% of extract method refactorings, decomposition of methods was the primary motivation for 28%, and clone resolution was the primary motivation for 16%. These findings suggest that recommending extract method opportunities by analyzing only the inner structure (e.g., complexity and length) of methods would miss many extract method opportunities. They also suggest that extract method refactorings are often driven by current and immediate reuse. Consequently, how to recognize or predict reuse requirements in a timely manner during software evolution may play a key role in the recommendation and automation of extract method refactorings. We also investigated the likelihood of the extracted methods being reused in the future, and our results suggest that such methods have only a small chance (12%) of being reused in the future unless the extracted fragment can be reused immediately in software evolution and extracting it can resolve existing clones at the same time.

18.
In spite of several decades of software metrics research and practice, there is little understanding of how software metrics relate to one another, nor is there any established methodology for comparing them. We propose a novel experimental technique, based on search-based refactoring, to ‘animate’ metrics and observe their behaviour in a practical setting. Our aim is to promote metrics to the level of active, opinionated objects that can be compared experimentally to uncover where they conflict, and to understand better the underlying cause of the conflict. Our experimental approaches include semi-random refactoring, refactoring for increased metric agreement/disagreement, refactoring to increase/decrease the gap between a pair of metrics, and targeted hypothesis testing. We apply our approach to five popular cohesion metrics using ten real-world Java systems, involving 330,000 lines of code and the application of over 78,000 refactorings. Our results demonstrate that cohesion metrics disagree with each other in a remarkable 55 % of cases, that Low-level Similarity-based Class Cohesion (LSCC) is the best representative of the set of metrics we investigate while Sensitive Class Cohesion (SCOM) is the least representative, and we discover several hitherto unknown differences between the examined metrics. We also use our approach to investigate the impact of including inheritance in a cohesion metric definition and find that doing so dramatically changes the metric.
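A small sketch of how metric disagreement can be quantified once metrics are "animated": for each applied refactoring, record the change in two metrics and count the steps on which they move in opposite directions. The sign convention (positive delta means improvement for both metrics) is an assumption for illustration.

```python
def sign(x: float) -> int:
    return (x > 0) - (x < 0)

def disagreement_rate(steps) -> float:
    """steps: one (delta_metric_A, delta_metric_B) pair per applied refactoring.
       Two metrics 'disagree' on a step when one improves while the other
       worsens; steps where either metric is unchanged are ignored."""
    considered = [(a, b) for a, b in steps if sign(a) and sign(b)]
    if not considered:
        return 0.0
    conflicts = sum(1 for a, b in considered if sign(a) != sign(b))
    return conflicts / len(considered)

print(disagreement_rate([(0.02, -0.01), (0.05, 0.04), (-0.01, 0.03)]))  # ~0.67
```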

19.
Context: The feature model is an appropriate and indispensable tool for modeling the similarities and differences among products of a Software Product Line (SPL). It not only exposes the validity of product configurations in an SPL, but also changes over time to support new requirements of the SPL. Modifications made to the feature model over time raise a number of issues. Useless enlargements of the feature model, the existence of dead features, and violated constraints in the feature model are some of the key problems that make its maintenance difficult. Objective: The initial approach to dealing with the above-mentioned problems and improving the maintainability of the feature model is refactoring. Refactoring modifies software artifacts in such a way that their externally visible behavior does not change. Method: We introduce a method for defining refactoring rules and executing them on the feature model. We use the ATL model transformation language to define the refactoring rules. Moreover, we provide an Alloy model to check the feature model and the safety of the refactorings that are performed on it. Results: In this research, we propose a safe framework for refactoring a feature model. This framework enables users to perform automatic and semi-automatic refactoring of the feature model. Conclusions: Automated tool support for refactoring is a key issue in adopting approaches such as utilizing feature models and integrating them into companies' software development processes. In this work, we define some of the important refactoring rules on the feature model and provide tools that enable users to add new rules using the ATL M2M language. Our framework assesses the correctness of the refactorings using the Alloy language.

20.
An intrinsic property of software in a real-world environment is its need to evolve, which is usually accompanied by an increase in software complexity and a deterioration of software quality, making software maintenance a tough problem. Refactoring is regarded as an effective way to address this problem. Many refactoring approaches at the method and class level have been proposed, but there has been very little research on software refactoring at the package level. This paper presents a novel approach to refactoring the package structures of object-oriented software. It uses software networks to represent classes and their dependencies, proposes a constrained community detection algorithm to obtain optimized community structures in the software networks, which also correspond to optimized package structures, and finally provides a list of classes as refactoring candidates by comparing the optimized package structures with the actual package structures. The empirical evaluation of the proposed approach has been performed on two open-source Java projects, and the benefits of our approach are illustrated in comparison with three other approaches.
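As an illustration of the comparison step, the sketch below builds a class dependency graph, detects communities with networkx's modularity-based algorithm (a stand-in for the paper's constrained community detection), and flags classes whose declared package differs from the package dominating their community. The class names, edges, and the availability of networkx are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Class dependency graph (undirected here for simplicity); edges and class
# names are made up for illustration.
deps = [("ui.Window", "ui.Dialog"), ("ui.Dialog", "core.Session"),
        ("ui.Window", "core.Session"), ("core.Session", "core.Store"),
        ("core.Store", "util.Ids")]
g = nx.Graph(deps)

declared_pkg = {cls: cls.rsplit(".", 1)[0] for cls in g.nodes}
communities = greedy_modularity_communities(g)   # stand-in for the paper's
                                                 # constrained detection

# A class whose detected community is dominated by another package becomes
# a candidate for being moved toward that package.
for com in communities:
    pkgs = [declared_pkg[c] for c in com]
    dominant = max(set(pkgs), key=pkgs.count)
    for cls in com:
        if declared_pkg[cls] != dominant:
            print(f"consider moving {cls} toward package '{dominant}'")
```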
