Similar Literature
20 similar documents found.
1.
Making changes to software systems can prove costly, and it remains a challenge to understand the factors that affect the costs of software evolution. This study sought to identify such factors by investigating the effort expended by developers to perform 336 change tasks in two different software organizations. We quantitatively analyzed data from version control systems and change trackers to identify factors that correlated with change effort. In-depth interviews with the developers about a subset of the change tasks further refined the analysis. Two central quantitative results were that the dispersion of the changed code and the volatility of the requirements for the change task correlated with change effort. The analysis of the qualitative interviews pointed to two important underlying cost drivers: difficulties in comprehending dispersed code and difficulties in anticipating side effects of changes. This study demonstrates a novel method for combining qualitative and quantitative analysis to assess cost drivers of software evolution. Given our findings, we propose improvements to practices and development tools to manage and reduce these costs.

2.
We conducted a long-term experiment to compare the costs and benefits of several different software inspection methods. These methods were applied by professional developers to a commercial software product they were creating. Because the laboratory for this experiment was a live development effort, we took special care to minimize cost and risk to the project, while maximizing our ability to gather useful data. The article has several goals: (1) to describe the experiment's design and show how we used simulation techniques to optimize it; (2) to present our results and discuss their implications for both software practitioners and researchers; and (3) to discuss several new questions raised by our findings. For each inspection, we randomly assigned three independent variables: (1) the number of reviewers on each inspection team (1, 2, or 4); (2) the number of teams inspecting the code unit (1 or 2); and (3) the requirement that defects be repaired between the first and second team's inspections. The reviewers for each inspection were randomly selected without replacement from a pool of 11 experienced software developers. The dependent variables for each inspection included inspection interval (elapsed time), total effort, and the defect detection rate. Our results showed that these treatments did not significantly influence the defect detection effectiveness, but that certain combinations of changes dramatically increased the inspection interval.
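To illustrate the treatment-assignment scheme the abstract describes, here is a minimal sketch in Python. It is not the authors' tooling; the developer names are placeholders, and only the three randomized variables named above are modeled.

```python
import random

def assign_inspection(pool):
    """Draw one experimental treatment, mirroring the design above
    (a sketch, not the authors' actual procedure)."""
    team_size = random.choice([1, 2, 4])   # reviewers per team
    num_teams = random.choice([1, 2])      # teams per code unit
    # Repair between sessions only applies when two teams inspect.
    repair_between = random.choice([True, False]) if num_teams == 2 else False
    # Reviewers are sampled without replacement from the developer pool.
    reviewers = random.sample(pool, team_size * num_teams)
    teams = [reviewers[i * team_size:(i + 1) * team_size]
             for i in range(num_teams)]
    return teams, repair_between

pool = [f"dev{i}" for i in range(1, 12)]   # pool of 11 developers
teams, repair = assign_inspection(pool)
```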

3.
Software reuse is a solution for avoiding duplicated work in software development. When reusing an existing software project, developers usually need to understand certain code elements and the relationships among them, collectively referred to as the code structure. Developers typically come to understand a code structure by browsing the source code. Because source code is often large and structurally complex, understanding a code structure can consume considerable time and effort. It is therefore helpful to present the code structure a developer wants to understand automatically and clearly. This paper proposes a graph-database-based approach to code structure parsing and search to achieve this goal. The approach parses the code structure of a software project and organizes and manages it effectively in a graph database. At search time, a developer enters a natural-language query; the search mechanism analyzes the query and extracts the corresponding code structure from the graph database for display. The approach is highly extensible: nodes of different granularities and diverse relationship types can easily be stored in the graph database, and code structure search algorithms serving different search purposes can easily be integrated into the search mechanism. The approach has been implemented in a corresponding tool, and its effectiveness has been validated in a commercial case study.
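To make the store-and-slice idea concrete, here is a minimal Python sketch using networkx as a stand-in for the graph database (the paper targets an actual graph database; all element names here are hypothetical):

```python
import networkx as nx

# Code elements as typed nodes, relationships as typed edges.
g = nx.MultiDiGraph()
g.add_node("OrderService", kind="class")
g.add_node("OrderService.submit", kind="method")
g.add_node("PaymentGateway.charge", kind="method")
g.add_edge("OrderService", "OrderService.submit", rel="declares")
g.add_edge("OrderService.submit", "PaymentGateway.charge", rel="calls")

def slice_structure(graph, seeds, depth=1):
    """Extract the subgraph around the code elements matched by a query."""
    keep = set(seeds)
    for _ in range(depth):
        for n in list(keep):
            keep.update(graph.successors(n))
            keep.update(graph.predecessors(n))
    return graph.subgraph(keep)

# e.g. a query analyzer maps "how are orders submitted?" to this seed:
view = slice_structure(g, ["OrderService.submit"])
print(list(view.edges(data="rel")))
```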

4.
Traditional software engineering processes are composed of practices defined by roles, activities, and artifacts. Software developers have their own understanding of practices and their own ways of implementing them, which can result in variations in software development practice. This paper presents an empirical study based on six teams of five students each, involving three different projects. Their process practices are monitored through time slips that record the effort expended on various process-related activities. The study introduces a new 3-pole graphical representation of the patterns of effort expended on the various discipline activities. The purpose of this study is to quantify activity patterns in the actual process, which in turn demonstrates the variability of process performance. The study provides three examples of patterns based on three empirical axes (engineering, coding, and V&V). The idea behind this research is to make developers aware that there is wide variability in the actual process, and that process assessments might be only weakly related to actual process activities. This study suggests that in-process monitoring is required to control process activities; such monitoring is likely to provide causal information linking the actual process activities to the quality of the implemented components.

5.
The relationship among software design quality, development effort, and governance practices is a traditional research problem. However, the extent to which consolidated results on this relationship remain valid for open source (OS) projects is an open research problem. An emerging body of literature contrasts the view of open source as an alternative to proprietary software and explains that there exists a continuum between closed and open source projects. This paper hypothesizes that as projects approach the OS end of the continuum, governance becomes less formal. In turn, less formal governance is hypothesized to require higher-quality code as a means to facilitate coordination among developers, by making the structure of the code explicit, and to facilitate quality by removing the pressure of deadlines from contributors. However, less formal governance is also hypothesized to increase development effort due to a more cumbersome coordination overhead. The verification of the research hypotheses is based on empirical data from a sample of 75 major OS projects. The empirical evidence supports our hypotheses and suggests that software quality, mainly measured as coupling and inheritance, does not increase development effort, but represents an important managerial variable for implementing the more open governance approach that characterizes OS projects, which, in turn, increases development effort.

6.
Although numerous empirical studies have been conducted to measure the fault detection capability of software analysis methods, few have used programs of similar size and characteristics. It is therefore difficult to derive meaningful conclusions on the relative detection ability and cost-effectiveness of various fault detection methods. In order to compare fault detection capability objectively, experiments must be conducted using the same set of programs to evaluate all methods and must involve participants who possess comparable levels of technical expertise. One such experiment was ‘Conflict1’, which compared voting, a testing method, self-checks, code reading by stepwise refinement, and data-flow analysis methods on eight versions of a battle simulation program. Since an inspection method was not included in the comparison, the authors conducted a follow-up experiment, ‘Conflict2’, in which five of the eight versions from Conflict1 were subjected to Fagan inspection. Conflict2 examined not only the number and types of faults detected by each method, but also the cost-effectiveness of each method, by comparing the average amount of effort expended in detecting faults. The primary findings of the Conflict2 experiment are the following. First, voting detected the largest number of faults, followed by the testing method, Fagan inspection, self-checks, code reading, and data-flow analysis. Second, the voting, testing, and inspection methods were largely complementary to each other in the types of faults detected. Third, inspection was far more cost-effective than the testing method studied.

7.
Not a day goes by that the general public does not come into contact with a real-time system. As their numbers and importance grow, so do the stakes for software developers. A failure in a critical application may result in great financial loss, or even loss of life. More effort must be expended to analyze the reliability and safety of such systems. Analysis of hardware components in critical applications has matured over the years, and commonly followed techniques have emerged. However, methods and techniques for analyzing the reliability and safety of the software part of critical applications are relatively new and still maturing. Yet the vulnerability of the system to software failures is on the rise and may (and in some cases does) exceed its vulnerability to hardware failures. Software is not only becoming more prevalent in real-time systems; it is becoming a larger part of them, in the sense that the effort expended in designing and implementing the software is a growing proportion of the total expended effort.

8.
Software development is rarely an individual effort and generally involves teams of developers collaborating to generate good, reliable code. Within the code there exist technical dependencies that arise when software components use services from other components. The different ways of assigning the design, development, and testing of these software modules to people can cause various coordination problems among them. We claim that the collaboration of developers, designers, and testers must be related to and governed by the technical task structure. These collaboration practices are handled in what we call Socio-Technical Patterns.

The TESNA (Technical Social Network Analysis) project we report on in this paper addresses this issue. We propose a method and a tool that a project manager can use to detect socio-technical coordination problems. We test the method and tool in a case study of a small, innovative software product company.

9.
The manifestation of code anomalies in software systems often indicates symptoms of architecture degradation. Several approaches have been proposed to detect such anomalies in source code. However, most of them fail to assist developers in prioritizing the anomalies that are harmful to the software architecture of a system. This article presents an investigation of how developers, when supported by architecture blueprints, prioritize architecturally relevant code anomalies. First, we performed a controlled experiment in which participants explored both blueprints and source code to reveal architecturally relevant code anomalies. Although the use of blueprints has the potential to improve code anomaly prioritization, the participants often made several mistakes. We found these mistakes may occur because developers miss relationships between implementation and blueprint elements when they prioritize anomalies in an ad hoc manner. Furthermore, the time spent on the prioritization process was considerably high. Aiming to improve the accuracy and effectiveness of prioritization, we provided means to automate it. In particular, we explored three prioritization criteria, which establish different ways of relating blueprint elements with code anomalies. These criteria were implemented in the JSpIRIT tool. The approach was evaluated in the context of two applications, with satisfactory precision results.

10.
Finding software faults is a problematic activity in many systems. Existing approaches usually work close to the system implementation and require developers to perform various code analyses. Although these approaches are effective, the amount of information developers must manage is often overwhelming. This problem calls for complementary approaches able to work at higher levels of abstraction than code, helping developers keep intellectual control over the system when analyzing faults. In this context, we present an expert-system approach, called FLABot, which assists developers in fault-localization tasks by reasoning about faults using software architecture models. We evaluated a prototype of FLABot in two medium-size case studies involving novice and non-novice developers. We compared the time consumed, code browsed, and faults found by these developers with and without the support of FLABot, observing interesting effort reductions when applying FLABot. The results and lessons learned show that our approach is practical and reduces the effort of finding individual faults.

11.
The empirical study described in this paper addresses software reading for construction: how application developers obtain an understanding of a software artifact for use in new system development. The study focuses on the processes developers engage in when learning and using object-oriented frameworks. We analyzed 15 student software development projects, using both qualitative and quantitative methods, to gain insight into the processes that occurred during framework usage. The contribution of the study is not to test predefined hypotheses but to generate well-supported hypotheses for further investigation. The main hypotheses produced are that example-based techniques are well suited to beginning learners, while hierarchy-based techniques are not, because of their steeper learning curve. Other, more specific hypotheses are proposed and discussed.

12.
林泽琦  邹艳珍  赵俊峰  曹英魁  谢冰 《软件学报》2019,30(12):3714-3729
Documents in the form of natural-language text are an important component of software projects. Helping developers locate information efficiently and accurately within large document collections is an important research problem in the field of software reuse. This paper proposes a semantic search approach for software documentation based on code structure knowledge. The approach parses a code structure graph from the source code of a software project and uses it as domain-specific knowledge to help the machine understand the semantics of natural-language text. This semantic information is combined with information retrieval techniques to enable semantic retrieval of software documents. Experiments on a dataset of StackOverflow Q&A documents show that, compared with several text retrieval methods, the approach improves mean average precision (MAP) by at least 13.77%.
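For reference, mean average precision over a query set Q is conventionally defined as follows (the standard information retrieval definition, not something specific to this paper):

```latex
\mathrm{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \frac{1}{|R_q|} \sum_{k=1}^{n_q} P_q(k)\,\mathrm{rel}_q(k)
```

where R_q is the set of documents relevant to query q, n_q is the number of retrieved documents, P_q(k) is the precision of the top k results, and rel_q(k) is 1 if the result at rank k is relevant and 0 otherwise.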

13.
Correcting software defects consumes a significant share of the resources in a software project. To make the best use of testing effort, researchers have studied statistical models that predict in which parts of a software system future defects are likely to occur. By studying the mathematical relations between the predictor variables used in these models, researchers can form an increased understanding of the important connections between development activities and software quality. The predictor variables used in past top-performing models are largely based on source-code-oriented metrics, such as lines of code or number of changes. However, source code is the end product of numerous interlaced and collaborative activities carried out by developers, and traces of such activities can be found in the various repositories used to manage development efforts. In this paper, we develop statistical models to study the impact of social interactions in a software project on software quality. These models use predictor variables based on social information mined from the issue tracking and version control repositories of two large open-source software projects. The results of our case studies demonstrate the impact of metrics from four different dimensions of social interaction on post-release defects. Our findings show that statistical models based on social information have a degree of explanatory power similar to that of traditional models. Furthermore, our results demonstrate that social information does not substitute for, but rather augments, the traditional source-code-based metrics used in defect prediction models.

14.
When analyzing legacy code, generating a high-level model of an application during the reverse engineering process helps developers understand how the application is structured and how dependencies relate the different software entities. Within the context of procedural programming languages (such as C), the existing approaches to obtaining a model of the code require documentation and/or implicit knowledge that stakeholders acquire while the software is built. These approaches use the code itself to build a syntactic model that shows the different software artifacts, such as variables, functions, and modules. However, there is no supporting methodology for detecting and analyzing the relationships and dependencies between those artifacts, such as which variable in a module is declared using an abstract data type described in another one, or which functions use parameters typed with an abstract data type; nor for recovering design decisions taken by the original developers, such as how functions were implemented across different modules. Meanwhile, current developers use the object-oriented (OO) paradigm to implement not only business applications but also methodologies and tools that allow semiautomatic analysis of applications. It is worth remarking that legacy procedural code still has value and remains in operation in several industries, and, as with any evolving code, developers must be able to perform maintenance tasks while minimizing the limitations imposed by the language. Building on useful properties that the OO paradigm (and its supporting analysis tools) provides, such as UML models, we propose M2K, a methodology for generating a high-level model from legacy procedural code, mainly written in ANSI C. Understanding how C-based applications were implemented is not a new problem in software reengineering. However, our contribution is based on building an OO model and suggesting refactorings that help the developer improve it and eventually guide a new implementation of the target application. Specifically, the methodology builds cohesive software entities mapped from procedural code and makes the coupling between C entities explicit in the high-level model. The result of our methodology is a set of refactored class candidates: structures that group a set of variables and a set of functions obtained from the C application. Based on the class-candidate model, we propose refactorings grounded in OO design principles to improve the design of the application. The most relevant design improvements were obtained with algorithm abstraction by applying the strategy pattern, attribute/method relocalization, generalization of variable types, and removing or renaming methods and attributes. Besides the methodology and the supporting tool, we provide 14 case studies based on real projects implemented in C, and we show how the results validate our proposal.
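As a toy illustration of the class-candidate idea (not the actual M2K algorithm; the C declarations shown in comments are hypothetical), procedural functions can be grouped by the abstract data type they operate on:

```python
# Toy sketch of class-candidate formation: C functions grouped by the
# abstract data type of their first parameter (a guess at the grouping
# heuristic, for illustration only).
c_functions = {
    "stack_push": "Stack",   # void stack_push(Stack *s, int v);
    "stack_pop":  "Stack",   # int  stack_pop(Stack *s);
    "queue_put":  "Queue",   # void queue_put(Queue *q, int v);
}

class_candidates = {}
for func, adt in c_functions.items():
    class_candidates.setdefault(adt, []).append(func)

# {'Stack': ['stack_push', 'stack_pop'], 'Queue': ['queue_put']}
print(class_candidates)
```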

15.
Context: Companies increasingly strive to adapt to market and ecosystem changes in real time. Gauging and understanding team performance in such changing environments present a major challenge. Objective: This paper aims to understand how software developers experience the continuous adaptation of performance in a modern, highly volatile environment using Lean and Agile software development methodology. This understanding can be used as a basis for guiding formation and maintenance of high-performing teams, to inform performance improvement initiatives, and to improve working conditions for software developers. Method: A qualitative multiple-case study using thematic interviews was conducted with 16 experienced practitioners in five organisations. Results: We generated a grounded theory, Performance Alignment Work, showing how software developers experience performance. We found 33 major categories of performance factors and relationships between the factors. A cross-case comparison revealed similarities and differences between different kinds and different sizes of organisations. Conclusions: Based on our study, software teams are engaged in a constant cycle of interpreting their own performance and negotiating its alignment with other stakeholders. While differences across organisational sizes exist, a common set of performance experiences is present despite differences in context variables. Enhancing performance experiences requires integration of soft factors, such as communication, team spirit, team identity, and values, into the overall development process. Our findings suggest a view of software development and software team performance that centres around behavioural and social sciences.

16.
There is empirical evidence that internal software quality, e.g., the quality of source code, has a great impact on the overall quality of software. Besides the well-known manual inspection and review techniques for source code, more recent approaches utilize tool-based static code analysis for the evaluation of internal software quality. Despite the high potential of code analyzers, the application of tools alone cannot replace well-founded expert opinion. Knowledge, experience, and fair judgment are indispensable for a valid, reliable quality assessment that is accepted by software developers and managers. The EMISQ method (Evaluation Method for Internal Software Quality) guides the assessment process for all stakeholders of an evaluation project. The method is supported by the Software Product Quality Reporter (SPQR), a tool which assists evaluators with their analysis and rating tasks and provides support for generating code quality reports. The application of SPQR has already proved its usefulness in various code assessment projects around the world. This paper introduces the EMISQ method and describes the tool support needed for an efficient and effective evaluation of internal software quality.

17.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting the components of software that are defect-prone. One aspect of this research focuses on predicting software changes that are fix-inducing. Although the prior research on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: it gives the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread synchronization issue. Therefore, in this paper, we study high-impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files, and the number of subsystems modified by developers during the associated fix of the fix-inducing change. We perform our study using six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs, and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56% to 77% of HIFCs, with an average false alarm (misclassification) rate of 16%. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4% over state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
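The abstract names the ingredients of the impact measure but not its exact form; a simple multiplicative combination is one plausible reading (illustrative only, and possibly weighted or normalized differently in the paper):

```python
def fix_impact(churn, files_modified, subsystems_modified):
    """Illustrative impact score for a fix-inducing change, combining the
    three ingredients named in the abstract: churn, files, and subsystems
    touched during the associated fix. The combination is a guess."""
    return churn * files_modified * subsystems_modified

# A change whose fix churned 120 lines across 4 files in 2 subsystems:
score = fix_impact(churn=120, files_modified=4, subsystems_modified=2)
```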

18.
The results of a study of software enhancement projects involving identical information requirements are reported. A sample drawn from the 200 largest commercial banks in the United States was examined to determine the levels of programming, systems analysis, and project management effort necessary to implement interest reporting requirements of the U.S. Tax Equity and Fiscal Responsibility Act (TEFRA) of 1982. Results of the study indicate that firms using structured systems design and programming techniques expended twice as much effort to implement the TEFRA requirements as those using non-structured approaches. It was also found that firms purchasing software expended nearly the same amount of effort as those using traditional systems analysis and programming techniques.

19.
Object-oriented metrics aim to quantify the quality of source code and give insight into it. Each metric assesses the code from a different aspect. There is a relationship between the quality level and the risk level of source code. The objective of this paper is to empirically examine whether or not there are effective threshold values for source code metrics, with the goal of deriving generalized thresholds that can be used across different software systems. The relationship between metric thresholds and fault-proneness was investigated empirically using ten open-source software systems. Three types of fault-proneness were defined for the software modules: non-fault-prone, more-than-one-fault-prone, and more-than-three-fault-prone. Two independent case studies were carried out to derive two different sets of threshold values. A single dataset was created by merging the ten datasets and was used as training data by the model. The learner model was created using logistic regression and the Bender method. The results revealed that some metrics have threshold effects: seven metrics gave satisfactory results in the first case study, and eleven metrics gave satisfactory results in the second. This study makes contributions primarily for use by software developers and testers. Software developers can see which classes or modules require revising, which in turn contributes to an increase in the quality of these modules and a decrease in their risk level. Testers can identify modules that need more testing effort and can prioritize modules according to their risk levels.
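As a sketch of how such a threshold can be derived from a logistic model using Bender's "value of an acceptable risk level" (VARL), here is a minimal example with placeholder data and an arbitrary acceptable-risk choice p0; it mirrors the general technique, not necessarily the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: one metric (e.g. coupling) vs. fault-proneness labels.
X = np.array([[2], [3], [5], [8], [12], [15], [20], [25]])
y = np.array([0, 0, 0, 1, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)
b0, b1 = model.intercept_[0], model.coef_[0][0]

# Bender's VARL: the metric value at which predicted risk reaches p0,
# solved from logit(p0) = b0 + b1 * x.
p0 = 0.5  # acceptable risk level (a placeholder choice)
threshold = (np.log(p0 / (1 - p0)) - b0) / b1
print(f"candidate threshold: {threshold:.2f}")
```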

20.
With the increasing use of object-oriented methods in new software development, there is a growing need to both document and improve current practice in object-oriented design and development. In response to this need, a number of researchers have developed various metrics for object-oriented systems as proposed aids to the management of these systems. In this research, an analysis of a set of metrics proposed by Chidamber and Kemerer (1994) is performed in order to assess their usefulness for practising managers. First, an informal introduction to the metrics is provided by way of an extended example of their managerial use. Second, exploratory analyses of empirical data relating the metrics to productivity, rework effort, and design effort on three commercial object-oriented systems are provided. The empirical results suggest that the metrics provide significant explanatory power for variations in these economic variables, over and above that provided by traditional measures, such as size in lines of code, and after controlling for the effects of individual developers.
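Two of the Chidamber and Kemerer metrics, depth of inheritance tree (DIT) and number of children (NOC), can be computed directly from a class hierarchy; a minimal sketch with a hypothetical hierarchy:

```python
# Parent map: class -> superclass (None for roots); hypothetical hierarchy.
parents = {"Base": None, "Shape": "Base", "Circle": "Shape", "Square": "Shape"}

def dit(cls):
    """Depth of inheritance tree: path length from cls up to its root."""
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

def noc(cls):
    """Number of children: count of immediate subclasses of cls."""
    return sum(1 for p in parents.values() if p == cls)

assert dit("Circle") == 2 and noc("Shape") == 2
```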
