20 similar documents found; search took 15 ms
1.
Taghi M. Khoshgoftaar Edward B. Allen Kalai S. Kalaichelvan Nishith Goel 《Empirical Software Engineering》1996,1(1):31-44
This paper presents a case study of a software project in the maintenance phase. The case study was based on a sample of modules, representing about 1.3 million lines of code, from a very large telecommunications system. Software quality models were developed to predict the number of faults expected from the coding through operations phases. Since modules from the prior release were often reused to develop a new release, one model incorporated reuse data as additional independent variables. We compare this model's performance to a similar model without reuse data. Software quality models often have product metrics as the only input data for predicting quality. There is an implicit assumption that all the modules have had a similar development history, so that product attributes are the primary drivers of different quality levels. Reuse of software as components and software evolution do not fit this assumption very well; consequently, traditional models may not have adequate accuracy in such environments. Focusing on the software maintenance phase, this study demonstrated that reuse data can significantly improve the predictive accuracy of software quality models.
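As a hedged illustration of the modeling idea, the sketch below fits a fault-count model twice, once on product metrics alone and once with reuse indicators added as extra independent variables, and compares cross-validated scores. The feature names, synthetic data, and choice of a Poisson regressor are assumptions made for the sketch, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X_product = rng.exponential(size=(n, 3))       # stand-ins for product metrics (LOC, complexity, ...)
X_reuse = rng.integers(0, 2, size=(n, 2))      # stand-ins for reuse indicators (module reused? interface changed?)
faults = rng.poisson(lam=1 + X_product[:, 0])  # synthetic fault counts per module

X_full = np.hstack([X_product, X_reuse])
score_product = cross_val_score(PoissonRegressor(), X_product, faults).mean()
score_full = cross_val_score(PoissonRegressor(), X_full, faults).mean()
print(f"product-only: {score_product:.3f}, with reuse data: {score_full:.3f}")
```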
2.
3.
Houari Sahraoui Lionel C. Briand Yann-Gaël Guéhéneuc Olivier Beaurepaire 《Information and Software Technology》2010,52(9):923-933
Context: Measurement programs have been around for several decades but have often been misused or misunderstood by managers and developers. This misunderstanding has prevented their adoption despite their many advantages.
Objective: In this paper, we present the results of an empirical study on the impact of a measurement program, MQL ("Mise en Qualité du Logiciel", French for "Quality Software Development"), in an industrial context.
Method: We analyzed data collected on 44 industrial systems of different sizes: 22 systems were developed using MQL, while the other 22 used ad-hoc approaches to assess and control quality (the control group, referred to as "ad-hoc systems"). We studied the impact of MQL on a set of nine variables: six quality factors (maintainability, evolvability, reusability, robustness, testability, and architecture quality), corrective-maintenance effort, code complexity, and the presence of comments.
Results: Our results show that MQL had a clear positive impact on all the studied indicators. This impact is statistically significant for all the indicators except corrective-maintenance effort.
Conclusion: We bring concrete evidence that a measurement program can have a significant, positive impact on the quality of software systems if combined with appropriate decision-making procedures and corrective actions.
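A comparison of 22 MQL systems against 22 ad-hoc systems is a classic two-group design. As a sketch only, the snippet below runs a Mann-Whitney U test on synthetic maintainability scores; the paper's actual statistical test and data are not reproduced here, so everything in the snippet is an assumption.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
maintainability_mql = rng.normal(0.8, 0.10, size=22)    # 22 MQL systems (synthetic scores)
maintainability_adhoc = rng.normal(0.6, 0.15, size=22)  # 22 ad-hoc systems (synthetic scores)

stat, p = mannwhitneyu(maintainability_mql, maintainability_adhoc, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # p < 0.05 would indicate a significant difference
```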
4.
5.
Context: Software quality is considered to be one of the most important concerns of software production teams. Additionally, design patterns are documented solutions to common design problems that are expected to enhance software quality. Until now, the results on the effect of design patterns on software quality have been controversial.
Aims: This study aims to propose a methodology for comparing design patterns to alternative designs with an analytical method. Additionally, the study illustrates the methodology by comparing three design patterns with two alternative solutions, with respect to several quality attributes.
Method: The paper introduces a theoretical/analytical methodology to compare sets of "canonical" solutions to design problems. The study is theoretical in the sense that the solutions are disconnected from real systems, even though they stem from concrete problems. The study is analytical in the sense that the solutions are compared based on their possible numbers of classes and on equations representing the values of the various structural quality attributes as functions of these numbers of classes. The exploratory designs have been produced by studying the literature, by investigating open-source projects, and by using design patterns. In addition, we have created a tool that helps practitioners choose the optimal design solution according to their specific needs.
Results: The results of our research suggest that the decision to apply a design pattern is usually a trade-off, because patterns are not universally good or bad. Patterns typically improve certain aspects of software quality, while they might weaken some others.
Conclusions: The proposed methodology is applicable for comparing patterns and alternative designs, and it highlights thresholds beyond which the design pattern becomes more or less beneficial than the alternative design. More specifically, the identification of such thresholds can be very useful for decision making during system design and refactoring.
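To make the analytical style concrete, here is a toy sketch in the same spirit: two invented coupling formulas, one for a pattern-based design and one for an alternative, both expressed as functions of the number of classes n, with the crossover threshold located numerically. The formulas are illustrative assumptions, not the paper's equations.

```python
# Hypothetical illustration: a structural quality attribute (here, coupling)
# as a function of the number of classes n for two competing designs.
def coupling_pattern(n: int) -> int:
    return n + 2  # invented: each concrete class couples only to one abstraction

def coupling_alternative(n: int) -> int:
    return n * (n - 1) // 2  # invented: pairwise dependencies among variants

# Find the smallest n where the alternative's coupling exceeds the pattern's.
threshold = next(n for n in range(2, 100)
                 if coupling_alternative(n) > coupling_pattern(n))
print(f"the pattern yields lower coupling from n = {threshold} classes onward")
```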
6.
The evolution of a software project is a rich data source for analyzing and improving the software development process. Recently, several research groups have tried to cluster source code artifacts based on information about how the code of a software system evolves. The results of these evolutionary approaches seem promising, but a direct comparison to traditional software clustering approaches based on structural code dependencies is still missing. To fill this gap, we conducted several clustering experiments with an established software clustering tool, comparing and combining the evolutionary and the structural approach. These experiments show that the evolutionary approach can produce meaningful clustering results. While the traditional approach provides better results because of the more reliable data density of the structural data, the combination of both approaches is able to improve the overall clustering quality. A review of related studies shows that this approach of combining dependency information is also successful in other software engineering applications.
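A minimal sketch of the combination idea follows, assuming both data sources have already been reduced to module-by-module similarity matrices; the equal 50/50 weighting and the synthetic data are assumptions, not the tool's actual algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
n = 8  # number of modules

def random_similarity(n: int) -> np.ndarray:
    # Synthetic symmetric similarity matrix with ones on the diagonal.
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 1.0)
    return m

structural = random_similarity(n)    # e.g., derived from static code dependencies
evolutionary = random_similarity(n)  # e.g., derived from co-change frequencies

combined = 0.5 * structural + 0.5 * evolutionary  # invented equal weighting
distance = 1.0 - combined
np.fill_diagonal(distance, 0.0)

# Average-linkage hierarchical clustering, cut into three clusters.
labels = fcluster(linkage(squareform(distance), method="average"),
                  t=3, criterion="maxclust")
print(labels)  # one cluster label per module
```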
7.
The paper presents the findings of a survey that investigated the level of quality management practice within some 150 UK companies from a sample of 500. It provides a snapshot of practice at the time of the survey and assesses the impact of government quality initiatives, particularly the TickIT scheme, at that time. The survey methodology is described, together with the results and conclusions. The sample has been graded by size of company, which the authors consider to have a significant effect upon the adoption of quality practices. The survey highlights the need to encourage small companies to adopt quality practices and to assist them with the short-term costs incurred.
8.
Shane McIntosh Yasutaka Kamei Bram Adams Ahmed E. Hassan 《Empirical Software Engineering》2016,21(5):2146-2189
Software code review, i.e., the practice of having other team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that formal code inspections tend to improve the quality of delivered software. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process, little is known about the relationship between modern code review practices and long-term software quality. Hence, in this paper, we study the relationship between post-release defects (a popular proxy for long-term software quality) and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, (2) code review participation, i.e., the degree of reviewer involvement in the code review process, and (3) code reviewer expertise, i.e., the level of domain-specific expertise of the code reviewers. Through a case study of the Qt, VTK, and ITK projects, we find that code review coverage, participation, and expertise share a significant link with software quality. Hence, our results empirically confirm the intuition that poorly-reviewed code has a negative impact on software quality in large systems using modern reviewing tools.
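As a hedged sketch of how the first two measures might be computed from change records (the record layout and numbers below are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Change:
    reviewed: bool          # did the change go through code review?
    reviewer_comments: int  # reviewer messages attached to the review

changes = [Change(True, 3), Change(True, 0), Change(False, 0), Change(True, 5)]

# Coverage: proportion of changes that were code reviewed.
coverage = sum(c.reviewed for c in changes) / len(changes)
# Participation (one possible reading): mean reviewer comments per reviewed change.
reviewed = [c for c in changes if c.reviewed]
participation = sum(c.reviewer_comments for c in reviewed) / len(reviewed)

print(f"coverage: {coverage:.2f}, comments per reviewed change: {participation:.2f}")
```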
9.
Topic models are generative probabilistic models which have been applied to information retrieval to automatically organize and provide structure to a text corpus. Topic models discover topics in the corpus, which represent real-world concepts through frequently co-occurring words. Recently, researchers found topics to be effective tools for structuring various software artifacts, such as source code, requirements documents, and bug reports. This research also hypothesized that using topics to describe the evolution of software repositories could be useful for maintenance and understanding tasks. However, research has yet to determine whether these automatically discovered topic evolutions describe the evolution of source code in a way that is relevant or meaningful to project stakeholders, and thus it is not clear whether topic models are a suitable tool for this task.

In this paper, we take a first step towards evaluating topic models in the analysis of software evolution by performing a detailed manual analysis of the source code histories of two well-known and well-documented systems, JHotDraw and jEdit. We define and compute various metrics on the discovered topic evolutions and manually investigate how and why the metrics evolve over time. We find that the large majority (87%–89%) of topic evolutions correspond well with actual code change activities by developers. We are thus encouraged to use topic models as tools for studying the evolution of a software system.
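For readers unfamiliar with the mechanics, a tiny sketch of fitting a topic model to identifier-like documents follows; the corpus, parameters, and library choice are toy assumptions, and the paper's topic-evolution metrics are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy "documents" of identifiers, as one might extract per source file or version.
docs = [
    "draw figure canvas paint handle",
    "buffer edit undo redo text",
    "draw shape figure color canvas",
    "text search replace buffer regex",
]
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))  # per-document topic memberships; track these across versions
```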
10.
Heng Li Tse-Hsun Chen Weiyi Shang Ahmed E. Hassan 《Empirical Software Engineering》2018,23(5):2655-2694
Software developers insert logging statements in their source code to record important runtime information; such logged information is valuable for understanding system usage in production and debugging system failures. However, providing proper logging statements remains a manual and challenging task. Missing an important logging statement may increase the difficulty of debugging a system failure, while too much logging can increase system overhead and mask the truly important information. Intuitively, the actual functionality of a software component is one of the major drivers behind logging decisions. For instance, a method maintaining network communications is more likely to be logged than getters and setters. In this paper, we used automatically-computed topics of a code snippet to approximate the functionality of a code snippet. We studied the relationship between the topics of a code snippet and the likelihood of a code snippet being logged (i.e., containing a logging statement). Our driving intuition is that certain topics in the source code are more likely to be logged than others. To validate our intuition, we conducted a case study on six open source systems, and we found that i) there exists a small number of “log-intensive” topics that are more likely to be logged than other topics; ii) each pair of the studied systems shares 12% to 62% of common topics, and the likelihood of logging such common topics has a statistically significant correlation of 0.35 to 0.62 among all the studied systems; and iii) our topic-based metrics help explain the likelihood of a code snippet being logged, providing an improvement of 3% to 13% on AUC and 6% to 16% on balanced accuracy over a set of baseline metrics that capture the structural information of a code snippet. Our findings highlight that topics contain valuable information that can help guide and drive developers’ logging decisions.
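The reported AUC and balanced-accuracy improvements suggest an evaluation along the following lines; everything in this sketch (features, labels, and the classifier) is synthetic and assumed, serving only to show the baseline-versus-augmented comparison.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
structural = rng.random((n, 4))  # stand-ins for structural metrics (LOC, complexity, ...)
topics = rng.random((n, 3))      # stand-ins for topic memberships of each snippet
logged = (topics[:, 0] + 0.2 * rng.random(n) > 0.6).astype(int)  # synthetic label

for name, X in [("baseline", structural),
                ("baseline + topics", np.hstack([structural, topics]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, logged, random_state=0)
    prob = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y_te, prob):.2f}, "
          f"balanced acc={balanced_accuracy_score(y_te, prob > 0.5):.2f}")
```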
11.
Software Quality Journal
12.
The case study concerns the System Monitor and Control Facility (SMCF), a workstation product developed by a major telecommunications company that has been used since 1983 to monitor MVS mainframe computer systems. In 1991, mainframe UNIX systems were added to the list of systems supported, using software executing on the mainframe side. In 1994, an effort began to develop a common interface using TCP/IP and Remote Procedure Calls (RPC), with the product developed in C. The product, which was officially delivered in June of 1994, was coded using structured programming techniques. However, after the product had been in use for some time, maintaining and extending the code for additional functionality and portability proved less than desirable.

A decision was made by the programmers who support the host-side code to restructure (re-engineer) it so that certain software engineering principles were incorporated into the product to make it more maintainable and portable. This paper discusses the factors that led to the initial decisions of the designers and programmers, the evaluation of the existing code, and the resulting code with software engineering principles re-engineered into it; it also discusses how the incorporation of these principles makes maintenance simpler and how they may prevent or minimize defects in the future.
13.
Jie-Cherng Chen 《Journal of Systems and Software》2009,82(6):981-992
Many problem factors in the software development phase affect the maintainability of the delivered software systems. Therefore, understanding software development problem factors can help not only in reducing the incidence of project failure but also in ensuring software maintainability. This study focuses on those software development problem factors which may possibly affect software maintainability. Twenty-five problem factors were classified into five dimensions; a questionnaire was designed, and 137 software projects were surveyed. A K-means cluster analysis was performed to classify the projects into three groups of low, medium, and high maintainability. For projects with a higher level of severity of problem factors, the influence on software maintainability was more pronounced. The influence of software process improvement (SPI) on project problems and the associated software maintainability was also examined in this study. Results suggest that SPI can help reduce the severity of documentation quality and process management problems, but is only likely to enhance software maintainability to a medium level. Finally, a top-10 list of higher-severity software development problem factors was identified, and implications were discussed.
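A minimal sketch of the K-means step described above, with synthetic severity scores on five problem dimensions and k = 3 for the low/medium/high maintainability groups (the data and preprocessing are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
severity = rng.random((137, 5))  # 137 projects x 5 problem-factor dimensions

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(severity)
print(np.bincount(labels))  # number of projects in each maintainability group
```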
14.
It is common knowledge that, to stay competitive, your software organization must continuously improve product quality and customer satisfaction, as well as lower software development costs and shorten delivery time. The paper considers how software tools are an effective way to improve software development variables such as productivity and product quality. It also considers software tool selection and process improvement costs.
15.
Human, social and organisational (HSO) factors play a decisive role in software development in terms of determining functional and non-functional characteristics of software products. The significance of these factors is underlined by the need to produce applications that fit nicely in a working setting, supporting the working procedures followed and promoting users' content and productivity. In this context, a new requirements elicitation process is proposed, a part of which utilises a short-scale ethnography analysis. The process introduces specific steps for recording HSO factors based on certain software quality characteristics that are treated as principal components for conducting requirements identification. The output of the process is the HSO document, which can be used in conjunction with the classic requirements document to identify structural and functional aspects of the system.
16.
Emad Shihab Akinori Ihara Yasutaka Kamei Walid M. Ibrahim Masao Ohira Bram Adams Ahmed E. Hassan Ken-ichi Matsumoto 《Empirical Software Engineering》2013,18(5):1005-1042
Bug fixing accounts for a large amount of the software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects, namely Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors that aim to predict re-opened bugs. We perform top node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important factors related to whether or not a bug will be re-opened. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1% and 78.6% and a recall between 70.5% and 94.1% when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary based on the project. The comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce maintenance cost due to re-opened bugs.
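As a sketch of the prediction setup (synthetic features standing in for the four dimensions, and a generic decision tree rather than the paper's exact configuration):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n = 1000
X = rng.random((n, 4))                  # work habits, bug report, bug fix, team features
reopened = (X[:, 1] > 0.7).astype(int)  # synthetic "re-opened" label

X_tr, X_te, y_tr, y_te = train_test_split(X, reopened, random_state=0)
pred = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr).predict(X_te)
print(f"precision={precision_score(y_te, pred):.2f}, recall={recall_score(y_te, pred):.2f}")
```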
17.
Michael Perscheid Benjamin Siegmund Marcel Taeumel Robert Hirschfeld 《Software Quality Journal》2017,25(1):83-110
In 1997, Henry Lieberman stated that debugging is the dirty little secret of computer science. Since then, several promising debugging technologies have been developed, such as back-in-time debuggers and automatic fault localization methods. However, the last study on the state of the art in debugging is still more than 15 years old, so it is not clear whether these new approaches have been applied in practice or not. For that reason, we investigate the current state of debugging in a comprehensive study. First, we review the available literature and learn about current approaches and study results. Second, we observe several professional developers while debugging and interview them about their experiences. Third, we create a questionnaire that serves as the basis for a larger online debugging survey. Based on these results, we present new insights into debugging practice that help to suggest new directions for future research.
18.
J. H. Poore 《Software》1988,18(11):1017-1027
Software is a product in serious need of quality control technology. Major effort notwithstanding, software engineering has produced few metrics for aspects of software quality that have the potential of being universally applicable. The present paper suggests that, although universal metrics are elusive, metrics that are applicable and useful in a fully defined setting are readily available. A theory is presented that a well-defined software work group can articulate their operational concept of quality and derive useful metrics for that concept and their environment.
19.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting components of software that are defect-prone. One aspect of this research focuses on predicting software changes that are fix-inducing. Although the prior research on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: it gives the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread synchronization issue. Therefore, in this paper, we study high impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes, which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files and the number of subsystems modified by developers during an associated fix of the fix-inducing change. We perform our study using six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56% to 77% of HIFCs with an average false alarm (misclassification) rate of 16%. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4% over the state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
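One hypothetical reading of such an impact measure, combining the churn, file count, and subsystem count of the fixes tied to a fix-inducing change (the exact combination below is invented for illustration, not the paper's formula):

```python
from dataclasses import dataclass

@dataclass
class Fix:
    churn: int         # lines added plus deleted in the fixing change
    n_files: int       # files modified by the fix
    n_subsystems: int  # subsystems (e.g., top-level directories) modified by the fix

def impact(fixes: list[Fix]) -> int:
    # Sum the implementation work across all fixes of one fix-inducing change.
    return sum(f.churn * f.n_files * f.n_subsystems for f in fixes)

# A change whose fault required one fix touching 4 files across 2 subsystems:
print(impact([Fix(churn=120, n_files=4, n_subsystems=2)]))  # -> 960
```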
20.
More often than not, a software company provides its customers with a service rather than simply selling them a product, and improving customer satisfaction should be the constant pursuit of any forward-looking software company. How, then, can a software company deliver high-quality products to its users? This article discusses how to better assure software quality from several perspectives: quality strategy, the quality management process, common quality management techniques and tools, and the work of QA.