Similar Documents
A total of 20 similar documents were found.
1.
Boehm, B., & Huang, L. G. (2003). Computer, 36(3), 33–41.
The information technology field's accelerating rate of change makes feedback control essential for organizations to sense, evaluate, and adapt to changing value propositions in their competitive marketplace. Although traditional project feedback control mechanisms can manage the development efficiency of stable projects in well-established value situations, they do little to address the project's actual value, and can lead to wasteful misuse of an organization's scarce resources. The value-based approach to software development integrates value considerations into current and emerging software engineering principles and practices, while developing an overall framework in which these techniques compatibly reinforce each other.

2.
Many software quality initiatives fail because they do not take account of the range of views that people have of quality. New approaches to software quality improvement will not work unless software developers believe in them, no matter how enthusiastic managers may be. This paper reports on a pilot study using the repertory grid technique that found evidence to support these assertions. The study findings justify further work and show that while the repertory grid technique is an appropriate instrument in this area it is resource intensive to apply and may not be practical in a wider study of a representative sample of the IT industry. The paper has practical recommendations for successful introduction of new software quality programmes. These recommendations stress the need for effective communication, leading to a shared understanding of quality, and for realistic goals that recognize the pressure of development schedules.

3.
Software evolution studies have traditionally focused on individual products. In this study we scale up the idea of software evolution by considering software compilations composed of a large quantity of independently developed products, engineered to work together. With the success of libre (free, open source) software, these compilations have become common in the form of ‘software distributions’, which group hundreds or thousands of software applications and libraries into an integrated system. We have performed an exploratory case study on one of them, Debian GNU/Linux, finding some significant results. First, Debian has been doubling in size every 2 years, totalling about 300 million lines of code as of 2007. Second, the mean size of packages has remained stable over time. Third, the number of dependencies between packages has been growing quickly. Finally, while C is still by far the most commonly used programming language for applications, use of the C++, Java, and Python languages has significantly increased. The study helps not only to understand the evolution of Debian, but also yields insights into the evolution of mature libre software systems in general.
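The "doubling every two years" finding corresponds to exponential growth, and the doubling time can be estimated from release sizes with a log-linear fit. Below is a minimal sketch using made-up sizes, not the paper's measured data:

```python
import numpy as np

# Hypothetical (year, millions of source lines) pairs for a distribution;
# these numbers are illustrative, not Debian's actual measurements.
years = np.array([1999, 2001, 2003, 2005, 2007])
mloc = np.array([19, 38, 76, 150, 300])

# Fit log2(size) = slope * year + intercept; the doubling time in years
# is then 1 / slope.
slope, _intercept = np.polyfit(years, np.log2(mloc), 1)
print(f"estimated doubling time: {1 / slope:.1f} years")  # ~2.0
```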

Jesus M. Gonzalez-Barahona teaches and does research at Universidad Rey Juan Carlos, Mostoles (Spain). His research interests include libre software development, with a focus on quantitative and empirical studies, and distributed tools for collaboration in libre software projects. He works in the GSyC/LibreSoft research team.
Gregorio Robles is Associate Professor at the Universidad Rey Juan Carlos, where he earned his PhD in 2006. His research interests lie in the empirical study of libre software, ranging from technical issues to those related to the human resources of the projects.
Martin Michlmayr has been involved in various free and open source software projects for well over 10 years. He acted as the leader of the Debian project for two years and currently serves on the board of the Open Source Initiative (OSI). Martin works for HP as an Open Source Community Expert and acts as the community manager of FOSSBazaar. Martin holds Master's degrees in Philosophy, Psychology and Software Engineering, and earned a PhD from the University of Cambridge.
Juan José Amor has an M.Sc. in Computer Science from the Universidad Politécnica de Madrid and is currently pursuing a Ph.D. at the Universidad Rey Juan Carlos, where he is also a project manager. His research interests are related to libre software engineering, mainly effort and schedule estimates in libre software projects. Since 1995 he has collaborated in several libre software organizations; he is also co-founder of LuCAS, the best-known libre software documentation portal in Spanish, and Hispalinux, the biggest Spanish Linux user group. He also collaborates with Linux+.
Daniel M. German is associate professor of computer science at the University of Victoria, Canada. His main areas of interest are software evolution, open source software engineering and intellectual property.

4.
To address the issues of software product quality, the Joint Technical Committee 1 of the International Organization for Standardization and International Electrotechnical Commission published a set of software product quality standards known as ISO/IEC 9126. These standards specify software product quality's characteristics and subcharacteristics and their metrics. Based on a user survey, this study of the standard helps clarify quality attributes and provides guidance on the resulting standards.
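For orientation, ISO/IEC 9126-1 organizes product quality into six characteristics, each refined into subcharacteristics. A minimal sketch of that hierarchy as a lookup structure (subcharacteristic lists abbreviated for illustration):

```python
# The six ISO/IEC 9126-1 characteristics with representative
# subcharacteristics (abbreviated; not the full normative lists).
ISO9126 = {
    "functionality": ["suitability", "accuracy", "interoperability", "security"],
    "reliability": ["maturity", "fault tolerance", "recoverability"],
    "usability": ["understandability", "learnability", "operability"],
    "efficiency": ["time behaviour", "resource utilisation"],
    "maintainability": ["analysability", "changeability", "stability", "testability"],
    "portability": ["adaptability", "installability", "replaceability"],
}

def subcharacteristics(characteristic: str) -> list[str]:
    """Return the subcharacteristics of a top-level quality characteristic."""
    return ISO9126.get(characteristic.lower(), [])

print(subcharacteristics("reliability"))
```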

5.
This paper presents a case history of Mentor Graphics using a set of quality metrics to track development progress for a recent major software release. It provides background on how Mentor Graphics originally began using software metrics to measure product quality, how this became accepted, and how these metrics later fell out of favour. To restore these metrics to effective use, process changes were required for setting quality and metric targets, and for the way the metrics are used for tracking development progress. With these process changes in place, and the addition of a new metric, the case history demonstrates that the metric set could be used effectively to indicate problems in this release and help manage changes to the plan for completion of the release. The lessons learned in this case history are presented, along with subsequent data that further validates these metrics.

6.
In 1995, Watts Humphrey introduced the Personal Software Process in his book, A Discipline for Software Engineering (Addison Wesley Longman, Reading, Mass.). Programmers who use the PSP gather measurements related to their own work products and the process by which they were developed, then use these measures to drive changes to their development behavior. The PSP focuses on defect reduction and estimation improvement as the two primary goals of personal process improvement. Through individual collection and analysis of personal data, the PSP shows how individuals can implement empirically guided software process improvement. The full PSP curriculum leads practitioners through a sequence of seven personal processes. The first and most simple PSP process, PSP0, requires that practitioners track time and defect data using a Time Recording Log and Defect Recording Log, then fill out a detailed Project Summary Report. Later processes become more complicated, introducing size and time estimation, scheduling, and quality management practices such as defect density prediction and cost-of-quality analyses. After almost three years of teaching and using the PSP, we have experienced its educational benefits. As researchers, however, we have also uncovered evidence of certain limitations. We believe that awareness of these limitations can help improve appropriate adoption and evaluation of the method by industrial and academic practitioners.
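To make the PSP0 mechanics concrete, here is a minimal sketch of time and defect logging with a summary roll-up. This is our own illustration of the idea, not Humphrey's actual forms:

```python
from dataclasses import dataclass, field

@dataclass
class TimeEntry:
    phase: str          # e.g. "design", "code", "test"
    minutes: int

@dataclass
class DefectEntry:
    phase_injected: str
    phase_removed: str
    fix_minutes: int

@dataclass
class ProjectSummary:
    """Roll-up of the Time Recording Log and Defect Recording Log."""
    time_log: list[TimeEntry] = field(default_factory=list)
    defect_log: list[DefectEntry] = field(default_factory=list)

    def total_minutes(self) -> int:
        return sum(e.minutes for e in self.time_log)

    def defects_removed_in(self, phase: str) -> int:
        return sum(1 for d in self.defect_log if d.phase_removed == phase)

# Usage: record work and defects as they happen, then summarize.
summary = ProjectSummary()
summary.time_log.append(TimeEntry("code", 95))
summary.defect_log.append(DefectEntry("code", "test", fix_minutes=12))
print(summary.total_minutes(), summary.defects_removed_in("test"))
```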

7.
The goal of the GUARDS project is to design and develop a generic fault-tolerant computer architecture that can be built from predefined standardised components. The architecture favours the use of commercial off-the-shelf (COTS) hardware and software components. However, the assessment and selection of COTS components is a non-trivial task as it requires balancing a myriad of requirements from end-users and the preliminary architecture design. In this paper, we present the requirements and assessment criteria for a specific COTS software component, the operating system kernel. As an interface specification constitutes a major compatibility criterion for the selection of COTS components in GUARDS, a particular emphasis is placed on operating system conformance to the POSIX 1003.1 standard. We discuss the general lessons learned from the assessment process and raise a number of questions relevant to the assessment of any COTS software component.

8.
9.
10.
11.
A critical problem in software development is the monitoring, control and improvement of the processes of software developers. Software processes are often not explicitly modeled, and manuals to support the development work contain abstract guidelines and procedures. Consequently, there are huge differences between ‘actual’ and ‘official’ processes: “the actual process is what you do, with all its omissions, mistakes, and oversights. The official process is what the book, i.e., a quality manual, says you are supposed to do” (Humphrey in A discipline for software engineering. Addison-Wesley, New York, 1995). Software developers lack support to identify, analyze and better understand their processes. Consequently, process improvements are often not based on an in-depth understanding of the ‘actual’ processes, but on organization-wide improvement programs or ad hoc initiatives of individual developers. In this paper, we show that, based on particular data from software development projects, the underlying software development processes can be extracted and that more realistic process models can be constructed automatically. This is called software process mining (Rubin et al. in Process mining framework for software processes. Software process dynamics and agility. Springer Berlin, Heidelberg, 2007). The goal of process mining is to better understand the development processes, to compare constructed process models with the ‘official’ guidelines and procedures in quality manuals and, subsequently, to improve development processes. This paper reports on process mining case studies in a large industrial company in The Netherlands. The subject of the process mining is a particular process: the change control board (CCB) process. The results of process mining are fed back to practice in order to subsequently improve the CCB process.
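As a hedged illustration of the core process-mining step (a generic technique, not the specific tooling used in the paper), a directly-follows graph can be derived from an event log of per-case activity sequences:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often activity a is directly followed by activity b
    within the same case; event_log maps case id -> time-ordered list
    of activity names."""
    dfg = Counter()
    for activities in event_log.values():
        for a, b in zip(activities, activities[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical CCB-style event log (case id -> ordered activities);
# activity names are invented for illustration.
log = {
    "CR-1": ["submit", "analyze", "approve", "implement", "verify"],
    "CR-2": ["submit", "analyze", "reject"],
    "CR-3": ["submit", "analyze", "approve", "implement", "verify"],
}
for (a, b), n in sorted(directly_follows(log).items()):
    print(f"{a} -> {b}: {n}")
```

Comparing the mined graph against the "official" CCB procedure is then a matter of checking which observed transitions the quality manual does not sanction.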

12.
13.
This paper was motivated by a request to review relative operations performance for various fabrication facilities within a leading Taiwanese semiconductor manufacturer. Performance evaluation is important but often controversial. To dispel the controversy, we propose a two-stage fabrication process model to systematically analyze metrics currently adopted, and show that the commonly used wafer-based indices are biased for operations performance. Instead, they should be decomposed into productivity, representing true operations performance, and manufacturability. We suggest the use of data envelopment analysis because of its confirmed linkages to other widely used productivity measures and its overall performance via relative comparisons. The case study illustrates how the two-stage model evaluates and analyzes real-world operations, and the empirical results show the drawbacks of conventional methods.
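For reference, the basic CCR ratio model of data envelopment analysis (the standard textbook formulation, not necessarily the exact variant used in this case study) evaluates the efficiency of unit $o$ with inputs $x_{ij}$ and outputs $y_{rj}$ by choosing weights that maximize its output-to-input ratio while keeping every unit's ratio at most one:

```latex
\max_{u,v}\;\theta_o=\frac{\sum_{r} u_r\, y_{ro}}{\sum_{i} v_i\, x_{io}}
\quad\text{s.t.}\quad
\frac{\sum_{r} u_r\, y_{rj}}{\sum_{i} v_i\, x_{ij}} \le 1 \;\;\forall j,
\qquad u_r,\, v_i \ge 0.
```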

14.
Early quality prediction: a case study in telecommunications
Predicting the quality of modules lets developers focus on potential problems and make improvements earlier in development, when it is more cost-effective. The authors applied discriminant analysis to identify fault-prone modules in a large telecommunications system prior to testing.
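A minimal sketch of the general approach, using scikit-learn's linear discriminant analysis on hypothetical module metrics (not the authors' actual dataset or feature set):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-module metrics: [lines of code, cyclomatic complexity,
# number of changes]; labels: 1 = fault-prone, 0 = not fault-prone.
X = np.array([[120, 8, 3], [4500, 60, 25], [300, 12, 5],
              [2800, 45, 18], [90, 4, 1], [3900, 52, 30]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LinearDiscriminantAnalysis().fit(X, y)

# Flag a new module before testing begins.
new_module = np.array([[3100, 48, 22]])
print("fault-prone" if model.predict(new_module)[0] else "not fault-prone")
```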

15.
Given the complexity of many contemporary software systems, it is often difficult to gauge the overall quality of their underlying software components. A potential technique to automatically evaluate such qualitative attributes is to use software metrics as quantitative predictors. In this case study, an aggregation technique based on fuzzy integration is presented that combines the predicted qualitative assessments from multiple classifiers. Multiple linear classifiers are presented with randomly selected subsets of automatically generated software metrics describing components from a sophisticated biomedical data analysis system. The external reference test is a software developer’s thorough assessment of complexity, maintainability, and usability, which is used to assign corresponding quality class labels to each system component. The aggregated qualitative predictions using fuzzy integration are shown to be superior to the predictions from the respective best single classifiers.
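One standard fuzzy-integration scheme is the discrete Choquet integral with respect to a fuzzy measure. The sketch below is a generic illustration with an invented measure, not necessarily the exact integral or measure the study used:

```python
def choquet(scores, measure):
    """Discrete Choquet integral: scores maps classifier -> score in
    [0, 1]; measure maps frozensets of classifiers -> weight in [0, 1],
    monotone, with the full set measuring 1."""
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    remaining = set(scores)
    for classifier, value in items:
        total += (value - prev) * measure[frozenset(remaining)]
        prev = value
        remaining.remove(classifier)
    return total

# Hypothetical fuzzy measure over three classifiers; the super-additive
# pair {c1, c2} models classifiers that reinforce each other.
g = {
    frozenset(): 0.0,
    frozenset({"c1"}): 0.3, frozenset({"c2"}): 0.4, frozenset({"c3"}): 0.2,
    frozenset({"c1", "c2"}): 0.8, frozenset({"c1", "c3"}): 0.5,
    frozenset({"c2", "c3"}): 0.6, frozenset({"c1", "c2", "c3"}): 1.0,
}

# Each classifier's confidence that a component is "high quality".
print(choquet({"c1": 0.7, "c2": 0.9, "c3": 0.4}, g))  # -> 0.72
```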

16.
17.
In our study, we attempted to further investigate how Web 2.0 technologies influence workplace learning. Our particular interest was in using a Wiki as a tool for corporate knowledge exchange, with a focus on informal learning. In this study, we collaborated with a multinational software development company that has used a Wiki as a corporate tool since 2001. For our research, we used three different sources of data. First, we interviewed top management. Next, we acquired usage statistics from the company Wiki. Finally, we distributed a questionnaire to obtain users' feedback. The analysis provided many interesting results. One of the main conclusions is that the Wiki is used successfully in this company, and a large majority of employees find it useful. Additionally, the Wiki did aid informal learning, but there is still plenty of room for improvement.

18.
In the software product line research, product variants typically differ by their functionality, and quality attributes are not purposefully varied. The goal is to study purposeful performance variability in software product lines, in particular, the motivation to vary performance and the strategy for realizing performance variability in the product line architecture. The research method was a theory-building case study, augmented with a systematic literature review. The case was a mobile network base station product line with capacity variability. The data collection, analysis and theorizing were conducted in several stages: the initial case study results were augmented with accounts from the literature. We constructed three theoretical models to explain and characterize performance variability in software product lines; the models aim to be generalizable beyond the single case. The results describe capacity variability in a base station product line. Thereafter, theoretical models of performance variability in software product lines in general are proposed. Performance variability is motivated by customer needs and characteristics, by trade-offs and by varying operating environment constraints. Performance variability can be realized by hardware or software means; moreover, the software can either realize performance differences in an emergent way through impacts from other variability or by utilizing purposeful varying design tactics. The results point out two differences compared with the prevailing literature. Firstly, when customer needs and characteristics enable price differentiation, performance may be varied even with no trade-offs or production cost differences involved. Secondly, due to the dominance of feature modeling, the literature focuses on realization through impact management. However, performance variability can also be realized through purposeful design tactics that downgrade the available software resources, and by more efficient hardware.

19.
We use 810 versions of the Linux kernel, released over a period of 14 years, to characterize the system’s evolution, using Lehman’s laws of software evolution as a basis. We investigate different possible interpretations of these laws, as reflected by different metrics that can be used to quantify them. For example, system growth has traditionally been quantified using lines of code or number of functions, but functional growth of an operating system like Linux can also be quantified using the number of system calls. In addition we use the availability of the source code to track metrics, such as McCabe’s cyclomatic complexity, that have not been tracked across so many versions previously. We find that the data supports several of Lehman’s laws, mainly those concerned with growth and with the stability of the process. We also make some novel observations, e.g. that the average complexity of functions is decreasing with time, but this is mainly due to the addition of many small functions.
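As an illustration of one tracked metric, McCabe's cyclomatic complexity can be approximated by counting decision points and adding one. The sketch below analyzes Python source rather than the kernel's C, and is our own approximation, not the paper's tooling:

```python
import ast

# Node types that introduce an extra execution path. This is an
# approximation; production tools also count constructs such as
# assert statements and comprehension filters.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity of a single function's source."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

print(cyclomatic_complexity(
    "def f(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            x -= 1\n"
    "    return x\n"
))  # -> 3: two decision points plus one
```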

20.
The aim of this work is to measure the impact of aspect-oriented programming on software performance. We hypothesized as follows: adding aspects to a base program will affect its performance because of the overhead caused by control-flow switching, and this incremental effect on performance becomes more pronounced as the number of join points increases. To test our hypotheses we carried out a case study of two concurrent architectures: Half-Sync/Half-Async and Leader/Followers. Aspects were extracted and encapsulated, and the performance of the base program was compared to that of the aspect program. Our results show that the aspect-oriented approach does not have a significant effect on performance, and that in some cases an aspect-oriented program even outperforms the non-aspect program. We also investigated the effect of cache fault rate on performance for both aspect and non-aspect programs. Based on our experiments, the results demonstrate that there is a close correlation between cache fault rate and performance, which may be in favor of aspect code if some aspects are frequently accessed. Additionally, the introduction of a large number of join points does not have a significant effect on performance.
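To illustrate the kind of overhead being measured, here is a Python sketch that uses a decorator as a stand-in for advice woven at a join point and compares woven against plain calls. This mimics the idea only; it is not the study's AspectJ-style setup:

```python
import functools
import time

def timing_aspect(func):
    """Advice woven around a join point: accumulate time spent in the
    wrapped call, including the wrapper's own control-flow switch."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        wrapper.elapsed += time.perf_counter() - start
        return result
    wrapper.elapsed = 0.0
    return wrapper

@timing_aspect
def handle_request(n):  # the "base program" operation at a join point
    return sum(range(n))

def benchmark(fn, calls=100_000, n=100):
    start = time.perf_counter()
    for _ in range(calls):
        fn(n)
    return time.perf_counter() - start

woven = benchmark(handle_request)               # aspect-woven version
plain = benchmark(handle_request.__wrapped__)   # original function
print(f"relative overhead of weaving: {(woven - plain) / plain:.1%}")
```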
