Similar Literature
20 similar documents found
1.
Zhang Y., Sheth D. 《Software, IEEE》 2006, 23(1): 82-90
One major reason software development projects fail is that the development process is invisible. Managers tend to rely on meetings and reports to understand project status and make decisions, leading to mismanagement due to inaccurate or incomplete information. Software projects typically collect information during development using different tools and store it in repositories. We present a statistical process control (SPC) method of defining, collecting, and analyzing software metrics from software repositories for MDD process control and improvement (PCI). This method, which we call mining software repositories (MSR), can help us change the traditional, static, record-keeping use of software data repositories to a new, active use for predicting and planning various aspects of MDD projects.
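To make the SPC idea concrete, the following is a minimal sketch (not from the paper) of an individuals control chart computed over a metric mined from a repository; the weekly defect counts and the choice of chart are illustrative assumptions:

```python
import numpy as np

def individuals_control_limits(x):
    """Individuals (X) chart limits using the average moving range (MR-bar)."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    moving_range = np.abs(np.diff(x)).mean()
    sigma_hat = moving_range / 1.128          # d2 constant for subgroups of size 2
    ucl = center + 3 * sigma_hat
    lcl = max(center - 3 * sigma_hat, 0.0)    # defect counts cannot go below zero
    return center, lcl, ucl

# Illustrative weekly defect counts mined from an issue tracker
defects = [12, 9, 14, 11, 10, 13, 25, 12, 11, 10]
center, lcl, ucl = individuals_control_limits(defects)
signals = [i for i, v in enumerate(defects) if v > ucl or v < lcl]
print(f"center={center:.1f}, LCL={lcl:.1f}, UCL={ucl:.1f}, signals at weeks {signals}")
```

Points beyond the control limits would flag weeks in which the development process deviated from its usual behavior and deserves investigation.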

2.
ISO 9000 certification is becoming an important competitive factor for foundries. Considerable organizational effort and expertise are needed to achieve this certification, which may be too time-consuming and expensive for many small to medium-sized companies. This paper presents the development of an object-oriented system to support the ISO certification process, with particular emphasis on clause 4.20 (use of statistical techniques). Recommendations for the design and implementation of each statistical tool are provided by the system, as well as training examples of suitable process monitoring and improvement procedures. The system runs on a microcomputer platform and was constructed using Visual Basic.

3.
The last 20 years have seen important advances in statistical experimental design, both in methodology and in the scope of applications. Of particular interest is the application of designed experiments to process and product design, development, and improvement. We present a three-phase methodology for process development and improvement based on statistically designed experiments, along with a comprehensive example for a surface mount technology process in the electronics industry.
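For readers unfamiliar with designed experiments, the sketch below builds a full 2^3 factorial design and estimates main effects; the factor names and response values are hypothetical and are not taken from the surface mount example:

```python
import itertools
import numpy as np

# Full 2^3 factorial design in coded units (-1 = low, +1 = high)
factors = ["reflow_temp", "conveyor_speed", "solder_volume"]   # hypothetical factors
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Hypothetical responses (e.g., defect rate per 10^6 joints) for the 8 runs
y = np.array([48.0, 41.0, 52.0, 44.0, 37.0, 30.0, 40.0, 33.0])

# Main effect of each factor: mean response at +1 minus mean response at -1
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.1f}")
```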

4.
Principal Component Analysis (PCA) is one of the most widely applied dimensionality reduction techniques for process monitoring and fault diagnosis in industrial processes. This work proposes a procedure based on the discriminant information contained in the principal components to determine those most significant for fault separability. The Tennessee Eastman Process industrial benchmark is used to illustrate the effectiveness of the proposal. The use of statistical hypothesis tests as a separability measure between multiple faults is proposed for the selection of the principal components, and the classifier profile concept is introduced for comparison purposes. Results show an improvement in the classification process compared with traditional techniques and StepWise selection: better classification for a fixed number of components, or fewer components required to reach a prescribed error rate. In addition, the computational advantage is demonstrated.
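A minimal sketch of the general idea, not the authors' exact procedure: project fault data onto principal components and rank the components by a one-way ANOVA F-test of separability across fault classes. The data, class means and component count below are illustrative assumptions:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Illustrative data: 3 fault classes, 50 samples each, 10 process variables
X = np.vstack([rng.normal(loc=m, size=(50, 10)) for m in (0.0, 0.5, 1.0)])
labels = np.repeat([0, 1, 2], 50)

scores = PCA(n_components=5).fit_transform(X)

# Rank components by the ANOVA F statistic across the fault classes
ranking = []
for k in range(scores.shape[1]):
    groups = [scores[labels == c, k] for c in np.unique(labels)]
    f_stat, p_val = stats.f_oneway(*groups)
    ranking.append((k, f_stat, p_val))

ranking.sort(key=lambda t: t[1], reverse=True)
for k, f_stat, p_val in ranking:
    print(f"PC{k + 1}: F={f_stat:.1f}, p={p_val:.3g}")
```

Components with large F statistics (small p-values) carry the most fault-separating information and would be retained for the classifier.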

5.
Current software process models (CMM, SPICE, etc.) strongly recommend the application of statistical control and measurement guides to define, implement, and evaluate the effects of different process improvements. However, whilst quantitative modeling has been widely used in other fields, it has received comparatively little attention in software process improvement. During the last decade, software process simulation has been used to address a wide diversity of management problems, including strategic management, technology adoption, understanding, training and learning, and risk management. In this work, a dynamic integrated framework for software process improvement is presented. This framework combines traditional estimation models with intensive use of dynamic simulation models of the software process. Its aim is to support qualitative and quantitative assessment for software process improvement and decision making, in order to achieve a higher software development process capability according to the Capability Maturity Model. The concepts underlying this framework have been implemented in a software process improvement tool that has been used in a local software organization. The results obtained and the lessons learned are also presented in this paper.
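The contrast between a static estimate and a dynamic process simulation can be illustrated with a toy rate model of task completion under a simple learning effect; the equations and parameter values are invented for illustration and are far simpler than a real software process simulator:

```python
# Toy system-dynamics style model: remaining work is depleted at a rate set by staff
# productivity, with productivity rising as staff gain experience on the project.
total_tasks = 400.0            # static estimate of project size (illustrative)
staff = 5.0                    # developers assigned
dt = 1.0                       # time step in days

remaining = total_tasks
productivity = 0.8             # tasks per person-day, grows with learning
day = 0
while remaining > 0 and day < 365:
    productivity = min(1.2, productivity + 0.002)   # simple learning effect
    remaining -= staff * productivity * dt
    day += 1

print(f"simulated completion in {day} days "
      f"(naive static estimate: {total_tasks / (5.0 * 0.8):.0f} days)")
```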

6.
The adoption of quality assurance methods based on software process improvement models has been regarded as an important source of variability in software productivity. Some companies perceive that their implementation has prohibitive costs, whereas some authors identify in their use a way to comply with software development patterns and standards, produce economic value and lead to corporate performance improvement. In this paper, we investigate the relationship between quality maturity levels and labor productivity, using a data set containing 687 Brazilian software firms. We study here the relationship between labor productivity, as measured through the annual gross revenue per worker ratio, and quality levels, which were appraised from 2006 to 2012 according to two distinct software process improvement models: MPS.BR and CMMI. We perform independent statistical tests using appraisals carried out according to each of these models, consequently obtaining a data set with as many observations as possible, in order to seek strong support for our research. We first show that MPS.BR and CMMI appraised quality maturity levels are correlated, but we find no statistical evidence that they are related to higher labor productivity or productivity growth. On the contrary, we present evidence suggesting that average labor productivity is higher in software companies without appraised quality levels. Moreover, our analyses suggest that companies with appraised quality maturity levels are more or less productive depending on factors such as their business nature, main origin of capital and maintained quality level.
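The kind of independent statistical test such a study relies on can be sketched as a nonparametric comparison of labor productivity between appraised and non-appraised firms; the synthetic data below only illustrate the test mechanics, not the paper's findings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic annual gross revenue per worker (in thousands), log-normally skewed
appraised     = rng.lognormal(mean=4.0, sigma=0.6, size=120)
non_appraised = rng.lognormal(mean=4.1, sigma=0.7, size=300)

# Mann-Whitney U test: do the productivity distributions of the two groups differ?
u_stat, p_val = stats.mannwhitneyu(appraised, non_appraised, alternative="two-sided")
print(f"median appraised={np.median(appraised):.0f}, "
      f"median non-appraised={np.median(non_appraised):.0f}, p={p_val:.3f}")
```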

7.
In industrial manufacturing, most batch processes are inherently multistage/multiphase in nature. To ensure both quality consistency of the manufactured products and safe operation of this kind of batch process, different multivariate statistical process control (MSPC) methods have been proposed in recent years. This paper gives an overview of multistage/multiphase statistical process control methods used for process analysis, monitoring, quality prediction and online quality improvement. Different types of phase divisions and modeling strategies are introduced and the method properties are discussed. For comparison, a selection guide to these methods for different application purposes is provided. Finally, some promising research directions are suggested based on existing works.

8.
The difficulties of achieving social acceptance for Software Quality Management systems have been underestimated in the past, and they will be exacerbated in the future by the globalization of the software market and the increasing use of cross-cultural development teams within multinational companies. Management that takes account of the cultural context of its endeavours will improve understanding, minimize risk and ensure a higher degree of success in improvement programs within the software industry. This paper addresses cross-cultural issues in Software Quality Management. Qualitative and quantitative research was carried out in five European countries using a postal questionnaire. Empirical measures of organizational culture, national culture and their interdependence are presented, together with interim instruments developed for the purpose of classifying organizations. Verification of the statistical results from the survey was carried out by triangulation, which included qualitative research methods in the form of interviews and observation. Cultural factors that may have a bearing on the successful adoption and implementation of Software Quality Management were identified, and an assessment model has been developed for use by organizations developing software in different parts of the world. The intention is that the recommendations following from the assessment will lead to greater cultural awareness in addressing quality and will provide a stimulus for improvement. The model aims to predict the degree of fit between the organizational and the national culture, and to give recommendations and guidelines for software process improvement.

9.
This paper investigates the forecasting accuracy of fuzzy extended group decisions in the adjustment of statistical benchmark results. DELPHI is a frequently used method for reaching accurate group consensus decisions. The concept of consensus depends on expert characteristics and is sometimes ensured by a facilitator's judgment. Fuzzy set theory deals with uncertain environments and has been adapted for DELPHI, yielding fuzzy-DELPHI (FD). The present paper extends the recent literature with an implementation of FD for the adjustment of statistical predictions. We propose a fuzzy-DELPHI adjustment process for improving accuracy and introduce an empirical study to illustrate its performance in validating adjustments of statistical forecasts of the dry bulk shipping index.
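One common way to operationalize a fuzzy-DELPHI round is to elicit triangular fuzzy adjustments from each expert, aggregate them, and defuzzify the result to correct the statistical forecast. The sketch below follows that generic recipe, which is not necessarily the authors' exact formulation; all numbers are illustrative:

```python
import numpy as np

# Each expert gives a triangular fuzzy adjustment (low, most likely, high) in percent
expert_opinions = [(-2.0, 1.0, 4.0), (0.0, 2.0, 5.0), (-1.0, 1.5, 3.0)]

# Aggregate: minimum of the lows, mean of the modes, maximum of the highs
lows, modes, highs = zip(*expert_opinions)
aggregated = (min(lows), float(np.mean(modes)), max(highs))

# Defuzzify via the centroid of the triangular fuzzy number
adjustment_pct = sum(aggregated) / 3.0

statistical_forecast = 1850.0            # e.g., a baseline index forecast
adjusted = statistical_forecast * (1 + adjustment_pct / 100.0)
print(f"aggregated fuzzy adjustment = {aggregated}, adjusted forecast = {adjusted:.1f}")
```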

10.
This paper reports on a study of the "underbody front" automated welding cell at Opel Belgium, a major automobile manufacturer within General Motors International Operations. It employs simulation in an experimental design framework to identify potential improvements in average daily output through management of buffer sizes at key buffer locations within the cell. Many practical applications of animated computer simulation stop at modeling and displaying the process under study. Simulation as a tool for process reengineering or enhancement can only reach its full potential if it is incorporated in a comprehensive statistical study, so as to attain statistically significant results. The paper also reports on the reactions of, and issues raised by, management when the experimental design methodology was presented as a tool for process enhancement and productivity improvement.
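The interplay between simulation and designed experiments can be sketched with a toy two-station line whose intermediate buffer size is the experimental factor; the cycle-time distributions, buffer levels and replication counts are invented for illustration and bear no relation to the Opel cell:

```python
import numpy as np

def simulate_line(buffer_size, n_parts=2000, seed=0):
    """Two stations in series with a finite buffer; returns throughput in parts per hour."""
    rng = np.random.default_rng(seed)
    s1 = rng.exponential(55.0, n_parts)   # station 1 cycle times (seconds), illustrative
    s2 = rng.exponential(60.0, n_parts)   # station 2 cycle times (seconds), illustrative
    d1 = np.zeros(n_parts)                # time part i leaves station 1
    d2 = np.zeros(n_parts)                # time part i leaves station 2
    for i in range(n_parts):
        finish1 = (d1[i - 1] if i > 0 else 0.0) + s1[i]
        space_free = d2[i - buffer_size - 1] if i - buffer_size - 1 >= 0 else 0.0
        d1[i] = max(finish1, space_free)            # blocked until downstream space frees
        d2[i] = max(d1[i], d2[i - 1] if i > 0 else 0.0) + s2[i]
    return n_parts / d2[-1] * 3600.0

# Replicated runs at each buffer size, as in a simple one-factor experiment
for b in (1, 3, 5, 10):
    rates = [simulate_line(b, seed=r) for r in range(10)]
    print(f"buffer={b:2d}: mean throughput={np.mean(rates):.1f} parts/h "
          f"(s={np.std(rates, ddof=1):.1f})")
```

Replicating each configuration and comparing means with their spread is what turns the animation into a statistically defensible comparison of buffer policies.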

11.
Methods of analysis and design for inelastic deformation processes with random input parameters are presented. Stochastic analysis makes use either of function evaluations for a synthetic statistical input sample (the Monte Carlo technique) or of a Taylor-series expansion of the response around the mean input. The latter relates to the perturbation method, extended over the duration of the deformation process by temporal integration. Tasks concerning process design are defined, and the use of the stochastic analysis techniques is discussed for exploring the parameter space as well as for design improvement and optimization. The suitability of either analysis technique for performing the particular tasks is pointed out. The design problem raises the issue of robustness to input scatter. Robust optimization respects both the mean and the variance of the design objective via the desirability function, a weighted combination of the two quantities. The exploration potential of synthetic sampling is demonstrated by an application where sensitivity-based techniques prove inadequate because of the local nature of the approximation.
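The robust-design step, combining the mean and variance of the objective into a single desirability, can be sketched with a Monte Carlo evaluation over the scattered input; the response function, weights and scatter level below are stand-in assumptions, not the deformation model of the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic input scatter, fixed once (common random numbers) so the search stays smooth
noise = np.random.default_rng(0).normal(0.0, 0.3, 500)

def process_response(design, noise):
    """Toy stand-in for an inelastic-deformation response (e.g., a thickness error)."""
    return (design - 2.0) ** 2 + 0.8 * noise * design

def desirability(design, w_mean=1.0, w_var=2.0):
    """Weighted combination of mean and variance of the response (lower is better)."""
    y = process_response(design, noise)
    return w_mean * y.mean() + w_var * y.var()

result = minimize_scalar(desirability, bounds=(0.0, 4.0), method="bounded")
print(f"robust optimum near design = {result.x:.2f}, "
      f"nominal optimum (ignoring scatter) = 2.00")
```

With the variance term included, the optimum shifts away from the nominal minimum toward a design that is less sensitive to the input scatter.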

12.
Inductive learning is a method for automated knowledge acquisition that converts a set of training data into a knowledge structure. In the process of knowledge induction, statistical techniques can play a major role in improving performance. In this paper, we investigate the competition and integration between traditional statistical and inductive learning methods. First, the competition between the two approaches is examined. Then, a general framework for integrating them is presented. This framework suggests three possible integrations: (1) statistical methods as preprocessors for inductive learning, (2) inductive learning methods as preprocessors for statistical classification, and (3) the combination of the two methods to develop new algorithms. Finally, empirical evidence concerning these three possible integrations is discussed. The general conclusion is that algorithms integrating statistical and inductive learning concepts are likely to make the most improvement in performance.
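The first of these integrations, a statistical method acting as a preprocessor for an inductive learner, can be sketched as follows; the data set and the choice of standardization plus PCA as the statistical stage are illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)

# Inductive learning alone
tree_only = DecisionTreeClassifier(random_state=0)

# Statistical preprocessing (standardization + PCA) feeding the inductive learner
stat_then_tree = make_pipeline(StandardScaler(), PCA(n_components=5),
                               DecisionTreeClassifier(random_state=0))

for name, model in [("tree only", tree_only), ("PCA -> tree", stat_then_tree)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```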

13.
The main purpose of this article is to show how one can integrate statistical and nonstatistical items of evidence in the belief function framework. First, we use the properties of consonant belief functions to define the belief that the true mean of a variable lies in a given interval when a statistical test is performed for the variable. Second, we use this definition to determine the sample size for a statistical test when a desired level of belief is needed from the sample. Third, we determine the level of belief that the true mean lies in a given interval when a statistical test yields certain values for the sample mean and the standard deviation of the mean for the variable. Finally, we use the auditing situation to illustrate the process of integrating statistical and nonstatistical items of evidence. © 1994 John Wiley & Sons, Inc.
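A hedged numerical illustration of the third step: take the belief that the true mean lies in an interval to be the largest confidence level whose two-sided normal interval, centered at the sample mean, fits inside the target interval. This mirrors the consonant construction only loosely and is not necessarily the authors' exact definition; the account-balance numbers are invented:

```python
from scipy import stats

def belief_mean_in_interval(sample_mean, std_error, low, high):
    """Largest (1 - alpha) whose two-sided normal CI for the mean fits inside [low, high]."""
    if not (low <= sample_mean <= high):
        return 0.0                        # interval does not even cover the point estimate
    # Half-width available on each side; the binding side limits the confidence level
    half_width = min(sample_mean - low, high - sample_mean)
    z = half_width / std_error
    return 2 * stats.norm.cdf(z) - 1.0    # confidence level of a CI with that half-width

# Example: audited account balances with sample mean 102.0 and standard error 3.0
print(f"belief(mean in [95, 110]) = {belief_mean_in_interval(102.0, 3.0, 95.0, 110.0):.3f}")
```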

14.
Software organizations can significantly improve the quality of their output if they have a defined and documented software process, together with the appropriate techniques and tools to measure its effectiveness. Without a defined process it is impossible to measure success or focus on how development capability can be enhanced. To date, a number of software process improvement frameworks have been developed and implemented. However, most of these models have been targeted at large-scale producers and apply to companies that use traditional development techniques. Smaller companies, and those operating in development areas where speed of delivery is paramount, have not as yet had process improvement paradigms available for adoption. This study examined the software process in a small company and emerged with the recommendation of the Dynamic Systems Development Method (DSDM) and the Personal Software Process (PSP) for achieving software process improvement.

15.
Reuse-oriented and component-based development has become a new software development paradigm, and many enterprises have already adopted or plan to adopt this way of developing software. As with general process capability improvement, these enterprises face the problem of assessing their current software reuse capability in order to plan the next improvement steps. However, popular process standards such as CMM/CMMI and SPICE lack tailoring and customization for reuse-oriented and component-based development processes, which to some extent hinders software enterprises from adopting reuse-based development methods. This paper surveys industrial practice and research related to software reuse capability assessment and improvement and, on this basis, proposes a systematic software reuse capability assessment framework. The framework provides a staged assessment model for an enterprise's reuse-oriented development process and can therefore guide and support decision making in reuse-oriented process improvement. The implementation process of the framework is also discussed.

16.
This paper presents a case study of the installation and use of an electronic process guide (EPG) within a small-to-medium software development company. The purpose of the study is to better understand how software engineers use this technology so that it can be improved and better used to support software process improvement. In the study, the EPG was used to guide new processes in a software improvement programme. Use of the EPG was studied over a period of 8 months, with data collected through access logs, questionnaires and interviews. The results show that the improvement programme was successful in improving project documentation, project management and the company's relationship with its customers. The EPG contributed to the improvement programme by supporting the creation of templates for key project documentation, assisting with project planning and estimation, and providing a forum for discussion of process and work practices. The biggest improvements that could be made to the EPG would be to provide better navigation tools, including a graphical overview of the process, provide tailoring facilities, include examples and experience, and link to a project management tool.

17.
With the increased availability of personal computers and statistical software packages, it is inevitable that clinical investigators will increasingly attempt to perform data management and statistical analysis themselves. Reviews of statistical packages are abundant in computing and statistical journals; however, the majority were not written for clinical investigators in medicine. This paper presents an analytic approach to evaluating the suitability of statistical packages for use by clinical investigators for data-management and preliminary statistical-analysis purposes. The evaluation scheme addresses five areas of concern: availability of data-management features; availability of basic statistical-analysis features; ease of use; documentation; and quality of programs. Among six statistical packages reviewed by this process, CRISP is recommended as the most suitable package for clinical investigators to use for data management and preliminary statistical analysis.

18.
Many biological sequence databases contain a large number of redundant sequences. Such redundancy typically hampers statistical analysis and processing of the databases and consumes additional storage and computing resources. To address this problem, we designed an algorithm for removing redundant protein sequences. The algorithm generates a non-redundant sequence set based on the graph-theoretic concept of a maximum independent set. It improves on the approach originally designed by Hobohm and Sander and adopted by many existing protein redundancy-removal programs, which first partitions sequences into clusters and then extracts representative sequences; our algorithm yields a larger non-redundant representative set and avoids discarding sequences that are in fact non-redundant. We developed FastCluster, a program implementing this algorithm, which can be used to remove redundant sequences from protein databases.
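A minimal sketch of the graph-based idea, not the FastCluster implementation itself: connect sequence pairs whose similarity exceeds a threshold and keep an independent set of the graph as the non-redundant representatives. The similarity measure here is a crude identity ratio, and networkx returns a maximal (not necessarily maximum) independent set, so this only approximates the approach described:

```python
import itertools
import networkx as nx
from difflib import SequenceMatcher

sequences = {
    "seq1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "seq2": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",   # near-duplicate of seq1
    "seq3": "MLSRAVCGTSRQLAPALAYLGSRQ",
    "seq4": "MNIFEMLRIDEGLRLKIYKDTEGYYTIGIGHLL",
}

def similarity(a, b):
    """Crude global identity ratio; real tools use alignment-based identity."""
    return SequenceMatcher(None, a, b).ratio()

# Redundancy graph: an edge joins any pair whose similarity exceeds the threshold
graph = nx.Graph()
graph.add_nodes_from(sequences)
for (n1, s1), (n2, s2) in itertools.combinations(sequences.items(), 2):
    if similarity(s1, s2) > 0.9:
        graph.add_edge(n1, n2)

# An independent set = a set of mutually non-redundant representative sequences
non_redundant = nx.maximal_independent_set(graph, seed=0)
print(sorted(non_redundant))
```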

19.
In the domain of high-speed impact between solids, the simulation of a single trial requires large resources and entails an elevated computational cost. The objective of this research is to find the best neural network for a new ballistic impact problem, making the most of the limited trials available and simplifying the network architecture. To achieve this goal, this paper proposes a performance tuning process based on four stages. These stages include existing statistical techniques, a combination of proposals to improve performance, and an analysis of the influence of each variable. To measure the quality of the different networks, two criteria based on information theory have been incorporated that reflect the fit to the data relative to model complexity. The results obtained show that applying an integrated tuning process in this domain improves the performance and efficiency of a neural network in comparison with different machine learning alternatives.
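The use of information-theoretic criteria to balance fit against network complexity can be sketched by scoring candidate architectures with an AIC-style criterion computed from the residual sum of squares; the data, architectures and criterion form are illustrative assumptions rather than those of the ballistic-impact study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (300, 3))                       # illustrative impact parameters
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 300)

def aic_for(hidden_layers):
    """AIC-style score: n*ln(SSE/n) + 2k, with k = number of trainable weights."""
    model = MLPRegressor(hidden_layer_sizes=hidden_layers, max_iter=3000,
                         random_state=0).fit(X, y)
    sse = np.sum((y - model.predict(X)) ** 2)
    k = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    n = len(y)
    return n * np.log(sse / n) + 2 * k

for layers in [(5,), (20,), (20, 10)]:
    print(f"hidden layers {layers}: AIC = {aic_for(layers):.1f}")
```

The architecture with the lowest score offers the best trade-off between fit and complexity under this criterion.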

20.
Early detection of unnatural control chart patterns (CCPs) is desirable for any industrial process. Most recent CCP recognition work relies on statistical feature extraction and artificial neural network (ANN) based recognizers. In this paper, a two-stage hybrid detection system is proposed using a support vector machine (SVM) combined with self-organizing maps. The discrete cosine transform (DCT) of the CCP data is taken as input. Simulation results show a significant improvement over conventional recognizers, with a reduced detection window length. An analogous recognition system consisting of a statistical feature vector input to the SVM classifier is further developed for comparison.
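The front end of such a recognizer, DCT coefficients of a control chart window fed to an SVM, can be sketched as follows; the pattern generator, window length and number of retained coefficients are simplified stand-ins for illustration:

```python
import numpy as np
from scipy.fft import dct
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_pattern(kind, length=32):
    """Simulate a standardized control chart window: natural, upward trend, or upward shift."""
    noise = rng.normal(0, 1, length)
    t = np.arange(length)
    if kind == "trend":
        return noise + 0.1 * t
    if kind == "shift":
        return noise + 2.0 * (t >= length // 2)
    return noise                                   # natural (in-control) pattern

kinds = ["natural", "trend", "shift"]
X = np.array([dct(make_pattern(k), norm="ortho")[:8]       # first 8 DCT coefficients
              for k in kinds for _ in range(200)])
y = np.repeat(np.arange(len(kinds)), 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"test accuracy on simulated patterns: {clf.score(X_test, y_test):.3f}")
```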

