Similar Documents
20 similar documents found
1.
A critical issue in software project management is the accurate estimation of size, effort, resources, cost, and time spent in the development process. Underestimates may lead to time pressures that may compromise full functional development and the software testing process. Likewise, overestimates can result in noncompetitive budgets. In this paper, artificial neural network and stepwise regression based predictive models are investigated, aiming at offering alternative methods for those who do not believe in estimation models. The results presented in this paper compare the performance of both methods and indicate that these techniques are competitive with the APF, SLIM, and COCOMO methods.
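A minimal sketch of the two model families compared above, fitted to a synthetic size/effort dataset. The features, data values, and hyperparameters are illustrative assumptions, and plain linear regression stands in for the stepwise procedure, which scikit-learn does not provide directly:

```python
# Sketch: compare a neural-network and a linear-regression effort estimator
# on an illustrative dataset. All data and hyperparameters are assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
size_kloc = rng.uniform(5, 300, 60)                 # hypothetical project sizes
team_exp = rng.uniform(1, 10, 60)                   # hypothetical team experience
X = np.column_stack([size_kloc, team_exp])
effort_pm = 2.5 * size_kloc**1.05 / team_exp**0.3   # synthetic "true" effort (person-months)
y = effort_pm * rng.normal(1.0, 0.15, 60)           # add noise

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
reg = LinearRegression()

for name, model in [("ANN", ann), ("linear regression", reg)]:
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: cross-validated MAE = {mae:.1f} person-months")
```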

2.
To address the low accuracy of effort estimation currently prevalent in the software industry, a method for dynamically estimating software project effort is proposed. First, before project execution, considering both the linear and nonlinear relationships between effort and size, a size-based effort estimation model is used to produce an initial estimate. Second, during project execution, the effort estimate is adjusted as project information becomes progressively more complete. Finally, after project completion, the estimation method and its results are evaluated, and the power-function relationship between effort and schedule is proposed as a guideline for validating the estimates, thereby improving their accuracy. The method treats effort estimation as a task that spans the entire project and manages it as a dynamic process, giving organizations a simple and effective way to improve estimation accuracy and enabling continuous improvement of the estimation method.
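A minimal sketch of the validation guideline mentioned above: fit a power-function relationship E = a·D^b between effort and schedule from historical projects, then flag a new estimate that falls too far from the fitted curve. The historical data, tolerance band, and sample estimate are illustrative assumptions:

```python
# Sketch: validate an effort estimate against the effort-schedule power law
# E = a * D**b fitted from historical projects (all numbers are hypothetical).
import numpy as np

history_duration = np.array([4, 6, 9, 12, 18])      # months
history_effort   = np.array([10, 22, 45, 80, 160])  # person-months

# Fit log E = log a + b * log D by least squares.
b, log_a = np.polyfit(np.log(history_duration), np.log(history_effort), 1)
a = np.exp(log_a)

def check_estimate(duration_months, effort_estimate, tolerance=0.30):
    """Flag an estimate deviating more than `tolerance` from the fitted curve."""
    expected = a * duration_months ** b
    deviation = abs(effort_estimate - expected) / expected
    return expected, deviation <= tolerance

expected, ok = check_estimate(10, 40)
print(f"Fitted E = {a:.2f} * D^{b:.2f}; expected ~{expected:.0f} PM -> "
      f"{'plausible' if ok else 'recheck the estimate'}")
```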

3.
In this paper we describe a process for evaluating the architectures of large, complex software-intensive systems. This process involves both social and technical aspects. The social aspects deal with planning and running an evaluation. The technical aspects concentrate on the representation of architectural information, standard questions, analyses, and quality attribute characterizations. We then take the generic notion of architecture evaluation and discuss techniques for applying it to the domain of real-time systems.

4.
During discussions with a group of U.S. software developers we explored the effect of schedule estimation practices and their implications for software project success. Our objective is not only to explore the direct effects of cost and schedule estimation on the perceived success or failure of a software development project, but also to quantitatively examine a host of factors surrounding the estimation issue that may impinge on project outcomes. We later asked our initial group of practitioners to respond to a questionnaire that covered some important cost and schedule estimation topics. Then, in order to determine if the results are generalizable, two other groups, from the US and Australia, completed the questionnaire. Based on these convenience samples, we conducted exploratory statistical analyses to identify determinants of project success and used logistic regression to predict project success for the entire sample, as well as for each of the groups separately. From the developer point of view, our overall results suggest that success is more likely if the project manager is involved in schedule negotiations, adequate requirements information is available when the estimates are made, initial effort estimates are good, estimates take staff leave into account, and staff are not added late to meet an aggressive schedule. For these organizations we found that developer input to the estimates did not improve the chances of project success or improve the estimates. We then used the logistic regression results from each single group to predict project success for the other two remaining groups combined. The results show that there is a reasonable degree of generalizability among the different groups.
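A minimal sketch of the kind of logistic-regression analysis described above, predicting project success from binary estimation-practice indicators. The indicator names, synthetic data, and resulting coefficients are illustrative assumptions, not the study's results:

```python
# Sketch: logistic regression of project success on estimation-practice factors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 120
# Hypothetical binary practice indicators per project.
pm_in_negotiation = rng.integers(0, 2, n)
requirements_known = rng.integers(0, 2, n)
staff_added_late = rng.integers(0, 2, n)
X = np.column_stack([pm_in_negotiation, requirements_known, staff_added_late])

# Synthetic outcome loosely following the reported direction of effects.
logit = -0.5 + 1.2 * pm_in_negotiation + 1.0 * requirements_known - 1.5 * staff_added_late
success = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, success)
for name, coef in zip(["PM in negotiation", "requirements known", "staff added late"],
                      model.coef_[0]):
    print(f"{name:>20}: coefficient {coef:+.2f}")
print("P(success | good practices):",
      model.predict_proba([[1, 1, 0]])[0, 1].round(2))
```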

5.
Context: Software has been developed since the 1960s but the success rate of software development projects is still low. During the development of software, the probability of success is affected by various practices or aspects. To date, it is not clear which of these aspects are more important in influencing project outcome. Objective: In this research, we identify aspects which could influence project success, build prediction models based on the aspects using data collected from multiple companies, and then test their performance on data from a single organization. Method: A survey-based empirical investigation was used to examine variables and factors that contribute to project outcome. Variables that were highly correlated to project success were selected and the set of variables was reduced to three factors by using principal components analysis. A logistic regression model was built for both the set of variables and the set of factors, using heterogeneous data collected from two different countries and a variety of organizations. We tested these models by using a homogeneous hold-out dataset from one organization. We used receiver operating characteristic (ROC) analysis to compare the performance of the variable-based and factor-based models when applied to the homogeneous dataset. Results: We found that using raw variables or factors in the logistic regression models did not make any significant difference in predictive capability. The prediction accuracy of these models is more balanced when the cut-off is set to the ratio of successes to failures in the datasets used to build the models. We found that the raw variable and factor-based models predict significantly better than random chance. Conclusion: We conclude that an organization wishing to estimate whether a project will succeed or fail may use a model created from heterogeneous data derived from multiple organizations.
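A minimal sketch of the variable-based versus factor-based comparison outlined in the Method and Results: reduce correlated survey variables to a few principal components, fit logistic models on both representations, and compare them by ROC AUC on a hold-out set. The data shapes, number of components, and outcome rule are illustrative assumptions:

```python
# Sketch: compare raw-variable and PCA-factor logistic models by ROC AUC.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))                       # hypothetical survey variables
y = (X[:, :3].sum(axis=1) + rng.normal(size=200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Variable-based model.
auc_vars = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                         .fit(X_tr, y_tr).predict_proba(X_te)[:, 1])

# Factor-based model: three principal components stand in for the factors.
pca = PCA(n_components=3).fit(X_tr)
auc_factors = roc_auc_score(y_te, LogisticRegression(max_iter=1000)
                            .fit(pca.transform(X_tr), y_tr)
                            .predict_proba(pca.transform(X_te))[:, 1])

print(f"ROC AUC, raw variables: {auc_vars:.2f}  |  PCA factors: {auc_factors:.2f}")
```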

6.
Several tools have been developed for the estimation of software reliability. However, they are highly specialized in the approaches they implement and the particular phase of the software life-cycle in which they are applicable. There is an increasing need for a tool that can be used to track the quality of a software product during the software life-cycle, right from the architectural phase all the way up to the operational phase of the software. Also, the conventional techniques for software reliability evaluation, which treat the software as a monolithic entity, are inadequate to assess the reliability of heterogeneous systems, which consist of a large number of globally distributed components. Architecture-based approaches are essential to assess the reliability and performance of such systems. This paper presents the high-level design of a software reliability estimation and prediction tool (SREPT) that offers a unified framework consisting of techniques (including the architecture-based approach) to assist in the evaluation of software reliability during all phases of the software life-cycle.

7.
In an interesting paper, L.A. Laranjeira (see ibid., vol.6, no.5, p.510-22, 1990) describes a first attempt to understand cost estimation within an object-oriented environment. While the presented approach offers many interesting and useful ideas, it is, unfortunately, marred by several mathematical errors pertaining to statistics, exponential functions, and the nature of discrete vs. continuous data. These are discussed here, and more appropriate procedures are outlined.

8.
To make reasonable estimates of resources, costs, and schedules, software project managers need to be provided with models that furnish the essential framework for software project planning and control by supplying important management numbers concerning the state and parameters of the project that are critical for resource allocation. Understanding that software development is not a mechanistic process brings about the realization that parameters that characterize the development of software possess an inherent fuzziness, thus providing the rationale for the development of realistic models based on fuzzy set or neural theories. Fuzzy and neural approaches offer a key advantage over traditional modeling approaches in that they are model-free estimators. This article opens up the possibility of applying fuzzy estimation theory and neural networks for the purpose of software engineering project management and control, using Putnam's manpower buildup index (MBI) estimation model as an example. It is shown that the MBI selection process can be based upon 64 different fuzzy associative memory (FAM) rules. The same rules are used to generate 64 training patterns for a feedforward neural network. The fuzzy associative memory and neural network approaches are compared qualitatively through estimation surfaces. The FAM estimation surfaces are stepped, whereas those from the neural system are smooth. Also, the FAM system sets up much faster than the neural system. FAM rules obtained from logical antecedent-consequent pairs are maintained distinct, giving the user the ability to determine which FAM rule contributed how much membership activation to a concluded output.
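A minimal sketch of a FAM-style rule table for selecting a manpower buildup index level from two coarsely partitioned inputs; it illustrates why such a system produces stepped estimation surfaces. The input names, partitions, and 4×4 rule table are illustrative assumptions (the paper's system uses 64 rules over its own antecedents, with genuine fuzzy membership rather than the crisp lookup shown here):

```python
# Sketch: rule-table selection of an MBI level from two partitioned inputs.
import numpy as np

levels = ["very low", "low", "high", "very high"]

def partition(value, edges=(0.25, 0.5, 0.75)):
    """Map a value in [0, 1] to one of four coarse levels (crisp stand-in for fuzzification)."""
    return int(np.searchsorted(edges, value))

# rule_table[schedule_pressure_level][staff_availability_level] -> MBI level index
rule_table = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 2],
                       [1, 1, 2, 3],
                       [1, 2, 3, 3]])

def mbi_level(schedule_pressure, staff_availability):
    i = partition(schedule_pressure)
    j = partition(staff_availability)
    return levels[rule_table[i, j]]

print(mbi_level(0.9, 0.8))   # "very high": stepped output, unlike a smooth neural surface
```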

9.
Reliability is a key driver of safety-critical systems such as health-care systems and traffic controllers. It is also one of the most important quality attributes of the systems embedded into our surroundings, e.g. sensor networks that produce information for business processes. Therefore, the design decisions that have a great impact on the reliability of a software system, i.e. architecture and components, need to be thoroughly evaluated. This paper addresses software reliability evaluation during the design and implementation phases; it provides a coherent approach by combining both predicted and measured reliability values with heuristic estimates in order to facilitate a smooth reliability evaluation process. The approach contributes by integrating the component-level reliability evaluation activities (i.e. the heuristic reliability estimation, model-based reliability prediction and model-based reliability measuring of components) and the system-level reliability prediction activity to support the incremental and iterative development of reliable component-based software systems. The use of the developed reliability evaluation approach with the supporting tool chain is illustrated by a case study. The paper concludes with a summary of lessons learnt from the case studies.

10.
Accurate estimation of software project effort is crucial for successful management and control of a software project. Recently, multiple additive regression trees (MART) has been proposed as a novel advance in data mining that extends and improves the classification and regression trees (CART) model using stochastic gradient boosting. This paper empirically evaluates the potential of MART as a novel software effort estimation model when compared with recently published models, in terms of accuracy. The comparison is based on a well-known and respected NASA software project dataset. The results indicate that improved estimation accuracy of software project effort has been achieved using MART when compared with linear regression, radial basis function neural networks, and support vector regression models.
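A minimal sketch of MART-style estimation: gradient-boosted regression trees with subsampling (stochastic gradient boosting) compared against linear regression using MMRE, a common effort-estimation accuracy measure. The synthetic data and hyperparameters are illustrative assumptions; the paper's evaluation uses a NASA project dataset:

```python
# Sketch: gradient-boosted trees vs. linear regression for effort estimation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
size = rng.uniform(20, 400, 100)                  # hypothetical project size (KLOC)
complexity = rng.uniform(0.8, 1.4, 100)           # hypothetical cost-driver multiplier
X = np.column_stack([size, complexity])
y = 3.0 * size**0.95 * complexity + rng.normal(0, 10, 100)   # synthetic effort (person-months)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def mmre(actual, predicted):
    """Mean magnitude of relative error."""
    return np.mean(np.abs(actual - predicted) / actual)

mart = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                 max_depth=2, subsample=0.7, random_state=0)
for name, model in [("MART", mart), ("linear regression", LinearRegression())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MMRE = {mmre(y_te, pred):.2f}")
```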

11.
During software system evolution, software architects intuitively trade off the different architecture alternatives for their extra-functional properties, such as performance, maintainability, reliability, security, and usability. Researchers have proposed numerous model-driven prediction methods based on queuing networks or Petri nets, which claim to be more cost-effective and less error-prone than current practice. Practitioners are reluctant to apply these methods because of the unknown prediction accuracy and work effort. We have applied a novel model-driven prediction method called Q-ImPrESS on a large-scale process control system from ABB consisting of several million lines of code. This paper reports on the achieved performance prediction accuracy and reliability prediction sensitivity analyses as well as the effort in person hours for achieving these results.

12.

Context

Multiagent systems (MAS) allow complex systems to be developed in which autonomous and heterogeneous entities interact. Currently, there are a great number of methods and frameworks for developing MAS. The selection of one or another development environment is a crucial part of the development process. Therefore, the evaluation and comparison of MAS software engineering techniques is necessary in order to make the selection of the development environment easier.

Objective

The main goal of this paper is to define an evaluation framework that will help in facilitating, standardizing, and simplifying the evaluation, analysis, and comparison of MAS development environments. Moreover, the final objective of the proposed tool is to provide a repository of the most commonly used MAS software engineering methods and tools.

Method

The proposed framework analyzes methods and tools through a set of criteria that are related to both system engineering dimensions and MAS features. Also, the support for developing organizational and service-oriented MAS is studied. This framework is implemented as an online application to improve its accessibility.

Results

In this paper, we present Masev, which is an evaluation framework for MAS software engineering. It allows MAS methods, techniques and environments to be analyzed and compared. A case study of the analysis of four methodologies is presented.

Conclusion

It is concluded that Masev simplifies the evaluation and comparison task and summarizes the most important issues for developing MAS, organizational MAS, and service-oriented MAS. Therefore, it could help developers select the most appropriate MAS method and tools for developing a specific system, and it could be used by MAS software engineering developers to detect deficiencies in their methods and tools. Also, developers of new tools can use this application as a way to publish their tools and demonstrate their contributions to the state of the art.

13.
We propose a model for describing and predicting the parallel performance of a broad class of parallel numerical software on distributed memory architectures. The purpose of this model is to allow reliable predictions to be made for the performance of the software on large numbers of processors of a given parallel system, by only benchmarking the code on small numbers of processors. Having described the methods used, and emphasized the simplicity of their implementation, the approach is tested on a range of engineering software applications that are built upon the use of multigrid algorithms. Despite their simplicity, the models are demonstrated to provide both accurate and robust predictions across a range of different parallel architectures, partitioning strategies and multigrid codes. In particular, the effectiveness of the predictive methodology is shown for a practical engineering software implementation of an elastohydrodynamic lubrication solver.

14.
Performance metrics can be predicted with appropriate performance models and evaluation algorithms. The goal of our work is to adapt the Mean-Value Analysis evaluation algorithm to model the behavior of the thread pool. The computation time and the computational complexity of the proposed algorithm have been provided. The limits of the response time and throughput sequences computed by the novel algorithm have been determined. It has been shown that the proposed algorithm can be applied to performance prediction of web-based software systems in the ASP.NET environment. The proposed algorithm has been validated and the correctness of performance prediction with the novel algorithm has been verified with performance measurements. Error analysis has been performed to verify the correctness of performance prediction.
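A minimal sketch of exact Mean-Value Analysis for a closed queueing network, the kind of model adapted above to a bounded thread pool. The stations, service demands, and thread counts are illustrative assumptions, not the paper's ASP.NET model:

```python
# Sketch: exact Mean-Value Analysis (MVA) for single-server stations, no think time.
def mva(service_demands, n_customers):
    """service_demands[k] = average service demand at station k (seconds).
    Returns (response_time, throughput) for each population 1..n_customers."""
    k = len(service_demands)
    queue = [0.0] * k                     # mean queue lengths, initially empty
    results = []
    for n in range(1, n_customers + 1):
        # Residence time at each station with n customers in the system.
        resid = [d * (1.0 + q) for d, q in zip(service_demands, queue)]
        r = sum(resid)                    # system response time
        x = n / r                         # system throughput
        queue = [x * w for w in resid]    # Little's law per station
        results.append((r, x))
    return results

# Example: CPU-like and I/O-like stations serving requests from a bounded thread pool.
for n, (r, x) in enumerate(mva([0.02, 0.05], 8), start=1):
    print(f"threads={n}: response time {r*1000:.1f} ms, throughput {x:.1f} req/s")
```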

15.
By employing the Orthogonal Defect Classification scheme, the authors are able to support management with a firm handle on technical decision making. Through the extensive capture and analysis of defect semantics, one can obtain information on project management, test effectiveness, reliability, quality, and customer usage. The article describes three real-life case studies and demonstrates the applicability of the techniques.

16.
Pfleeger, S.L., Fenton, N., Page, S. Computer, 1994, 27(9): 71-79
The authors report on the results of the Smartie project (Standards and Methods Assessment Using Rigorous Techniques in Industrial Environments), a collaborative effort to propose a widely applicable procedure for the objective assessment of standards used in software development. We hope that, for a given environment and application area, Smartie will enable the identification of standards whose use is most likely to lead to improvements in some aspect of software development processes and products. In this article, we describe how we verified the practicality of the Smartie framework by testing it with corporate partners.

17.
Evaluating software complexity measures
A set of properties of syntactic software complexity measures is proposed to serve as a basis for the evaluation of such measures. Four known complexity measures are evaluated and compared using these criteria. This formalized evaluation clarifies the strengths and weaknesses of the examined complexity measures, which include the statement count, cyclomatic number, effort measure, and data flow complexity measures. None of these measures possesses all nine properties, and several are found to fail to possess particularly fundamental properties; this failure calls into question their usefulness in measuring syntactic complexity.
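A minimal sketch of one of the measures evaluated above: McCabe's cyclomatic number V(G) = E − N + 2P computed from a control-flow graph. The example graph is an illustrative assumption:

```python
# Sketch: cyclomatic complexity of a small, hypothetical control-flow graph.
def cyclomatic_number(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a function with one if/else and one loop (hypothetical).
nodes = ["entry", "if", "then", "else", "loop", "exit"]
edges = [("entry", "if"), ("if", "then"), ("if", "else"),
         ("then", "loop"), ("else", "loop"),
         ("loop", "loop"), ("loop", "exit")]

print("cyclomatic number:", cyclomatic_number(edges, nodes))   # 7 - 6 + 2 = 3
```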

18.
With at least 80 process simulation software packages on the market, a method of evaluation for prospective users was long overdue. Expert opinion on the form of an evaluation test was collated and extended into the form of a test flowsheet. Details of the flowsheet are given with a step-by-step guide to its use.

19.
Computers & Education, 1988, 12(1): 133-139
Broadcasting organisations have considerable expertise in the evaluation of television and radio programmes, and U.K. broadcasters are held in particularly high esteem in this respect. They are, however, on rather less secure ground when it comes to multimedia resources which attempt to integrate broadcast and computer-based materials. The project described here was sponsored by the Independent Broadcasting Authority (IBA) and was carried out in the West Midlands region of England during the summer of 1986. It was undertaken to develop guidelines for the evaluation of broadcast-linked software and to provide insights for the creators of new multimedia materials. The research focussed on two primary education packages, in the areas of mathematics and history respectively, produced by Independent Television Companies. Each package (television programmes and associated courseware) was evaluated in collaboration with a team of teachers from six different schools. The evaluation process lasted a full school term (3 months). Some details of the process are discussed. It was concluded that there was strong potential support for multimedia educational packages incorporating broadcast television, computer software and other media. This conclusion is related to the growing field of interactive videodisc.

20.
A number of software cost estimation methods have been presented in the literature over the past decades. Analogy based estimation (ABE), which is essentially a case based reasoning (CBR) approach, is one of the most popular techniques. In order to improve the performance of ABE, many previous studies proposed effective approaches to optimize the weights of the project features (feature weighting) in its similarity function. However, ABE is still criticized for the low prediction accuracy, the large memory requirement, and the expensive computation cost. To alleviate these drawbacks, in this paper we propose the project selection technique for ABE (PSABE), which reduces the whole project base into a small subset that consists only of representative projects. Moreover, PSABE is combined with feature weighting to form FWPSABE for a further improvement of ABE. The proposed methods are validated on four datasets (two real-world sets and two artificial sets) and compared with conventional ABE, feature weighted ABE (FWABE), and machine learning methods. The promising results indicate that the project selection technique could significantly improve analogy based models for software cost estimation.
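A minimal sketch of the analogy-based estimation core described above: find the k most similar historical projects under a weighted Euclidean distance and average their efforts. The feature names, weights, toy project base, and k are illustrative assumptions; FWABE would tune the weights, and PSABE would first replace the full project base with a small representative subset:

```python
# Sketch: analogy-based effort estimation with weighted Euclidean similarity.
import numpy as np

# Hypothetical historical project base: (size_kloc, team_size, effort_pm)
projects = np.array([[ 20,  3,  45],
                     [ 50,  5, 120],
                     [ 80,  6, 210],
                     [120,  9, 400],
                     [200, 12, 700]], dtype=float)
weights = np.array([0.7, 0.3])          # feature weights (FWABE would optimize these)

def estimate_effort(new_project, base, w, k=2):
    features, efforts = base[:, :2], base[:, 2]
    # Normalise features so the weighted distance is scale-free.
    scale = features.max(axis=0) - features.min(axis=0)
    diffs = (features - new_project) / scale
    dist = np.sqrt((w * diffs**2).sum(axis=1))
    nearest = np.argsort(dist)[:k]       # indices of the k closest analogues
    return efforts[nearest].mean()

print(f"estimated effort: {estimate_effort(np.array([70, 6]), projects, weights):.0f} person-months")
```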
