Found 20 similar documents (search time: 15 ms)
1.
Magne Jørgensen Torleif Halkjelsvik 《Journal of Systems and Software》2010,83(1):29-36
In this paper we study the effects of a change from the traditional request “How much effort is required to complete X?” to the alternative “How much can be completed in Y work-hours?”. Studies 1 and 2 report that software professionals receiving the alternative format provided much lower, and presumably more optimistic, effort estimates of the same software development work than those receiving the traditional format. Studies 3 and 4 suggest that the effect belongs to the family of anchoring effects. An implication of our results is that project managers and clients should avoid the alternative estimation request format.
2.
Vahid Khatibi Bardsiri Dayang Norhayati Abang Jawawi Amid Khatibi Bardsiri Elham Khatibi 《Engineering Applications of Artificial Intelligence》2013,26(10):2624-2640
Accurate estimation of software development effort is strongly associated with the success or failure of software projects. The clear lack of convincing accuracy and flexibility in this area has attracted the attention of researchers over the past few years. Despite improvements achieved in effort estimating, there is no strong agreement as to which individual model is the best. Recent studies have found that an accurate estimation of development effort in software projects is unreachable in global space, meaning that proposing a high performance estimation model for use in different types of software projects is likely impossible. In this paper, a localized multi-estimator model, called LMES, is proposed in which software projects are classified based on underlying attributes. Different clusters of projects are then locally investigated so that the most accurate estimators are selected for each cluster. Unlike prior models, LMES does not rely on only one individual estimator in a cluster of projects. Rather, an exhaustive investigation is conducted to find the best combination of estimators to assign to each cluster. The investigation domain includes 10 estimators combined using four combination methods, which results in 4017 different combinations. The ISBSG, Maxwell and COCOMO datasets, comprising a total of 573 real software projects, are utilized for evaluation purposes. The promising results show that estimation accuracy is improved through localization of the estimation process and allocation of appropriate estimators. Besides increased accuracy, the significant contribution of LMES is its adaptability and flexibility to deal with the complexity and uncertainty that exist in the field of software development effort estimation.
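The localization idea described in this abstract can be illustrated with a toy sketch (all function names and data below are hypothetical, not the authors' LMES implementation): projects are grouped into clusters, and each cluster keeps whichever candidate estimator is most accurate on it.

```python
# Illustrative sketch of localized estimator selection (hypothetical code,
# not the authors' LMES): per cluster, keep the most accurate candidate.

def mean_abs_error(estimator, projects):
    """Mean absolute error of an estimator over (size, actual_effort) pairs."""
    return sum(abs(estimator(size) - actual) for size, actual in projects) / len(projects)

def select_local_estimators(clusters, candidates):
    """For each cluster of projects, pick the candidate with the lowest error."""
    return {
        name: min(candidates, key=lambda est: mean_abs_error(est, projects))
        for name, projects in clusters.items()
    }

# Toy example: two clusters with different effort/size behaviour.
clusters = {
    "small": [(10, 21), (12, 25), (8, 17)],        # roughly effort = 2 * size
    "large": [(100, 520), (120, 610), (90, 460)],  # roughly effort = 5 * size
}
candidates = [lambda s: 2 * s, lambda s: 5 * s]
chosen = select_local_estimators(clusters, candidates)
```

LMES goes further by searching over combinations of estimators per cluster; the sketch only shows the single-estimator selection step.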
3.
《Information and Software Technology》2014,56(9):1063-1075
Context
Most research in software effort estimation has not considered chronology when selecting projects for training and testing sets. A chronological split represents the use of a project's starting and completion dates, such that any model that estimates effort for a new project p only uses as training data projects that were completed prior to p's start. Four recent studies investigated the use of chronological splits, using moving windows wherein only the most recent projects completed prior to a project's starting date were used as training data. The first three studies (S1–S3) found some evidence in favor of using windows; they all defined window sizes as fixed numbers of recent projects. In practice, we suggest that estimators think in terms of elapsed time rather than the size of the data set when deciding which projects to include in a training set. In the fourth study (S4) we showed that the use of windows based on duration can also improve estimation accuracy.
Objective
This paper's contribution is to extend S4 using an additional dataset, and to also investigate the effect on accuracy when using moving windows of various durations.
Method
Stepwise multivariate regression was used to build prediction models, using all available training data, and also using windows of various durations to select training data. Accuracy was compared based on absolute residuals and MREs; the Wilcoxon test was used to check statistical significance between results. Accuracy was also compared against estimates derived from windows containing fixed numbers of projects.
Results
Neither fixed-size nor fixed-duration windows provided superior estimation accuracy in the new data set.
Conclusions
Contrary to intuition, our results suggest that it is not always beneficial to exclude old data when estimating effort for new projects. When windows are helpful, windows based on duration are effective.
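The duration-based windows studied in this abstract amount to a training-set filter on completion dates. A minimal sketch (field names and data are invented for illustration):

```python
# Hypothetical sketch of duration-based window selection: train only on
# projects completed before the new project's start and within window_days of it.
from datetime import date

def training_window(history, new_start, window_days):
    """Return projects completed before new_start and within window_days of it."""
    return [
        p for p in history
        if p["completed"] < new_start
        and (new_start - p["completed"]).days <= window_days
    ]

history = [
    {"name": "A", "completed": date(2012, 1, 15)},
    {"name": "B", "completed": date(2013, 6, 1)},
    {"name": "C", "completed": date(2013, 11, 20)},
]
recent = training_window(history, date(2014, 1, 1), window_days=365)
# Only B and C fall inside a one-year window.
```

A fixed-size window would instead sort by completion date and keep the N most recent projects, regardless of elapsed time.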
4.
An IOP Model for Software Development Effort Estimation
Software development effort estimation can effectively support a range of tasks related to organizational decision-making and project management. To serve different estimation goals, an IOP model for effort estimation was built by extending the COCOMO II cost drivers and performing regression analysis on recent software project data from home and abroad. Within a unified framework, the model performs size-based software development effort estimation at three levels (industry, organization, and project characteristics) to meet the different estimation goals of the software industry, software organizations, and specific software projects, such as project bidding, managing an organization's multiple projects, and managing an individual software project. Finally, several examples of applying the IOP model are given.
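The IOP model's own coefficients are not reproduced here, but the size-based estimation it builds on follows the familiar COCOMO-style power law, effort = A * size^B. A minimal sketch, using the published COCOMO 81 organic-mode coefficients purely for illustration:

```python
# Generic COCOMO-style size-based effort equation (not the IOP model itself).
# Defaults are the COCOMO 81 "organic mode" coefficients: A = 2.4, B = 1.05;
# effort is in person-months, size in thousands of lines of code (KLOC).

def effort_person_months(kloc, a=2.4, b=1.05):
    return a * kloc ** b

# A 10 KLOC organic project comes out at roughly 27 person-months.
pm = effort_person_months(10)
```

Calibrating A and B (and adding cost-driver multipliers, as COCOMO II and the IOP model do) is what regression over historical project data provides.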
5.
Systematic literature review of machine learning based software development effort estimation models
Jianfeng Wen Shixian Li Zhiyong Lin Yong Hu Changqin Huang 《Information and Software Technology》2012,54(1):41-59
Context
Software development effort estimation (SDEE) is the process of predicting the effort required to develop a software system. In order to improve estimation accuracy, many researchers have proposed machine learning (ML) based SDEE models (ML models) since the 1990s. However, there has been no attempt to analyze the empirical evidence on ML models in a systematic way.
Objective
This research aims to systematically analyze ML models from four aspects: type of ML technique, estimation accuracy, model comparison, and estimation context.
Method
We performed a systematic literature review of empirical studies on ML models published in the last two decades (1991-2010).
Results
We have identified 84 primary studies relevant to the objective of this research. After investigating these studies, we found that eight types of ML techniques have been employed in SDEE models. Overall, the estimation accuracy of these ML models is close to the acceptable level and is better than that of non-ML models. Furthermore, different ML models have different strengths and weaknesses and thus favor different estimation contexts.
Conclusion
ML models are promising in the field of SDEE. However, the application of ML models in industry is still limited, so more effort and incentives are needed to facilitate their application. To this end, based on the findings of this review, we provide recommendations for researchers as well as guidelines for practitioners.
6.
The quick delivery of a functionally truncated product is one of the most common results in iterative development, which has become the predominant development approach. One of its drawbacks is the appearance of incomplete artifacts between iterations. Consequently, well-known size-estimation methods cannot be used in iterative development. This paper addresses the problem of size estimation in iterative development. We present a novel approach that enables early size estimation using Unified Modeling Language (UML) artifacts. The approach incorporates self-improvement steps that increase the estimation accuracy in subsequent iterations. The demonstration of its applicability and research results are also presented. The results anticipate the possibility of a significant improvement in size and effort estimates by applying the approach presented here.
7.
Estimation by analogy (EBA) predicts effort for a new project by aggregating effort information of similar projects from a given historical data set. Existing research results have shown that a careful selection and weighting of attributes may improve the performance of the estimation methods. This paper continues along that research line and considers weighting of attributes in order to improve the estimation accuracy. More specifically, the impact of weighting (and selection) of attributes is studied as extensions to our former EBA method AQUA, which has shown promising results and also allows estimation in the case of data sets that have non-quantitative attributes and missing values. The new resulting method is called AQUA+.
For attribute weighting, a qualitative analysis pre-step using rough set analysis (RSA) is performed. RSA is a proven machine learning technique for classification of objects. We exploit the RSA results in different ways and define four heuristics for attribute weighting. AQUA+ was evaluated in two ways: (1) comparison between AQUA+ and AQUA, along with the comparative analysis between the proposed four heuristics for AQUA+; (2) comparison of AQUA+ with other EBA methods. The main evaluation results are: (1) better estimation accuracy was obtained by AQUA+ compared to AQUA over all six data sets; and (2) AQUA+ obtained better results than, or very close to, those of other EBA methods for the three data sets applied to all the EBA methods.
In conclusion, the proposed attribute weighting method using RSA can improve the estimation accuracy of the EBA method AQUA+ according to the empirical studies over six data sets. Testing more data sets is necessary to obtain results that are more statistically significant.
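The attribute-weighted analogy retrieval at the core of this line of work can be sketched as weighted nearest-neighbour estimation. This is a hypothetical illustration only: AQUA+'s RSA-derived weights and its handling of non-quantitative attributes and missing values are not reproduced, and the weights below are simply given.

```python
# Sketch of estimation by analogy with attribute weights (hypothetical;
# weights would come from a pre-step such as rough set analysis).
import math

def weighted_distance(a, b, weights):
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def estimate_by_analogy(new_attrs, history, weights, k=2):
    """history: list of (attribute_vector, effort) pairs; returns mean effort
    of the k nearest historical projects under the weighted distance."""
    nearest = sorted(history, key=lambda h: weighted_distance(new_attrs, h[0], weights))[:k]
    return sum(effort for _, effort in nearest) / k

history = [((10, 3), 100), ((12, 3), 120), ((50, 8), 500)]
weights = (1.0, 0.5)  # hypothetical per-attribute weights
est = estimate_by_analogy((11, 3), history, weights, k=2)
# The two small projects are nearest, so est = (100 + 120) / 2 = 110.0
```

Setting a weight to zero performs attribute selection, which is why weighting subsumes selection in this framework.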
8.
The ability to accurately and consistently estimate software development effort is required by project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation error. A software effort estimation model which adopts a fuzzy inference method provides a solution that fits the uncertain and vague properties of software effort drivers. The present paper proposes a fuzzy neural network (FNN) approach for embedding an artificial neural network into fuzzy inference processes in order to derive software effort estimates. The artificial neural network is utilized to determine the significant fuzzy rules in the fuzzy inference processes. We demonstrated our approach using the 63 historical project data points in the well-known COCOMO model. Empirical results showed that applying the FNN for software effort estimation resulted in a slightly smaller mean magnitude of relative error (MMRE) and probability of a project having a relative error of less than or equal to 0.25 (Pred(0.25)) as compared with the results obtained by just using an artificial neural network and the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.
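The two accuracy measures named in this abstract can be stated precisely in code: MMRE is the mean of |actual - estimate| / actual over all projects, and Pred(0.25) is the fraction of projects whose relative error is at most 0.25.

```python
# MMRE and Pred(level): standard accuracy measures for effort estimation.

def mmre(actuals, estimates):
    """Mean magnitude of relative error."""
    return sum(abs(a - e) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, level=0.25):
    """Fraction of projects with relative error <= level."""
    hits = sum(1 for a, e in zip(actuals, estimates) if abs(a - e) / a <= level)
    return hits / len(actuals)

actuals = [100, 200, 400]
estimates = [110, 150, 410]
# Relative errors are 0.1, 0.25, 0.025, so mmre = 0.125 and pred(0.25) = 1.0
```

Lower MMRE and higher Pred(0.25) indicate better accuracy, which is why the two are usually reported together.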
9.
As software becomes more complex and its scope dramatically increases, the importance of research on developing methods for estimating software development effort has perpetually increased. Such accurate estimation has a prominent impact on the success of projects. Of the numerous methods for estimating software development effort that have been proposed, the line-of-code (LOC)-based constructive cost model (COCOMO), function point-based regression model (FP), neural network model (NN), and case-based reasoning (CBR) are among the most popular. Recent research has tended to focus on the use of function points (FPs) in estimating software development effort; however, a precise estimation should not only consider the FPs, which represent the size of the software, but should also include various elements of the development environment. Therefore, this study is designed to analyze the FPs and the development environments of recent software development cases. The primary purpose of this study is to propose a precise method of estimation that takes into account and places emphasis on the various software development elements. This research proposes and evaluates a neural network-based software development estimation model.
10.
This paper describes an empirical study undertaken to investigate the quantitative aspects of the phenomenon of requirements elaboration which deals with transformation of high-level goals into low-level requirements. Prior knowledge of the magnitude of requirements elaboration is instrumental in developing early estimates of a project’s cost and schedule. This study examines the data on two different types of goals and requirements - capability and level of service (LOS) - of 20 real-client, graduate-student, team projects done at USC. Metrics for data collection and analyses are described along with the utility of results they produce. Besides revealing a marked difference between the elaboration of capability goals and the elaboration of LOS goals, these results provide some initial relationships between the nature of projects and their ratios of elaboration of capability goals into capability or functional requirements.
11.
AME Cuelenaere MJIM van Genuchten FJ Heemstra 《Information and Software Technology》1987,29(10):558-567
Calibration has been found to be difficult in practice. Wide experience in using the estimation model is necessary; experience which the beginner naturally lacks. This paper indicates why it is important to calibrate a model and how the inexperienced user can be helped by an expert system. In addition, the development of, and experience with, the prototype of an expert system are described. The system dealt with here is intended for the calibration of the PRICE SP estimation model.
12.
Recently, we developed a technique that allows semi-automatic estimation of anthropometry and pose from a single image. However, estimation was limited to a class of images for which an adequate number of human body segments were almost parallel to the image plane. In this paper, we present a generalization of that estimation algorithm that exploits pairwise geometric relationships of body segments to allow estimation from a broader class of images. In addition, we refine our search space by constructing a fully populated discrete hyper-ellipsoid of stick human body models in order to capture the variance of the statistical anthropometric information. As a result, a better initial estimate can be computed by our algorithm and thus the number of iterations needed during minimization is reduced tenfold. We present our results over a variety of images to demonstrate the broad coverage of our algorithm. Published online: 1 September 2003
13.
This paper suggests that a software process can be viewed as an instance of a business process. Therefore software process improvement might be achieved by applying the concepts of Business Process Re-engineering (BPR). BPR is introduced and the recent work of Jacobson, using object-oriented concepts to construct a BPR framework, is described. The paper critiques Jacobson's approach as being essentially reductionist, and presents an alternative approach, State-Behaviour Modelling (SBM), that utilizes systems principles in the analysis of problem situations, while generating object models. The application of SBM to model and improve a component of a software development process is presented.
14.
Rubén González Crespo Roberto Ferro Escobar Luis Joyanes Aguilar Sandra Velazco Andrés G. Castillo Sanz 《Expert systems with applications》2013,40(18):7381-7390
This paper describes the implementation of a virtual world based on GNU OpenSimulator. This program offers a great variety of Web 3.0 ways of working, as it makes it possible to visit web sites using avatars created for that purpose. Universities should be familiar with the creation of new metaverses. That is the reason why a new basic methodology is proposed for the creation of a course on expert systems within a metaverse in a virtual campus for e-learning. Besides the creation of a repository or island, it is necessary to measure the performance of the server dedicated to hosting the system as the number of users of the application grows. In order to forecast the behavior of such servers, ARIMA-based time series models are used. The auto-correlograms obtained are analyzed to formulate a statistical model as close to reality as possible.
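The paper fits ARIMA models with standard statistical tooling. As a minimal stand-in (an assumed simplification for illustration, not the authors' setup), the sketch below fits an AR(1) process, the simplest case of ARIMA(1,0,0), to a toy server-load series by least squares and forecasts one step ahead.

```python
# Hypothetical sketch: one-step-ahead AR(1) forecast of a server-load series.
# A full ARIMA analysis would also difference the series and choose orders
# from the auto-correlograms, as the paper describes.

def fit_ar1(series):
    """Least-squares slope and intercept of x[t] regressed on x[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

load = [10.0, 12.0, 14.0, 16.0, 18.0]  # toy 'concurrent users' series
phi, c = fit_ar1(load)
next_load = c + phi * load[-1]         # one-step-ahead forecast
```

On this perfectly linear toy series the fitted model is x[t] = x[t-1] + 2, so the forecast continues the trend.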
15.
Christos Kouroupetroglou Michail Salampasis Athanasios Manitsaris 《Universal Access in the Information Society》2007,6(3):273-283
This paper presents a “Semantic Web application framework” which allows different applications to be designed and developed for improving the accessibility of the World Wide Web (WWW).
The framework promotes the idea of creating a community of people federating into groups (ontology creators, annotators, user-agent developers, end-users), each playing a specific role, without the coordination of any central authority. The use of a specialised voice web browser for blind people, called SeEBrowser, is presented and discussed as an example of an accessibility tool developed based on the framework. SeEBrowser utilises annotations of web pages and provides browsing shortcuts. Browsing shortcuts are mechanisms which facilitate blind people in moving efficiently through various elements of a web page (e.g. functional elements such as forms, navigational aids, etc.) during the information-seeking process, hence operating effectively as a vital counterbalance to low accessibility. Finally, an experimental user study is presented and discussed which evaluates SeEBrowser with and without the use of browsing shortcuts.
16.
Up to now, the assessment of work-effort in software engineering is based on statistical methods. Among the best known are COCOMO (Boehm [2]) or SPQR (Jones [6]). Nevertheless it is generally recognized that many qualitative factors enter into the cost of development, such as effectiveness of the team, user's motivation, and accuracy of the specifications. We have designed a Decision Support System (DSS) for estimating the work-effort, in which the processing of the qualitative data is made by an expert system while a function points analysis provides the theoretical work-effort according to the type of software and the past experience. The evaluation is performed at two levels: global and detailed. The global evaluation is made at the beginning of the development according to the data that are, at this moment, available. The detailed evaluation takes place when the design of the software becomes more precise. The software manager can follow the evolution of the changes at the detailed level during the development.
In software development, project leaders mostly reason by using their past experience. It therefore follows that a DSS must contain a learning process. We have accordingly designed our system to record the data of the completed developments. These data serve for the new evaluations. At the end of each project, the learning module examines to what extent the already-recorded information must be updated. Thus our system combines statistical data and knowledge-based reasoning.
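The function-point step such a DSS relies on can be sketched with the standard IFPUG-style value adjustment factor. This is a generic illustration of function point analysis, not the paper's own FP variant, which is not specified here.

```python
# Standard (IFPUG-style) function point adjustment: the unadjusted count is
# scaled by a value adjustment factor built from 14 general system
# characteristics (GSCs), each rated 0-5.

def adjusted_function_points(ufp, gsc_ratings):
    assert len(gsc_ratings) == 14 and all(0 <= g <= 5 for g in gsc_ratings)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf

# 200 unadjusted FPs with every GSC rated 3: VAF = 0.65 + 0.42 = 1.07
afp = adjusted_function_points(200, [3] * 14)
```

The adjusted count would then feed a productivity table or regression model to obtain the theoretical work-effort the abstract mentions.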
17.
In ‘contextual learning theory’ three types of contextual conditions (differentiation of learning procedures and materials, integrated ICT support, and improvement of development and learning progress) are related to four aspects of the learning process (diagnostic, instructional, managerial, and systemic aspects). The resulting structure consists of 15 guidelines which are expected to improve instruction and learning across different situations. The present study was conducted to give concrete form to two general guidelines with respect to differentiation and five guidelines with respect to integrated ICT support. The products were a ‘pedagogical-didactic kernel structure’ and a general software prototype. In collaboration with three preschool teachers in The Netherlands, both products were used to give concrete form to a first guideline on improvement of development and learning progress in practice. This concerned an intake procedure on the estimation and use of children’s entry characteristics by parents and preschool teacher. Information is given about improvement experiences in early educational practice. Further research and development steps are discussed.
18.
The non-parametric k-nearest neighbour (k-NN) multi-source estimation method is commonly employed in forest inventories that use satellite images and field data. The method presumes the selection of a few estimation parameters. An important decision is the choice of the pixel-dependent geographical area from which the nearest field plots in the spectral space for each pixel are selected, the problem being that one spectral vector may correspond to several different ground data vectors. The weighting of different spectral components is an obvious problem when defining the distance metric in the spectral space.
The paper presents a new method. The first innovation is that the large-scale variation of forest variables is used as ancillary data that are added to the variables of the multi-source k-NN estimation. These data are assigned weights in a way similar to the spectral information of satellite images when defining the applied distance metric. The second innovation is that “optimal” weights for spectral data, as well as ancillary data, are computed by means of a genetic algorithm. Tests with practical forest inventory data show that the method performs noticeably better than other applications of k-NN estimation methods in forest inventories, and that the problem of biases in the species volume predictions can, for example, be almost completely overcome with this new approach.
19.
A method is proposed for quantifying differences between multichannel EEG coherence networks represented by functional unit (FU) maps. The approach is based on inexact graph matching for attributed relational graphs and graph averaging, adapted to FU-maps. The mean of a set of input FU-maps is defined in such a way that it not only represents the mean group coherence during a certain task or condition but also to some extent displays individual variations in brain activity. The definition of a mean FU-map relies on a graph dissimilarity measure which takes into account both node positions and node or edge attributes. A visualization of the mean FU-map is used with a visual representation of the frequency of occurrence of nodes and edges in the input FUs. This makes it possible to investigate which brain regions are more commonly involved in a certain task, by analysing the occurrence of a FU of the mean graph in the input FUs. Furthermore, our method gives the possibility to quantitatively compare individual FU-maps by computing their distance to the mean FU-map. The method is applied to the analysis of EEG coherence networks in two case studies, one on mental fatigue and one on patients with corticobasal ganglionic degeneration (CBGD). The method is proposed as a preliminary step towards a complete quantitative comparison, and the real benefit of its application is still to be proven.