Similar Literature
20 similar documents found (search time: 546 ms)
1.
2.
Web development is moving towards model-driven processes whose goal is to develop Web applications at a higher level of abstraction, based on models and model transformations. This gives the Web project manager new opportunities to make early estimates of the size of a Web application and the effort required to produce it, based on its conceptual models. In the last few years, several studies on size and effort estimation have been performed; however, none address effort estimation in model-driven Web development. In this paper, we present the validation of a model-based size measure (OO-HFP) for Web effort estimation in the context of a model-driven Web development method. The validation is performed by comparing the prediction accuracy that OO-HFP provides with the accuracy provided by the standard function point analysis (FPA) method. The results of the study (using industrial data gathered from 31 Web projects) show that the effort estimates obtained for projects sized with OO-HFP are more accurate than those obtained using the standard FPA method. This suggests that, in a model-driven development approach, the size measure obtained from the conceptual model of a Web application is a suitable predictor of effort.
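As a rough, self-contained illustration of such a comparison (a sketch under our own assumptions, not the paper's protocol), one can fit a log-log regression effort = a · size^b per size measure on historical projects and compare the usual accuracy statistics MMRE and Pred(25); every figure below is invented.

```python
import numpy as np

def fit_power_model(size, effort):
    """Fit log(effort) = log(a) + b*log(size) by least squares."""
    b, log_a = np.polyfit(np.log(size), np.log(effort), 1)
    return np.exp(log_a), b

def accuracy(size, effort, a, b):
    """Return MMRE and Pred(25) for the fitted model."""
    mre = np.abs(effort - a * size ** b) / effort
    return mre.mean(), (mre <= 0.25).mean()

# Hypothetical projects: OO-HFP size, FPA size, actual effort (person-hours).
oohfp  = np.array([120., 300., 210., 540., 95.])
fpa    = np.array([150., 260., 250., 480., 130.])
effort = np.array([800., 2100., 1500., 4200., 600.])

for name, size in [("OO-HFP", oohfp), ("FPA", fpa)]:
    a, b = fit_power_model(size, effort)
    mmre, pred25 = accuracy(size, effort, a, b)
    print(f"{name}: MMRE={mmre:.2f}, Pred(25)={pred25:.2f}")
```

A lower MMRE and a higher Pred(25) for one size measure would indicate it is the better effort predictor on that dataset.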

3.
Systems of systems exhibit characteristics that make it difficult to model and predict their overall performance, including operational independence, emergent behaviour, and evolutionary development. Within the autonomous defence systems context these aspects become increasingly critical, as constraints on the performance of the final system are typically driven by hard constraints on space, weight and power. System execution modelling languages and tools permit early prediction of the performance of model-driven systems; however, the focus to date has been on understanding the performance of a model rather than determining whether it meets performance requirements, and only subsequently carrying out analysis to reveal the causes of any requirement violations. Moreover, such an analysis is even more difficult when applied to several systems cooperating to achieve a common goal: a system of systems. In this article, we propose an integrated approach to performance prediction of model-driven real-time embedded defence systems and systems of systems. Our architectural prototyping system supports a scenario-driven experimental platform for evaluating model suitability within a set of deployment and real-time performance constraints. We present an overview of our performance prediction system, demonstrating the integration of modelling, execution and performance analysis, and discuss a case study to illustrate our approach.

4.
Prediction of software development effort is a key task for the effective management of any software organization, and the accuracy and reliability of the prediction mechanisms are equally important. Neural network based models are competitive with traditional regression and statistical models for software effort estimation. This comprehensive article covers the neural network based models for software effort estimation presented by various researchers. The review of twenty-one articles covers a range of features used for effort prediction. This survey aims to support research on effort prediction and to highlight the capabilities of neural network based models.
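For readers unfamiliar with the surveyed family of models, the following is a minimal, hypothetical sketch of a neural network effort estimator: a small multilayer perceptron mapping project features to effort. The features, data, and hyperparameters are invented for illustration and correspond to no particular surveyed model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical features per project: [KLOC, team size, complexity rating].
X = np.array([[10, 4, 2], [25, 6, 3], [8, 3, 1], [40, 10, 4], [15, 5, 2]], dtype=float)
y = np.array([520., 1400., 300., 2600., 750.])   # effort in person-hours

scaler = StandardScaler().fit(X)                  # NNs need scaled inputs
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

new_project = scaler.transform([[20, 5, 3]])
print(f"predicted effort: {model.predict(new_project)[0]:.0f} person-hours")
```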

5.
In chip design, model-driven FPGA design is currently one of the safer and more reliable approaches. However, model-driven FPGA design requires proving the consistency between the FPGA design model and the generated Verilog/VHDL code; at the same time, the correctness, reliability, and safety of the chip design are also critical. At present, consistency between model and code is mostly verified by simulation, which can hardly guarantee the reliability and safety of the design, and whose verification efficiency...

6.
Research on time-series-based software reliability prediction models
The failure data obtained during the software reliability testing phase are treated as a time series and decomposed at multiple scales, and the data at each scale are analyzed with a different time-series prediction model, yielding a multi-scale software reliability prediction model. Experiments on real data show that, compared with a single time-series prediction model, this model fits and forecasts well, with high prediction accuracy and good adaptability.
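A minimal sketch of the multi-scale idea, under our own simplifying assumptions (a smooth polynomial trend stands in for the coarse scale rather than a formal multi-scale decomposition, and an AR(1) fit handles the fine-scale residual), with synthetic data:

```python
import numpy as np

t = np.arange(40, dtype=float)
series = 5 + 0.3 * t + np.sin(t / 3.0)           # synthetic inter-failure times

# Coarse scale: a smooth trend fit; fine scale: the oscillatory residual.
trend_coef = np.polyfit(t, series, 2)
trend = np.polyval(trend_coef, t)
resid = series - trend

# Forecast each scale with its own model, then recombine.
trend_next = np.polyval(trend_coef, t[-1] + 1)   # extrapolate the trend
phi = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
resid_next = phi * resid[-1]                     # AR(1) one-step forecast

print(f"next inter-failure time ~ {trend_next + resid_next:.2f}")
```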

7.
Much current software defect prediction work focuses on the number of defects remaining in a software system. In this paper, we present association rule mining based methods to predict defect associations and defect correction effort, to help developers detect software defects and assist project managers in allocating testing resources more effectively. We applied the proposed methods to the SEL defect data, consisting of more than 200 projects spanning more than 15 years. The results show that, for defect association prediction, the accuracy is very high and the false-negative rate is very low. Likewise, the accuracy for both defect isolation effort prediction and defect correction effort prediction is also high. We compared the defect correction effort prediction method with other methods (PART, C4.5, and Naive Bayes) and show that accuracy improves by at least 23 percent. We also evaluated the impact of support and confidence levels on prediction accuracy, false-negative rate, false-positive rate, and the number of rules. We found that higher support and confidence levels do not necessarily result in higher prediction accuracy, and that a sufficient number of rules is a precondition for high prediction accuracy.
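A minimal sketch, in the spirit of the method above but not reproducing it, of mining rules of the form "defect type A => defect type B" from per-module defect sets under support and confidence thresholds; the data is made up:

```python
from itertools import combinations
from collections import Counter

# Each entry: the set of defect types found together in one module.
modules = [
    {"interface", "logic"}, {"interface", "logic", "data"},
    {"logic", "data"}, {"interface", "logic"}, {"data"},
]
min_support, min_confidence = 0.4, 0.7

single = Counter(d for m in modules for d in m)
pair = Counter(frozenset(p) for m in modules for p in combinations(sorted(m), 2))
n = len(modules)

for p, cnt in pair.items():
    if cnt / n < min_support:                     # prune infrequent pairs
        continue
    for a in p:
        b = next(iter(p - {a}))
        conf = cnt / single[a]                    # P(b | a)
        if conf >= min_confidence:
            print(f"{a} => {b}  (support={cnt/n:.2f}, confidence={conf:.2f})")
```

As the abstract notes, raising `min_support` and `min_confidence` shrinks the rule set, which can hurt rather than help accuracy.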

8.
Modern society increasingly depends on the stable operation of software systems. Moreover, although these systems keep growing in scale and complexity, they must achieve high stability while the resources required for further development and maintenance must be reduced. To ease the challenge of software development, an effective method is needed to guide the allocation of resources within a software system. This paper proposes a method that determines the amount of effort to allocate to each component according to that component's contribution to system stability, so that a given level of system stability is reached with minimal effort. The influence of a component on system stability is assumed to involve two factors: the system structure and the component's reliability. The approach is called structure-based test-effort allocation optimization because it takes the system structure into account when allocating testing effort. Validation results show that this structure-based optimization is more effective than other test-effort allocation strategies.
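A minimal sketch of the structure-aware allocation idea, assuming a serial system structure (system reliability is the product of component reliabilities) and a simple exponential test-effectiveness model; both assumptions and all numbers are ours, not the paper's:

```python
import math

rel = [0.90, 0.95, 0.85]          # current component reliabilities
k   = [0.30, 0.10, 0.25]          # test effectiveness per component
target, effort = 0.90, 0

def system_rel(r):
    return math.prod(r)           # serial structure: product of reliabilities

def after_one_unit(r, i):
    """Reliability of component i after one more unit of test effort."""
    return 1 - (1 - r[i]) * math.exp(-k[i])

while system_rel(rel) < target:
    # Greedy step: spend one unit where the marginal system gain is largest.
    gains = []
    for i in range(len(rel)):
        trial = rel.copy()
        trial[i] = after_one_unit(rel, i)
        gains.append(system_rel(trial) - system_rel(rel))
    best = max(range(len(rel)), key=lambda i: gains[i])
    rel[best] = after_one_unit(rel, best)
    effort += 1

print(f"effort units: {effort}, final reliabilities: {[round(r, 3) for r in rel]}")
```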

9.
Reliable effort prediction remains an ongoing challenge for software engineers. Traditional approaches to effort prediction, such as models derived from historical data or expert opinion, are plagued with issues pertaining to their effectiveness and robustness. These issues are more pronounced when effort prediction is used during the early phases of the software development lifecycle. Recent works have demonstrated promising results obtained with the use of fuzzy logic. Fuzzy logic based effort prediction systems can deal better with the imprecision that characterizes the early phases of most software development projects, such as requirements development, whose effort predictors, along with their relationships to effort, are even more imprecise and uncertain than those of later development phases such as design. Fuzzy logic based prediction systems could produce even better estimates provided that the various fuzzy logic parameters and factors are set carefully. In this paper, we present an empirical study showing that the prediction accuracy of a fuzzy logic based effort prediction system is highly dependent on the system architecture, the corresponding parameters, and the training algorithms.
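To make the dependence on architecture and parameters concrete, here is a minimal Mamdani-style sketch of a fuzzy effort predictor; the membership functions, rules, and output centroids are illustrative choices, exactly the kind of settings whose values the study shows the accuracy depends on:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_effort(size_kloc):
    # Fuzzify the input size against three illustrative fuzzy sets.
    small  = tri(size_kloc, 0, 5, 30)
    medium = tri(size_kloc, 15, 35, 55)
    large  = tri(size_kloc, 45, 70, 95)
    # Rules: small -> low, medium -> moderate, large -> high effort;
    # each output set is summarized by its centroid (person-months).
    rules = [(small, 6.0), (medium, 24.0), (large, 80.0)]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else None            # weighted-centroid defuzzification

print(f"estimated effort: {predict_effort(25):.1f} person-months")
```

Shifting any membership breakpoint or centroid changes the estimate, which is why careful calibration (or training) of these parameters matters.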

10.
With the increasing maturity of model-driven tools and methods, new model-based analysis methods are being developed to support specific stakeholder concerns during the software lifecycle. This multiplication of models and their related analysis tools calls for solutions addressing the integration of MOF-based analysis methods. Current research on the integration of analysis methods has already addressed the extraction of the needed input data as well as the control and integration of the tools supporting the analysis execution. However, little attention has been paid to the integration of analysis results back into the initial model. We propose an MOF-based framework enabling the integration of analysis results that a) defines a meta-model capturing the integration requirements, b) provides an MOF meta-model extension mechanism with support for upward compatibility, and c) automatically generates a model transformation for model integration. We illustrate the use of our framework by integrating a reliability analysis method and a fault-tolerant reconfiguration method on the ABC/ADL software architecture. We applied the resulting analysis composition to the ECPerf JEE system.

11.
Not a day goes by that the general public does not come into contact with a real-time system. As their numbers and importance grow, so do the stakes for software developers. A failure in a critical application may result in great financial loss, or even loss of life. More effort must therefore be expended on analyzing the reliability and safety of such systems. Analysis of hardware components in critical applications has matured over the years, and commonly followed techniques have emerged. However, methods and techniques for analyzing the reliability and safety of the software part of critical applications are relatively new and still maturing. Yet the vulnerability of the system to software failures is on the rise and may (and in some cases does) exceed that to hardware failures. Software is not only becoming more prevalent in real-time systems, it is becoming a larger part of them, in the sense that the effort expended in designing and implementing the software is a growing proportion of the total.

12.
The performance of modern control methods, such as model predictive control, depends significantly on the accuracy of the system model. In practice, however, stochastic uncertainties are commonly present, resulting from modeling inaccuracies or external disturbances, and these can degrade control performance. This article reviews the literature on methods for predicting probabilistic uncertainties for nonlinear systems. Since a precise prediction of probability density functions entails high computational effort in the nonlinear case, the focus of this article is on approximation methods, which are of particular relevance in control engineering practice. The methods are classified with respect to their approximation type and their assumptions about the input and output distributions. Furthermore, the application of these prediction methods to stochastic model predictive control is discussed, including a literature review for nonlinear systems. Finally, the most important probabilistic prediction methods are evaluated numerically: the estimation accuracies of the methods are investigated first, and the performance of a stochastic model predictive controller with different prediction methods is then examined on multiple nonlinear systems, including the dynamics of an autonomous vehicle.
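A minimal sketch contrasting two of the approximation families such reviews classify, linearization (first-order Taylor, as in an EKF-style prediction) versus Monte Carlo sampling, for pushing a Gaussian belief through a toy nonlinear model x' = f(x); the dynamics and numbers are illustrative:

```python
import numpy as np

def f(x):
    return np.sin(x) + 0.5 * x                   # toy nonlinear dynamics

mu, var = 1.0, 0.4 ** 2                          # current Gaussian belief

# Linearization: propagate the mean through f, the variance via the Jacobian.
jac = np.cos(mu) + 0.5
lin_mu, lin_var = f(mu), jac ** 2 * var

# Monte Carlo: sample, push the samples through f, re-estimate the moments.
rng = np.random.default_rng(0)
samples = f(rng.normal(mu, np.sqrt(var), size=100_000))
mc_mu, mc_var = samples.mean(), samples.var()

print(f"linearized : mean={lin_mu:.3f}, var={lin_var:.4f}")
print(f"monte carlo: mean={mc_mu:.3f}, var={mc_var:.4f}")
```

The gap between the two outputs grows with the curvature of f over the belief's support, which is the basic accuracy-versus-cost trade-off the review examines.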

13.
Software reliability is a primary concern of software development organizations, and the exponentially increasing demand for reliable software calls for new modeling techniques. Small, unnoticed drifts in software can culminate in disaster, and early removal of such errors helps an organization improve the software's reliability while saving money, time, and effort. Many soft computing techniques are available for such critical problems, but selecting the appropriate technique is a major challenge. This paper proposes an efficient algorithm for predicting software reliability, implemented as a hybrid Neuro-Fuzzy Inference System and applied to test data. A comparison among different soft computing techniques is also performed. After training and testing on real-time data, the model achieved a mean relative error of 0.0060 and a mean absolute relative error of 0.0121. Compared with existing models, the proposed algorithm yields better results on both measures, supporting its reliability predictions. The technique is deliberately kept as simple as possible to make software reliability prediction practical.
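For reference, the two accuracy measures quoted above can be computed as follows; the signed mean relative error (MRE) reveals bias direction, while the mean absolute relative error (MARE) measures overall accuracy. The predicted and actual values below are invented:

```python
import numpy as np

actual    = np.array([12., 18., 25., 31., 40.])   # hypothetical observed failures
predicted = np.array([11.8, 18.3, 24.6, 31.5, 39.7])

rel_err = (actual - predicted) / actual
mre  = rel_err.mean()                 # signed: shows systematic over/under-prediction
mare = np.abs(rel_err).mean()         # unsigned: overall closeness of fit

print(f"MRE={mre:.4f}, MARE={mare:.4f}")
```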

14.
Modern manufacturing businesses increasingly engage in servitisation by offering advanced services along with physical products, creating "product–service systems". Information technology infrastructures, and especially software, are a critical part of modern service provision. However, software development in this context has not been investigated, and there are no development methods or tools specifically adapted to the task of creating software for servitised businesses in general, or manufacturing in particular. In this paper, we define the requirements for software engineering in servitised manufacturing. Based on these, we describe a model-driven software engineering workflow for servitised manufacturing that supports both structural and behavioural modelling of the service system. Furthermore, we elaborate on the architecture of an appropriate model-driven Integrated Development Environment (IDE). The proposed workflow and a prototype implementation of the IDE were evaluated in a set of industrial pilots, demonstrating improved communication and collaboration between participants in the software engineering process.

15.
An empirical study of predicting software faults with case-based reasoning
The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored, and the benefits of using metric-selection procedures for our CBR system are also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. We observe that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models performed better than models based on multiple linear regression.
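A minimal sketch (with synthetic metrics data, not the study's codebase) of the configuration the study found best: retrieve the k most similar past modules under the Mahalanobis distance and combine their fault counts with inverse-distance weighting:

```python
import numpy as np

# Past modules: software metrics (rows) and their known fault counts.
X = np.array([[120., 5., 3.], [300., 12., 8.], [80., 3., 1.],
              [210., 9., 5.], [400., 15., 11.]])
faults = np.array([2., 9., 0., 5., 14.])

cov_pinv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for stability

def mahalanobis(a, b):
    d = a - b
    return np.sqrt(max(d @ cov_pinv @ d, 0.0))

def predict(query, k=3, eps=1e-9):
    dist = np.array([mahalanobis(query, x) for x in X])
    idx = np.argsort(dist)[:k]                       # k nearest past cases
    w = 1.0 / (dist[idx] + eps)                      # inverse-distance weights
    return (w @ faults[idx]) / w.sum()               # weighted fault estimate

print(f"expected faults: {predict(np.array([250., 10., 6.])):.1f}")
```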

16.
In recent years, grey relational analysis (GRA), a similarity-based method, has been proposed and used in many applications. However, most traditional GRA methods consider only nonweighted similarity when predicting software development effort. Nonweighted similarity may bias predictions, because each feature of a project can have a different degree of relevance to development effort. This paper therefore proposes six weighting methods, namely nonweighted, distance-based, correlative, linear, nonlinear, and maximal weights, to be integrated into GRA for software effort estimation. Numerical examples and sensitivity analyses based on four public datasets are used to show the performance of the proposed methods. The experimental results indicate that weighted GRA improves estimation accuracy and reliability over nonweighted GRA. The results also demonstrate that weighted GRA performs better than other estimation techniques and published results. In summary, weighted GRA can be a viable alternative method for predicting software development effort.
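A minimal sketch of the weighted-GRA computation: deviations of a new project from past projects are scored with the grey relational coefficient, feature weights are applied before aggregation, and effort is estimated as a grade-weighted mean of past efforts. The feature values, weights, and efforts are illustrative:

```python
import numpy as np

past = np.array([[0.2, 0.5, 0.1], [0.8, 0.4, 0.9], [0.5, 0.6, 0.4]])  # normalized features
efforts = np.array([300., 1200., 700.])
new = np.array([0.45, 0.55, 0.35])
weights = np.array([0.5, 0.2, 0.3])              # feature relevance weights
rho = 0.5                                        # distinguishing coefficient

delta = np.abs(past - new)                       # feature-wise deviations
dmin, dmax = delta.min(), delta.max()
coeff = (dmin + rho * dmax) / (delta + rho * dmax)   # grey relational coefficients
grades = coeff @ weights                         # weighted grey relational grades

# Estimate effort as the grade-weighted mean of past project efforts.
estimate = grades @ efforts / grades.sum()
print(f"grades: {np.round(grades, 3)}, estimated effort: {estimate:.0f}")
```

Swapping `weights` between the six schemes the paper proposes is exactly the lever that distinguishes weighted from nonweighted GRA.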

17.
Rapid technological progress has given hardware systems high performance and reliability; building on this, software engineers can focus on making software more convenient and ultra-reliable. To reach this goal, the testing stage of the software development life cycle tends to take ever more time and effort as software complexity grows. Besides enhancing and developing new testing methods, how to build software that can be tested efficiently has become an important topic. Research on software testability has therefore been conducted and various methods have been developed. A dynamic technique for estimating program testability, called propagation, infection and execution (PIE) analysis, was proposed in the past. Previous studies have shown that PIE analysis can complement software testing; however, it requires considerable computational overhead to estimate the testability of software components. In this article, we propose an extended PIE (EPIE) method that accelerates conventional PIE analysis by generating group testability as a substitute for statement testability. Our method proceeds in three steps: breaking a program into blocks, dividing the blocks into groups, and marking target statements. Experiments and evaluations with the Siemens suite, together with a cost-effectiveness analysis, clearly show that the number of analysed statements can be effectively decreased while the calculated testability values remain acceptable.

18.
High-assurance and complex mission-critical software systems are heavily dependent on the reliability of their underlying software applications. Early software fault prediction is a proven technique for achieving high software reliability. Prediction models based on software metrics can predict the number of faults in software modules, and timely predictions from such models can be used to direct cost-effective quality enhancement efforts to modules that are likely to have a high number of faults. We evaluate the predictive performance of six commonly used fault prediction techniques: CART-LS (least squares), CART-LAD (least absolute deviation), S-PLUS, multiple linear regression, artificial neural networks, and case-based reasoning. The case study consists of software metrics collected over four releases of a very large telecommunications system. Performance metrics, average absolute and average relative errors, are utilized to gauge the accuracy of the different prediction models. Models were built using both the original software metrics (RAW) and their principal components (PCA). Two-way ANOVA randomized-complete block design models with two blocking variables are designed, with average absolute and average relative errors as response variables; system release and model type (RAW or PCA) form the blocking variables, and the prediction technique is treated as a factor. Using multiple pairwise comparisons, the performance order of the prediction models is determined. We observe that for both average absolute and average relative errors, the CART-LAD model performs best while the S-PLUS model is ranked sixth.

19.
Toward trustworthy software systems
Hasselbring, W.; Reussner, R. Computer, 2006, 39(4): 91-92
Organizations such as Microsoft's Trusted Computing Group and Sun Microsystems' Liberty Alliance are currently leading the debate on "trustworthy computing". However, these and other initiatives primarily focus on security, and trustworthiness depends on many other attributes. To address this problem, the University of Oldenburg's TrustSoft Graduate School aims to provide a holistic view of trustworthiness in software, one that considers system construction, evaluation/analysis, and certification in an interdisciplinary setting. Component technology is the foundation of our research program: the choice of a component architecture greatly influences the resulting software systems' nonfunctional properties. We are developing new methods for the rigorous design of trustworthy software systems with predictable, provable, and ultimately legally certifiable system properties. We are well aware that it is impossible to build completely error-free complex software systems. We therefore complement fault-prevention and fault-removal techniques with fault-tolerance methods that introduce redundancy and diversity into software systems. Quantifiable attributes such as availability, reliability, and performance call for analytical prediction models, which require empirical studies for calibration and validation. To consider the legal aspects of software certification and liability, TrustSoft integrates the disciplines of computer science and computer law.

20.
Traditional approaches to software project effort prediction, such as the use of mathematical formulae derived from historical data or the use of expert judgment, are plagued with issues pertaining to the effectiveness and robustness of their results. These issues are more pronounced when such approaches are used during the early phases of the software development lifecycle, for example requirements development, whose effort predictors, along with their relationships to effort, are even more imprecise and uncertain than those of later development phases such as design. Recent works have demonstrated promising results using approaches based on fuzzy logic. Effort prediction systems that use fuzzy logic can deal with imprecision; they cannot, however, deal with uncertainty. This paper presents an effort prediction framework based on type-2 fuzzy logic that handles both the imprecision and the uncertainty inherent in the information available for effort prediction. Evaluation experiments have shown the framework to be promising.
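A minimal sketch of the interval type-2 idea: each fuzzy set carries an upper and a lower membership function (its "footprint of uncertainty"), so each rule fires with an interval weight and the defuzzified output is an interval rather than a point. The sets, rules, and the simple endpoint-enumeration type reduction are our own illustrative choices, not the paper's framework:

```python
from itertools import product

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def it2_membership(x, a, b, c, blur=5.0):
    """Interval membership: a wider upper MF and a narrower lower MF."""
    return tri(x, a + blur, b, c - blur), tri(x, a - blur, b, c + blur)

# Rules: (fuzzy-set params for "requirements size", effort centroid).
rules = [((0, 20, 40), 8.0), ((30, 60, 90), 30.0), ((80, 120, 160), 90.0)]

def predict(x):
    intervals = [(it2_membership(x, a, b, c), e) for (a, b, c), e in rules]
    # The weighted average is monotone in each weight, so its extremes occur
    # at endpoint weight choices: enumerate all lower/upper combinations.
    estimates = []
    for weights in product(*[iv for iv, _ in intervals]):
        den = sum(weights)
        if den > 0:
            num = sum(w * e for w, (_, e) in zip(weights, intervals))
            estimates.append(num / den)
    return (min(estimates), max(estimates)) if estimates else None

lo, hi = predict(90)
print(f"effort interval: [{lo:.1f}, {hi:.1f}] person-months")
```

The width of the returned interval reflects the uncertainty encoded in the footprint, which is precisely what a type-1 system cannot express.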
