Similar Literature (20 matching records)
1.
Lyu  M.R. Nikora  A. 《Software, IEEE》1992,9(4):43-52
A set of linear combination software reliability models that combine the results of single, or component, models is presented. It is shown that, as measured by statistical methods for determining a model's applicability to a set of failure data, a combination model tends to have more accurate short-term and long-term predictions than a component model. These models were evaluated using both historical data sets and data from recent Jet Propulsion Laboratory projects. The computer-aided software reliability estimation (CASRE) tool, which automates many reliability measurement tasks and makes it easier to apply reliability models and to form combination models, is described.

2.
There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the reliability that results. On the other hand, operational methods provide accurate assessment, but may not be as useful for achieving reliability. This paper examines the relationship between the two testing goals, using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how to attain program reliability: is it better to test by probing for defects as in debug testing, or to assess reliability directly as in operational testing? Testing methods are compared in a model where program failures are detected and the software changed to eliminate them. The "better" method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
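The debug-versus-operational comparison the paper formalizes can be sketched with a small simulation. This is an illustrative model of my own, not the paper's exact formulation: the program is split into hypothetical subdomains, each with an assumed per-input failure probability and an operational-profile weight.

```python
import random

def delivered_unreliability(defect_rates, profile, tests, operational, seed=0):
    """Simulate testing a program split into subdomains.

    defect_rates[i]: probability an input in subdomain i fails (0 if defect-free)
    profile[i]: operational probability of exercising subdomain i
    operational=True samples subdomains by the profile (operational testing);
    False samples them uniformly (a crude stand-in for debug testing).
    A subdomain's defect is removed the first time it causes a failure.
    Returns the failure probability seen in operation after testing.
    """
    rng = random.Random(seed)
    rates = list(defect_rates)
    n = len(rates)
    for _ in range(tests):
        if operational:
            i = rng.choices(range(n), weights=profile)[0]
        else:
            i = rng.randrange(n)
        if rng.random() < rates[i]:   # test input triggers the defect
            rates[i] = 0.0            # defect found and repaired
    return sum(p * r for p, r in zip(profile, rates))
```

With a defect hiding in a rarely exercised subdomain, uniform (debug-style) sampling tends to find it while operational sampling does not, matching the paper's point that special cases favor each method.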

3.
Two techniques that analyze prediction accuracy and enhance the predictive power of a software reliability model are presented. The u-plot technique detects systematic differences between predicted and observed failure behavior, allowing the recalibration of a software reliability model to obtain more accurate predictions. The prequential likelihood ratio (PLR) technique compares two models' abilities to predict a particular data source so that the one that has been most accurate over a sequence of predictions can be selected. The application of these techniques is illustrated using three sets of real failure data.
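The u-plot idea can be sketched in a few lines. This is a minimal illustration assuming one-step-ahead predictive CDFs are available; the function names are mine, not the paper's.

```python
def u_plot_distance(predicted_cdfs, observed_times):
    """Kolmogorov distance of the u-plot from the unit diagonal.

    predicted_cdfs[i] is the CDF the model predicted for inter-failure
    time i *before* it was observed; observed_times[i] is the outcome.
    If predictions are good, u_i = F_i(t_i) should look uniform on (0,1),
    so the empirical CDF of the u_i should hug the diagonal; a large
    distance signals systematic bias that recalibration can correct.
    """
    us = sorted(f(t) for f, t in zip(predicted_cdfs, observed_times))
    n = len(us)
    # Kolmogorov-Smirnov distance between the step function and y = x
    return max(max(abs((k + 1) / n - u), abs(k / n - u))
               for k, u in enumerate(us))
```

For exponential reliability-growth predictions one would pass `lambda t: 1 - math.exp(-lam_i * t)` for each step's fitted rate `lam_i`.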

4.
A method is proposed for evaluating software reliability using the software quality characteristics that influence reliability throughout the software life cycle. Starting from the mechanisms of software failure, the factors characterizing each aspect of software reliability are analyzed and extracted, and fuzzy analysis and expert-system theory are used to partition and describe these factors quantitatively. To address the multi-factor decision problem of software reliability evaluation, a random-forest-based evaluation model is proposed. Monte Carlo simulation is used to build probability models of the individual reliability factors and generate an evaluation data set, on which the proposed model is evaluated and analyzed. Experimental results show that the method evaluates software reliability accurately without depending on prior probabilities of particular reliability factors, significantly improving evaluation performance. The method is also shown to overcome the overfitting and poor expressiveness that commonly arise on small sample sets, exhibiting good stability and robustness.
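The Monte-Carlo-plus-random-forest pipeline the abstract describes might be sketched as follows. This is a toy, pure-Python stand-in: the factor distributions, weights, and stump-based "forest" are illustrative assumptions, not the paper's model.

```python
import random

def monte_carlo_samples(n, seed=0):
    """Draw a toy evaluation data set: three reliability-factor scores
    in [0, 1], labelled 1 (reliable) when an assumed weighted score
    clears a threshold. Weights and threshold are made up."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = [rng.random() for _ in range(3)]
        label = 1 if 0.5 * x[0] + 0.3 * x[1] + 0.2 * x[2] > 0.5 else 0
        data.append((x, label))
    return data

def forest_predict(train, x, trees=25, seed=1):
    """Toy 'random forest': each tree is a decision stump fit on a
    bootstrap sample using a randomly chosen factor; majority vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(trees):
        boot = [train[rng.randrange(len(train))] for _ in train]
        f = rng.randrange(3)                          # random factor to split on
        thr = sum(xi[f] for xi, _ in boot) / len(boot)
        hi = [y for xi, y in boot if xi[f] > thr]
        lo = [y for xi, y in boot if xi[f] <= thr]
        pred_hi = 1 if hi and sum(hi) * 2 >= len(hi) else 0
        pred_lo = 1 if lo and sum(lo) * 2 >= len(lo) else 0
        votes += pred_hi if x[f] > thr else pred_lo
    return 1 if votes * 2 >= trees else 0
```

A real study would use a full random-forest implementation; the point here is only the shape of the pipeline: simulated factor samples in, ensemble vote out.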

5.
This paper presents modeling frameworks for distributing development effort among software components to facilitate cost-effective progress toward a system reliability goal. Emphasis on components means that the frameworks can be used, for example, in cleanroom processes and to set certification criteria. The approach, based on reliability allocation, uses the operational profile to quantify the usage environment and a utilization matrix to link usage with system structure. Two approaches for reliability and cost planning are introduced: Reliability-Constrained Cost-Minimization (RCCM) and Budget-Constrained Reliability-Maximization (BCRM). Efficient solutions are presented corresponding to three general functions for measuring cost-to-attain failure intensity. One of the functions is shown to be a generalization of the basic COCOMO form. Planning within budget, adaptation for other cost functions and validation issues are also discussed. Analysis capabilities are illustrated using a software system consisting of 26 developed modules and one procured module. The example also illustrates how to specify a reliability certification level, and minimum purchase price, for the procured module.
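Under one simple cost-to-attain-failure-intensity form, an inverse cost c_i = a_i/λ_i (an assumed stand-in, not necessarily one of the paper's three functions), the RCCM problem has a closed-form solution via a Lagrange multiplier:

```python
import math

def rccm_allocation(a, u, system_target):
    """Closed-form RCCM sketch under an assumed inverse cost model.

    Component i costs c_i = a[i] / lam_i to reach failure intensity
    lam_i, and u[i] is its utilization (expected executions per system
    run), so the system failure intensity is sum(u[i] * lam_i).
    Minimizing total cost subject to that sum equalling system_target
    gives, via a Lagrange multiplier, lam_i proportional to
    sqrt(a[i] / u[i]).
    """
    scale = sum(math.sqrt(ai * ui) for ai, ui in zip(a, u))
    lam = [math.sqrt(ai / ui) * system_target / scale
           for ai, ui in zip(a, u)]
    cost = sum(ai / li for ai, li in zip(a, lam))
    return lam, cost
```

The intuition matches the framework: components that are expensive to harden (large a_i) or lightly used (small u_i) are allowed higher failure intensity.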

6.
To address the computational complexity and poor applicability of existing methods for selecting software reliability models, a selection method based on multi-criteria decision making is proposed. Several criteria to guide model selection are defined, including life-cycle stage, model input requirements, model output requirements, conformance of model assumptions, and failure-data trend. These criteria are divided into deterministic and non-deterministic criteria, an algorithm for selecting software reliability models based on them is described, and an example demonstrates the simplicity and feasibility of the method. Combining this method with existing model selection methods is identified as a direction for future research.

7.
In this paper, we propose a new method to estimate the relationship between software reliability and software development cost, taking into account the complexity of developing the software system and the size of the software to be developed during the implementation phase of the software development life cycle. On the basis of the estimated relationship, a set of empirical data has been used to validate the correctness of the proposed model by comparing its results with those of other existing models. The outcome of this work shows that the proposed method is a relatively straightforward way of formulating the relationship between reliability and cost during the implementation phase.

8.
Software reuse is an important technique for improving software development efficiency and software quality. Applying the reuse ideas of domain engineering to the reuse of test cases, a technical model for test-case reuse is proposed. An example shows that the model improves testing reliability and efficiency and is feasible.

9.
10.
《Software, IEEE》2003,20(2):56-57
Patterns provide a mechanism for rendering design advice in a reference format. Software design is a massive topic, and when faced with a design problem, one must be able to focus on something as close to the problem as one can get. Patterns can help by identifying common solutions to recurring problems. The solution is really what the pattern is, yet the problem is a vital part of the context. Patterns are half-baked, meaning one always has to finish them oneself and adapt them to one's own environment.

11.
Walter Brown's recollection of Atlantic Software explains how the firm was one of the earliest formed specifically to market third-party software. In many ways, the rise and fall of Software Resources Corporation epitomized the software marketing environment of the late 1960s, as Robert V. Head recounts. Lawrence Welke describes, with his account of the ICP Directories, how they began in 1967 as the first public lists of program products for sale.

12.
After a brief overview, separate presentations are given on tools that support the testing process in a variety of ways. Some tools simulate the final execution environment as a way of expediting test execution, others automate the development of test plans, and still others collect performance data during execution. The tools address three aspects of the testing process: they provide a controlled environment in which testing can take place as well as test-data control, and some tools actually perform the tests, capturing and organizing the resulting output. The tools covered are: RUTE (Real-Time Unix Test Environment); Xray/DX; TDC (Testing and Debugging Concurrent) Ada; T; Mothra; Specification Analyzer; and Test inc.

13.
《Software, IEEE》2004,21(6):12-13
The question is whether "construction" should be a separate activity in the development process. Having something called "construction" makes it easy for some in the industry to Balkanize the concept: construction has connotations that make it seem somewhat manual compared to the more cerebral "analysis" and "design" phases. A construction metaphor shows why that is dangerous: software construction is not like building houses in the suburbs. Software designers need coding experience in the types of systems they are designing, and they need to work with their teams to see how well the designs actually work in practice.

14.
《Software, IEEE》2004,21(5):96-97
We looked at the project's requirements and thought they were straightforward. The project uses a home-brew application development system and is an example of agile development. This framework uses a mixture of technologies: some modeling, some code generation, some reusable components, and so on. This real-time software lasts and delivers value for as long as it is needed, and it is simple and straightforward to understand, maintain, enhance, and extend.

15.
Olsen  N.C. 《Software, IEEE》1993,10(5):29-37
The author states that, in his experience, software engineering most resembles a dynamically overloaded queue or rush-hour traffic jam. A change-measurement model is proposed that is unique because it puts the development process in the context of the larger business enterprise and partitions work into dynamic, tangible activities, separating work demand from work services. Because the change-management model is based on the concept of a dynamically overloaded queue, existing mathematical approximations can be used to estimate gross process behavior. Specifically, the author proposes fluid-approximation models to simulate the software process mathematically as a dynamically overloaded queue, applying estimation techniques normally used to analyze rush-hour traffic delays.
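The fluid approximation the author invokes can be sketched deterministically: backlog grows at the rate by which work demand exceeds service capacity and drains afterwards, just as a rush-hour jam does. The rates below are made-up illustration, not values from the article.

```python
def fluid_backlog(arrival_rate, service_rate, horizon, dt=0.1):
    """Deterministic fluid approximation of an overloaded queue.

    arrival_rate(t) is the work-demand rate, service_rate the constant
    capacity; backlog changes at (arrival - service) while positive.
    Returns the backlog trajectory sampled every dt time units.
    """
    q, path = 0.0, []
    for k in range(int(horizon / dt)):
        t = k * dt
        q = max(0.0, q + (arrival_rate(t) - service_rate) * dt)
        path.append(q)
    return path
```

With demand 2.0 for the first 10 time units (the "rush hour") against capacity 1.0, the backlog peaks at 10 and then drains at 0.5 per unit time, emptying around t = 30.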

16.
《Software, IEEE》1992,9(4):83-85
Increased competition is forcing more software organizations to improve their processes. Along with pressure for improvement comes a demand for a cost-benefit analysis of the investment. A process-improvement program which Raytheon put in place is described. The results of a quantitative analysis involving six major projects over three years are presented. It is shown that rework costs have substantially decreased since the process-improvement program was started.

17.
Thomas  D. Hunt  A. 《Software, IEEE》2003,20(4):82-83
Software development seems to be a discipline of artifacts; the developers spend their time producing products and attempting to find ways to measure just how well and how fast they make that product. We get pleasure from the act of creation and from activities that surround the creation process. But as time goes on, we start to lose sight of this. Companies are not interested in the process as much as the product. Managers cannot measure the thought that goes into a specification; they only see the document.

18.
Dromey  R.G. 《Software, IEEE》1996,13(1):33-43
Concrete and useful suggestions about what constitutes quality software have always been elusive. I suggest a framework for the construction and use of practical, testable quality models for requirements, design and implementation. Such information may be used directly to build, compare, and assess better quality software products.

19.
《Software, IEEE》2004,21(6):86-88
Most software developers become preoccupied with the question of what makes good design at some point in their careers, usually after witnessing the effects of bad design first hand. At that point, we start to reflect. We go through a stage where we feel we know what good design is but can't really define it. Then we learn various design principles and rules of thumb that make it easier to judge what constitutes good design. But when these principles and rules conflict, we have to make trade-offs and decide what's most important in each situation. The blanket rule of thumb: Keep design as clear as possible. Regardless of the trade-offs, the most important thing was clarity. If a system uses a straightforward coding style - the classes and methods are well named and small enough to be clearly understood, and the system isn't littered with snarls of obscure code - then you can do just about anything. You can change the system with impunity, write tests for it, make adjustments, and add features, all with relative ease. So "clear design is good design" seemed like a reasonable rule of thumb because so much of what makes code impossible to maintain comes down to a lack of clarity. If you can understand your system, you can change it effectively.

20.
《Software, IEEE》2001,18(1):97-99
Software design is not easy: not easy to do, teach, or evaluate. Much of software education these days is about products and APIs, yet these are transient, whereas good design is eternal, if only we could figure out what good design is. The author has been struck by one of the underlying principles that leads to better designs: remove duplication. The principle is simple: say anything in your program only once. Stated blandly like that, it hardly bears saying, yet identifying and removing repetition can lead to many interesting consequences. The author looks at a simple case of subroutine calls and considers the benefits of design patterns.
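The say-it-once principle, applied to the subroutine-call case the column mentions, can be illustrated like this (the example code is mine, not the column's):

```python
# Duplicated: the line-formatting logic is said twice.
def order_line_dup(name, qty, price):
    return f"{name:<12}{qty:>4} @ {price:>7.2f} = {qty * price:>9.2f}"

def refund_line_dup(name, qty, price):
    return f"{name:<12}{qty:>4} @ {price:>7.2f} = {-qty * price:>9.2f}"

# Once only: the shared statement is extracted into one subroutine,
# and the remaining difference (the sign) becomes a parameter.
def money_line(name, qty, price, sign=1):
    return f"{name:<12}{qty:>4} @ {price:>7.2f} = {sign * qty * price:>9.2f}"

def order_line(name, qty, price):
    return money_line(name, qty, price)

def refund_line(name, qty, price):
    return money_line(name, qty, price, sign=-1)
```

The "interesting consequence" is visible even here: once the duplication is gone, the sign parameter names a concept (direction of money flow) that the copies had left implicit.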
