Found 20 similar documents (search time: 0 ms)
1.
2.
Themis C. Genadis 《Quality and Reliability Engineering International》1988,4(4):311-316
Designing reliable software is an ever-growing challenge, because the high cost of software is largely due to reliability problems. The costs of finding and fixing errors, better known as maintenance and testing costs, account for as much as 80 per cent of the total cost of the final software product. Software developers therefore have a strong interest in preventing errors from making their way into the software, and in finding the errors that are present during the early stages of development. Precise software design, coding and testing play an important role. This paper presents a management plan for implementing a software reliability programme at a small software company where no such programme yet exists.
3.
Robust recurrent neural network modeling for software fault detection and correction prediction (cited by: 1; self-citations: 0; others: 1)
Software fault detection and correction processes are related although distinct, and they should be studied together. A practical approach is to apply software reliability growth models to the fault detection process and to treat fault correction as a delayed process. The artificial neural network model, as a data-driven approach, instead tries to model the two processes together without such assumptions. In particular, feedforward backpropagation networks have shown advantages over analytical models in predicting fault numbers. In this paper, the following approach is explored. First, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed using a genetic algorithm guided by prediction performance. To provide robust predictions, an extra factor characterizing the dispersion across repeated predictions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are carried out on a real data set.
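The abstract does not give the network details, but the idea can be illustrated with a deliberately simplified stand-in: a fixed random recurrent layer (echo-state style, trained only in its linear readout rather than by backpropagation as in the paper) predicting next-period cumulative fault counts, with a crude search over hidden-layer sizes standing in for the genetic-algorithm configuration step. The data and all parameters below are hypothetical.

```python
import numpy as np

# Hypothetical cumulative fault-detection counts per week (not the paper's data).
detected = np.array([5, 12, 19, 25, 31, 35, 38, 41, 43, 44], dtype=float)

def run_reservoir(u, n_hidden, seed):
    """Drive a fixed random recurrent layer with the input series."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(scale=0.5, size=(n_hidden, 1))
    W = rng.normal(scale=0.9 / np.sqrt(n_hidden), size=(n_hidden, n_hidden))
    h = np.zeros(n_hidden)
    states = []
    for x in u:
        h = np.tanh(W_in[:, 0] * x + W @ h)   # recurrent state update
        states.append(h.copy())
    return np.array(states)

def fit_and_score(series, n_hidden, seed):
    """Train a linear readout to predict the next count; return RMSE."""
    u = series[:-1] / series.max()          # scaled inputs
    y = series[1:]                          # one-step-ahead targets
    H = run_reservoir(u, n_hidden, seed)
    Hb = np.hstack([H, np.ones((len(H), 1))])   # add bias column
    w, *_ = np.linalg.lstsq(Hb, y, rcond=None)
    return float(np.sqrt(np.mean((Hb @ w - y) ** 2)))

# Crude stand-in for the paper's genetic-algorithm configuration search:
# evaluate several hidden-layer sizes and keep the best-scoring one.
scores = {n: fit_and_score(detected, n, seed=0) for n in (4, 8, 16, 32)}
best = min(scores, key=scores.get)
print(f"best hidden size: {best}, RMSE: {scores[best]:.3f}")
```

A faithful reproduction would train the recurrent weights as well and would add the dispersion-of-repetitions penalty described above to the selection criterion.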
4.
John B. Bowen 《Quality and Reliability Engineering International》1987,3(1):41-51
This paper reports the results of applying a multi-model interactive computer program called SMERFS (statistical modelling and estimation of reliability functions for software) to a large air defence system project. SMERFS includes four fault-count models and four time-between-failure models, which were applied at the compilation unit and computer software configuration item levels. The four fault-count models yielded reasonable results, supported by acceptable fits, for nine of the ten compilation units. The four time-between-failure models yielded reasonable results with acceptable fits for all compilation units. A validation of the SMERFS results was attempted by truncating the later developmental data from the input data and using those data in lieu of operational data. With the exception of one fault-count model, no model accurately predicted the simulated operational data.
5.
Wolfgang Weber Heidemarie Tondok Michael Bachmayer 《Reliability Engineering & System Safety》2005,89(1):57-70
Fault tree analysis is a well-established method in system safety and reliability assessment. We transferred the principles of this technique to an assembler code analysis, regarding any incorrect output of the software as the undesired top-level event. Starting from the instructions providing the outputs and tracking back to all instructions contributing to those outputs, a hierarchical system of references is generated that may be represented graphically as a fault tree. To cope with the large number of relations in the code, a tool suite has been developed which automatically creates these references and checks for unfulfilled preconditions of instructions. The tool was applied to the operational software of an inertial measurement unit, which provides safety-critical signals for artificial stabilization of an aircraft. The method and its implementation as a software tool are presented, and the benefits, surprising results and limitations we have experienced are discussed.
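The core back-tracking step described above can be illustrated with a toy sketch (the instruction format and program here are hypothetical; the actual tool works on assembler code and additionally checks instruction preconditions):

```python
from collections import namedtuple

# Toy straight-line program: each instruction writes `dest` from `srcs`.
Insn = namedtuple("Insn", "idx dest srcs")

program = [
    Insn(0, "r1", ["in_a"]),
    Insn(1, "r2", ["in_b"]),
    Insn(2, "r3", ["r1", "r2"]),
    Insn(3, "out", ["r3"]),
]

def trace_back(var, upto, contributors):
    """Recursively gather instructions whose results feed `var`."""
    for insn in reversed(program[:upto]):
        if insn.dest == var and insn.idx not in contributors:
            contributors.add(insn.idx)
            for src in insn.srcs:
                trace_back(src, insn.idx, contributors)
            break   # nearest prior definition dominates in straight-line code
    return contributors

# Everything feeding the top-level output -- the hierarchy of references
# that the tool renders as a fault tree.
print(sorted(trace_back("out", len(program), set())))
```

Applied repeatedly from each output instruction, such reference chains form the hierarchy that is drawn as a fault tree.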
6.
Michael A. Friedman 《Quality and Reliability Engineering International》1992,8(5):413-418
The concept of a computer program's ‘hazard rate profile’ is introduced. A software fault's hazard rate is the amount the fault contributes to the overall program failure rate. The hazard rate profile describes the relationship between the program's failure rate and fault content by summarizing the relative proportion of software fault hazard rates that fall into different hazard rate classes. The usefulness of the hazard rate profile concept is shown by applying it to the tasks of software reliability prediction, growth modelling and allocation.
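In symbols (a plausible reading of the abstract, not necessarily the paper's exact notation): if fault $i$ contributes hazard rate $z_i$, the program failure rate is their sum, and the profile groups the faults into classes:

$$ \lambda_P \;=\; \sum_{i=1}^{N} z_i \;=\; \sum_{j} n_j \, \bar{z}_j , $$

where $n_j$ is the number of faults falling in hazard rate class $j$ and $\bar{z}_j$ is the representative hazard rate of that class; the proportions $n_j/N$ across classes constitute the hazard rate profile.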
7.
We discuss optimal software release problems that consider both a present value and a warranty period (in the operational phase) during which the developer must pay the cost of fixing any faults detected. It is very important for software development management to determine an optimal software testing time by integrating the total expected testing cost with the reliability requirement. We apply a nonhomogeneous Poisson process model to the formulation of a software cost model and analyze three typical cases of the cost model. Moreover, we derive several optimal release policies. Finally, numerical examples are shown to illustrate the results of the optimal policies.
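A representative formulation, under the common Goel–Okumoto assumption $m(t) = a(1 - e^{-bt})$ for the NHPP mean value function (the paper's exact cost structure and discounting may differ): with per-fault fix cost $c_1$ during testing, a higher cost $c_2 > c_1$ during a warranty period $T_w$, testing cost rate $c_3$, and continuous discount rate $\alpha$ for the present value,

$$ C(T) \;=\; c_1\, m(T) \;+\; c_2\, e^{-\alpha T}\bigl[m(T+T_w) - m(T)\bigr] \;+\; c_3\, T , $$

and the optimal release time $T^{*}$ minimizes $C(T)$ subject to a reliability requirement such as $R(x \mid T) \ge R_0$.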
8.
Software reliability is an important aspect of any complex equipment today. Software reliability is usually estimated using reliability models such as nonhomogeneous Poisson process (NHPP) models. Software systems improve during the testing phase, whereas they normally do not change during the operational phase. Depending on whether reliability is to be predicted for the testing phase or the operational phase, different measures should be used. In this paper, two different reliability concepts, namely the operational reliability and the testing reliability, are clarified and studied in detail. These concepts have been mixed up, or even misused, in some of the existing literature. Using a different reliability concept leads to different reliability values and, in turn, to different reliability-based decisions. The difference between the estimated reliabilities is studied, and the effect on the optimal release time is investigated.
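The distinction can be made concrete with a standard NHPP formulation (a textbook form, not necessarily the paper's notation). If $m(t)$ is the expected cumulative number of faults detected by time $t$, the testing reliability over the next interval $x$ assumes debugging continues, while the operational reliability freezes the failure intensity $\lambda(t) = m'(t)$ at release:

$$ R_{\text{test}}(x \mid t) = e^{-[m(t+x) - m(t)]}, \qquad R_{\text{op}}(x \mid t) = e^{-\lambda(t)\,x} . $$

Because $m$ is concave for growth models, $m(t+x) - m(t) \le \lambda(t)\,x$, so the testing reliability is the larger of the two; quoting it for the operational phase overstates the reliability actually delivered.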
9.
Alan Wingrove 《Quality and Reliability Engineering International》1985,1(2):93-97
Projects with a high software content are frequently completed late, with cost overruns and inadequate performance. Much of this is due to inadequate management, mainly caused by a lack of availability, or knowledge, of the criteria, methods and tools on which effective management should be based. Examples are given of the most frequent problems met by project managers. A plea is made for a vigorous effort to develop management strategies that will allow the tools and techniques now being researched and developed to be used effectively for producing the high-quality software needed for systems.
10.
This paper presents the similarities and differences between hardware, software and system reliability. The relative contributions of software and hardware to system failures are shown, and failure and recovery propensities are also discussed. Reliability, availability and maintainability (RAM) concepts have been developed more broadly for software reliability than for hardware reliability. Extending these software concepts to hardware and system reliability helps in examining the reliability of complex systems. The paper concludes with assurance techniques for defending against faults. Most of the techniques discussed originate in software reliability but apply to all aspects of a system. The effects of redundancy on overall system availability are also shown.
11.
Using predeveloped software, a digital safety system is designed that meets the quality standards of a safety system. To demonstrate the quality, the design process and operating history of the product are reviewed along with configuration management practices. The application software of the safety system is developed in accordance with the planned life cycle. Testing, a major phase that takes a significant share of the overall life cycle, can be optimized if the testability of the software can be evaluated. The proposed testability measure is based on the entropy of the importance of basic statements and on the failure probability from a software fault tree. To calculate testability, a fault tree is used in the analysis of the source code. With a quantitative measure of testability, testing can be optimized. The proposed testability measure can also be used to check whether test cases based on uniform partitions, such as branch coverage criteria, result in homogeneous partitions, which are known to be more effective than random testing. In this paper, the testability measure is calculated for the modules of a nuclear power plant's safety software. Module testing with branch coverage criteria required fewer test cases when the module had higher testability. The result shows that the testability measure can be used to evaluate whether partitions have homogeneous characteristics.
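The entropy ingredient of the measure can be sketched as follows (a plausible reading; the abstract does not spell out how the entropy and the fault-tree failure probability are combined). If basic statement $i$ has normalized importance $p_i$ with $\sum_i p_i = 1$, the Shannon entropy

$$ H = -\sum_i p_i \log p_i $$

is maximal when importance is spread uniformly across statements and small when a few statements dominate, so $H$ quantifies how evenly test effort must be distributed over a module; the paper weights this against the failure probability derived from the software fault tree.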
12.
Approximate estimation of system reliability via fault trees (cited by: 1; self-citations: 0; others: 1)
In this article, we show how fault tree analysis, carried out by means of binary decision diagrams (BDD), can approximate the reliability of systems made of independent repairable components with good accuracy and good efficiency. We consider four algorithms: the Murchland lower bound, the Barlow–Proschan lower bound, the Vesely full approximation and the Vesely asymptotic approximation. For each of these algorithms, we consider an implementation based on the classical minimal cut sets/rare events approach and another relying on the BDD technology. We present numerical results obtained with both approaches on various examples.
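For reference, the classical minimal cut set/rare-event side of the comparison rests on standard formulas (generic reliability theory, not specific to this article): for an independent repairable component with failure rate $\lambda_i$ and repair rate $\mu_i$, the unavailability is

$$ q_i(t) = \frac{\lambda_i}{\lambda_i + \mu_i}\left(1 - e^{-(\lambda_i + \mu_i)t}\right), \qquad Q_S(t) \;\approx\; \sum_{k} \prod_{i \in C_k} q_i(t) , $$

where the $C_k$ are the minimal cut sets of the fault tree. The BDD-based implementations instead evaluate the exact probability of the structure function from the same $q_i(t)$, avoiding the truncation error of the rare-event sum.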
13.
Sungdeok Cha Hanseong Son Junbeom Yoo Eunkyung Jee Poong Hyun Seong 《Reliability Engineering & System Safety》2003,82(1):11-20
Fault tree analysis, the most widely used safety analysis technique in industry, is often applied manually. Although techniques such as cut set analysis or probabilistic analysis can be applied to a fault tree to derive further insights, they are inadequate for locating flaws when failure modes in fault tree nodes are incorrectly identified or when causal relationships among failure modes are inaccurately specified. In this paper, we demonstrate that model checking is a powerful technique that can formally validate the accuracy of fault trees. We used the real-time model checker UPPAAL because the system used as the case study, the nuclear power emergency shutdown software named Wolsong SDS2, has real-time requirements. By translating functional requirements written in SCR-style tabular notation into timed automata, two types of properties were verified: (1) whether the failure mode described in a fault tree node is consistent with the system's behavioral model; and (2) whether a fault tree node has been accurately decomposed. A group of domain engineers with detailed technical knowledge of Wolsong SDS2 and of safety analysis techniques developed the fault tree used in the case study; nevertheless, model checking detected subtle ambiguities present in it.
14.
Software reliability assessment models in use today treat software as a monolithic block. An aversion towards ‘atomic’ models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecture-based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluating failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and it is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at the functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of the architecture, data are needed. The term ‘element of the architecture’ is used here in its broadest sense, meaning anything from a single failure mode up to a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. The mechanisms for incorporating these sources of relevant data into the FASRE model are then identified.
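The Bayesian quantification step can be sketched with a standard conjugate update (a generic illustration; FASRE's actual priors are constructed per data source). If a failure mode's probability $p$ carries a $\mathrm{Beta}(\alpha, \beta)$ prior and the corresponding element is exercised $n$ times with $f$ failures observed, then

$$ p \mid \text{data} \;\sim\; \mathrm{Beta}(\alpha + f,\; \beta + n - f), \qquad E[p \mid \text{data}] = \frac{\alpha + f}{\alpha + \beta + n} , $$

so developers' test results, history of a reused component and field data can all enter as successive updates before the posterior failure-mode probabilities are propagated up the fault trees.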
15.
Joel A. Nachlas 《Quality and Reliability Engineering International》1985,1(3):191-194
A model is developed to represent computer memory module reliability as a function of memory array reliability under a fault-tolerant design. The fault tolerance feature of the array results from a revision in the use of the array, so that with respect to some failure modes the array becomes a K-out-of-N rather than a series system. The model is used to determine array reliability under fault tolerance. The ratio of module reliability with fault tolerance to that without it is used as a measure of the benefit of revising array use. A key feature of the analysis is the fact that not all faults can be tolerated. The elemental memory devices examined conform to a decreasing Weibull hazard model. Consequently, evaluation of the general model for the K-out-of-N system must be done numerically. However, for the special case in which K = N-1, a closed-form expression for the performance measure is obtained. This special case occurs in the application of interest, and it is shown that the performance measure always exceeds one and depends directly on the proportion of faults that can be tolerated. Thus the value of fault tolerance is shown to depend upon the extent to which the array will tolerate faults. This provides a basis for deciding whether or not fault tolerance should be implemented.
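In standard notation (the reliability mathematics here is generic; the paper's performance ratio builds on it): with i.i.d. device reliability $R(t) = e^{-(t/\eta)^{\beta}}$, where $\beta < 1$ gives the decreasing Weibull hazard, the array reliability under fault tolerance is

$$ R_{K/N}(t) = \sum_{i=K}^{N} \binom{N}{i} R(t)^{i}\,\bigl[1 - R(t)\bigr]^{N-i} , $$

which for the special case $K = N-1$ collapses to the closed form $R_{N-1/N}(t) = N\,R(t)^{N-1} - (N-1)\,R(t)^{N}$, to be compared against $R(t)^{N}$ for the original series configuration.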
16.
Allen M. Johnson Michael A. Schoenfelder David J. Lebold 《Quality and Reliability Engineering International》1993,9(1):55-62
The Rainbow net simulation technique is applied to modelling the impact of system load and fault handling on the availability of a fault-tolerant multiprocessor architecture. Rainbow nets are described, along with the motivation for creating this modelling technique. A Rainbow net fault-handling model is created for the fault-tolerant multiprocessor architecture, and its topology is shown to remain constant in size, independent of the number of processor, memory and I/O elements configured in the system. Simulation is performed with a varying load, expressed as the number of active jobs the system must support. Results are given showing how the fault-tolerance capability varies with load. Two new metrics for evaluating fault tolerance are introduced, namely full fault-tolerability and partial fault-tolerability; they are based on simple observations in the model.
17.
In this paper, a software cost model is developed that includes a warranty period and its cost, a cost to remove each error detected in the software, and a risk cost due to software failure. A software reliability model based on the nonhomogeneous Poisson process is used. Optimal release policies that minimize the expected total software cost are presented. A software tool is also developed, using Excel and Visual Basic, that facilitates the task of determining the optimal software release time. Numerical examples are provided to illustrate the results.
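The kind of calculation such a tool automates can be sketched in a few lines (illustrative only: the Goel–Okumoto mean value function, all parameter values, and the particular form of the risk-cost term below are hypothetical, not taken from the paper).

```python
import numpy as np

# Hypothetical parameters -- illustrative, not from the paper.
a, b = 120.0, 0.05        # Goel-Okumoto: total expected faults, detection rate
c_fix = 1.0               # cost to remove an error found during testing
c_warr = 6.0              # cost to remove an error found under warranty
c_test = 0.4              # testing cost per unit time
c_risk = 50.0             # risk-cost weight for failures after the warranty
T_w = 30.0                # warranty length

m = lambda t: a * (1.0 - np.exp(-b * t))   # NHPP mean value function

def expected_cost(T):
    """Testing + warranty-fix + time + residual-risk costs at release time T."""
    return (c_fix * m(T)
            + c_warr * (m(T + T_w) - m(T))
            + c_test * T
            + c_risk * (1.0 - np.exp(-(a - m(T + T_w)))))  # P(failure after warranty)

T_grid = np.linspace(0.0, 200.0, 20001)
T_opt = T_grid[np.argmin(expected_cost(T_grid))]
print(f"optimal release time ~ {T_opt:.1f}, expected cost ~ {expected_cost(T_opt):.1f}")
```

A grid search suffices here because the cost curve is one-dimensional and smooth; the spreadsheet tool described above plays the same role for the paper's exact cost model.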
18.
J. B. Camargo Jr. E. Canzian J. R. Almeida Jr. S. M. Paz B. A. Basseto 《Reliability Engineering & System Safety》2001,74(1):106
In this paper, a quantitative methodology for safety-critical microprocessor applications is proposed, and some important aspects that must be considered in safety analysis work are discussed. We discuss how to evaluate the dangerous detectable and undetectable system failure rates of a single microprocessor board, and the mean time to unsafe failure (MTTUF) of a critical system. The proposed methodology is then applied to a practical system that employs a triple modular redundancy (TMR) architecture. The results obtained with this methodology are highly relevant, especially for those aspects related to the impact of the computational blocks on the final safety integrity level (SIL) of a critical system. We also consider how the software can influence the evaluation of the fault cover factor, another important aspect of safety analysis work.
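For context, the standard TMR majority-voting relation (textbook material, not this paper's full MTTUF derivation): if each of the three channels has reliability $R(t)$ and the voter is perfect, the system survives as long as at least two channels do,

$$ R_{\mathrm{TMR}}(t) = 3R(t)^{2} - 2R(t)^{3} , $$

and the fault cover factor enters such analyses as the fraction of channel failures that the comparison and diagnostic logic actually detects, which is where the software influence discussed above comes in.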
19.
For the last three decades, reliability growth has been studied to predict software reliability in the testing/debugging phase. Most of the models developed were based on the nonhomogeneous Poisson process (NHPP), and either S-shaped or exponential-shaped behavior is usually assumed. Unfortunately, such models may be suitable only for particular software failure data, which narrows their scope of application. Therefore, from the perspective of the learning effects that can influence the process of software reliability growth, we consider that efficiency in testing/debugging depends not only on the ability of the testing staff but also on the learning effect that comes from inspecting the testing/debugging code. The proposed approach can reasonably describe S-shaped and exponential-shaped behaviors simultaneously, and the experimental results show a good fit. A comparative analysis evaluating the effectiveness of the proposed model against other software failure models was also performed. Finally, an optimal software release policy is suggested.
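One well-known mean value function that captures both behaviors in a single form, and thus illustrates the kind of unification the authors aim at (shown as a representative example, not necessarily their model), is the inflection S-shaped NHPP model:

$$ m(t) = \frac{a\left(1 - e^{-bt}\right)}{1 + \psi\, e^{-bt}} , $$

where $\psi \ge 0$ reflects how strongly early learning suppresses fault detection: $\psi = 0$ recovers the exponential (Goel–Okumoto) curve, while $\psi > 0$ yields S-shaped growth.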
20.
With the growing intolerance of failures within systems, the issue of fault diagnosis has become ever more prevalent. Information concerning these possible failures can help to minimise disruption to the functionality of the system by allowing quick rectification. Traditional approaches to fault diagnosis within engineering systems have focused on sequential testing procedures and real-time mechanisms, both predominantly limited to single fault causes. The latest approaches also consider the issue of multiple faults, reflecting the characteristics of modern systems designed for high reliability. In addition, a diagnostic capability is required in real time and for changeable system functionality. This paper focuses on two approaches developed to cater for the demands of diagnosis within current engineering systems, namely the application of the fault tree analysis technique and the method of digraphs. Both use a comparative approach to consider differences between actual and expected system behaviour. The procedural guidelines are discussed for each method, with an experimental aircraft fuel system used to test and demonstrate the features of the techniques. The effectiveness of the approaches is compared and their future potential highlighted.