Similar Articles
20 similar articles found (search time: 31 ms)
1.
This paper considers efficient development testing for one-shot systems (e.g., missiles) that are destroyed in testing or in first normal use, where the basic system design may undergo reliability growth through redesigns following failed development tests. The authors consider a situation where the cost of redesign is negligible, each development test produces a binary (success/failure) outcome, and there is a fixed procurement budget covering both system development and purchase. For a two-state model of system reliability, dynamic programming is used to identify test plans that are optimal, viz., that maximize the mean number of effective systems (of the final design) that can be purchased with the remaining budget when development testing is stopped. Several reasonable and easily implemented suboptimal rules are also considered, and their performance is compared to that of the optimal rule for a variety of combinations of model parameters. Optimal test plans are easily computable, even for problems where the initial budget is large, and for some combinations of model parameters they offer important improvements over more naive test heuristics. The qualitative character of the present results is anticipated to extend to more complicated and realistic models for this problem.
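
The dynamic-programming idea in this abstract can be sketched in a few lines. This is a hedged toy model, not the paper's exact formulation: design quality is two-state ("good"/"bad"), all parameter values (R_GOOD, R_BAD, P_FRESH, costs) are invented for illustration, and a failed test is assumed to trigger a free redesign that resets the belief to the fresh-design prior.

```python
from functools import lru_cache

# Hedged sketch (not the paper's exact model): a "good" design works on
# use with prob R_GOOD, a "bad" one with R_BAD.  Each development test
# costs C_TEST and gives a binary outcome; after a failed test, the free
# redesign is good with prob P_FRESH.  Stopping buys
# floor(budget / C_UNIT) systems of the final design.
R_GOOD, R_BAD = 0.9, 0.3
P_FRESH = 0.5          # prob. a (re)design is good -- assumed value
C_TEST, C_UNIT = 1, 2  # test cost and unit purchase price -- assumed

def bayes_up(p):
    """Posterior prob. the design is good after one successful test."""
    return p * R_GOOD / (p * R_GOOD + (1 - p) * R_BAD)

@lru_cache(maxsize=None)
def value(budget, p):
    """Max expected number of effective systems purchasable."""
    stop = (budget // C_UNIT) * (p * R_GOOD + (1 - p) * R_BAD)
    if budget < C_TEST:
        return stop
    p_succ = p * R_GOOD + (1 - p) * R_BAD
    # Success: update belief upward.  Failure: redesign, belief resets.
    test = (p_succ * value(budget - C_TEST, round(bayes_up(p), 6))
            + (1 - p_succ) * value(budget - C_TEST, P_FRESH))
    return max(stop, test)

print(round(value(20, P_FRESH), 3))
```

The recursion compares "stop and buy now" against "spend one more test"; the optimal plan is the stopping rule implied by whichever branch wins in each state.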

2.
Three popular reliability growth models, viz. Duane's model, Weiss's model, and Chernoff-Wood's model, are considered. A comprehensive quantitative analysis is presented to predict reliability growth for these models subject to a cost constraint. We also outline a procedure to compute an optimum corrective testing schedule for a series (1-out-of-m:F) system which maximizes reliability growth.
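
Of the three models named, Duane's is the simplest to state: the cumulative failure rate decays as a power law of test time, so cumulative MTBF grows as t^alpha. The sketch below illustrates that relationship with invented parameter values (K and ALPHA are assumptions, not values from the paper):

```python
import math

# Duane's postulate: cumulative number of failures n(t) = K * t^(1-alpha),
# so cumulative MTBF  theta_c(t) = t / n(t) = t^alpha / K, and the
# instantaneous MTBF is 1 / (dn/dt) = theta_c(t) / (1 - alpha).
K, ALPHA = 2.0, 0.4    # assumed growth parameters

def cum_mtbf(t):
    return t**ALPHA / K

def instantaneous_mtbf(t):
    # dn/dt = K * (1 - alpha) * t^(-alpha)
    return t**ALPHA / (K * (1.0 - ALPHA))

t = 1000.0
print(round(cum_mtbf(t), 2), round(instantaneous_mtbf(t), 2))
```

A fixed test budget then translates into a maximum test time, which caps the achievable MTBF — the cost-constrained growth prediction the abstract refers to.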

3.
The paper formulates an optimal reliability design problem for a series system made of parallel redundant subsystems. The variables for optimization are the number of redundant units in each subsystem and the reliability of each unit, subject to a cost constraint. The time for which the system reliability exceeds a specified value is to be maximized. Alternatively, the cost can be minimized under constraints on the mission time and reliability. A solution method for the formulated problems is presented along with an example.

4.
The selection of an optimal checkpointing strategy has most often been considered in the transaction-processing environment, where systems are allowed unlimited repairs. In that environment an optimal strategy maximizes the time spent in the normal operating state and consequently the rate of transaction processing. This paper instead seeks a checkpoint strategy which maximizes the probability of critical-task completion on a system with limited repairs. Such systems can undergo failure and repair only until a repair time exceeds a specified threshold, at which point the system is deemed to have failed completely. For these systems, a model is derived which yields the probability of completing the critical task when each checkpoint operation has a fixed cost. The optimal number of checkpoints can increase as system reliability improves. The model is extended to include a constraint which enforces timely completion of the critical task.
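
The tradeoff the abstract describes — more checkpoints shorten each vulnerable work segment but add overhead — can be sketched numerically. This is a hedged simplification, not the paper's model: failures are Poisson, "limited repairs" is modeled as a hard cap of K tolerated failures, and all parameter values are invented.

```python
import math

# A critical task of length T is split into n segments by checkpoints;
# each checkpoint adds fixed overhead C.  With Poisson failures at rate
# LAM, one segment (work + checkpoint) finishes failure-free with
# prob s = exp(-LAM * (T/n + C)).  The system tolerates at most K
# failures in total before it is deemed to have failed completely.
T, C, LAM, K = 10.0, 0.2, 0.15, 5   # assumed parameters

def p_complete(n):
    """P(n segment-successes occur before the (K+1)-th failure):
    a negative-binomial tail with success prob s per attempt."""
    s = math.exp(-LAM * (T / n + C))
    return sum(math.comb(n - 1 + f, f) * s**n * (1 - s)**f
               for f in range(K + 1))

best_n = max(range(1, 41), key=p_complete)
print(best_n, round(p_complete(best_n), 4))
```

With these numbers the optimum is interior: too few checkpoints make each segment risky, too many burn the limited failure budget on overhead.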

5.
Quantitative decision-making procedures are proposed to help software project managers manage the testing stage of software project development effectively. The module and integration testing phases are thoroughly investigated. Decision procedures are suggested which maximize reliability and/or minimize a cost-benefit objective subject to a time and/or budget constraint. These procedures optimally allocate test time to the modules for module testing and select the optimal data mixture for integration testing. Testing of computer software is a major component of the software development effort, and an efficient allocation of computer time among modules during testing can appreciably improve reliability and shorten the testing stage. Using the decision models presented in this paper, a project manager can effectively allocate test time during module testing and select the best data mixture for integration testing. The models are based on software failure data collected during testing, and can be valuable not only for the project manager but also for the group responsible for generating the appropriate test data.
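
One simple way to allocate test time across modules — a hedged stand-in for the paper's procedures, with invented intensities and decay rates — is greedy marginal allocation under an exponential-decay failure-intensity assumption:

```python
import math

# Hedged sketch: assume module i's residual failure intensity after t
# hours of testing is a[i] * exp(-b[i] * t) (exponential SRGM decay).
# Greedy marginal allocation of a fixed test budget in DELTA-hour slices
# approximately minimizes the summed residual intensity.
a = [5.0, 3.0, 8.0]      # assumed initial failure intensities
b = [0.02, 0.05, 0.01]   # assumed decay rates per test hour
BUDGET, DELTA = 300.0, 1.0

alloc = [0.0] * len(a)
spent = 0.0
while spent < BUDGET:
    # Give the next slice to the module whose intensity drops the most.
    gains = [a[i] * math.exp(-b[i] * alloc[i]) * (1 - math.exp(-b[i] * DELTA))
             for i in range(len(a))]
    i = max(range(len(a)), key=gains.__getitem__)
    alloc[i] += DELTA
    spent += DELTA

residual = sum(a[i] * math.exp(-b[i] * alloc[i]) for i in range(len(a)))
print([int(x) for x in alloc], round(residual, 3))
```

Because each module's marginal gain is decreasing in its allocated time, the greedy schedule equalizes marginal returns and is near-optimal for this concave objective.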

6.
This paper presents a model that determines the optimal budget allocation strategy for the development of new technologies, for safety-critical systems, over multiple decision periods. The case of the development of a hypersonic passenger airplane is used as an illustration. The model takes into account both the probability of technology development success as a function of the allocated budget and the probability of operational performance of the final system. It assumes that the strategy is to consider (and possibly fund) several approaches to the development of each technology to maximize the probability of development success. The model thus decomposes the system's development process into multiple technology development modules (one for each technology needed), each involving a number of alternative projects. There is a tradeoff between development speed and operational reliability when the budget must be allocated among alternative technology projects with different probabilities of development success and operational reliability (e.g., an easily and quickly developed technology may have little robustness). The probabilities of development and operational failures are balanced by a risk analysis approach, which allows the decision maker to optimize the budget allocation among different projects in the development program at the beginning of each budget period. The model indicates that by considering reliability in the R&D management process, the decision maker can make better decisions, optimizing the balance between development time, cost, and robustness of safety-critical systems.

7.
Liberalization efforts are continuing in electricity markets all over the world. The change from monopolistic to free market structures leads to a significant increase in the trading of electrical energy. Since current power plant operation is driven by price aspects, the load flows in the networks are mainly determined by the economic situation and are no longer driven by the technical design of the networks. However, as the networks were built according to the former monopolistic market conditions and requirements, the desired load flows resulting from energy trading in the free market might cause problems when forced into the physical limitations of the actual networks, such as overloading, loop flows or stability/security limit violations. Also within the Austrian transmission grid, the effects of liberalization have led to bottlenecks caused by changes in the load flow due to the new power plant regime. To ensure safe grid operation, Verbund-Austrian Power Grid (APG), the Austrian transmission system operator, must apply cost-intensive measures for congestion management (CM). APG plans to build new 380 kV overhead lines to overcome the bottleneck situations, but the erection of new lines is delayed by extensive authorisation procedures and strong opposition by the public. In general, there are a number of approaches to overcome or mitigate load flow problems, including the installation of high voltage (HV) dry-type series reactors. The paper describes in some detail the design and construction of HV series reactors. It focuses on dry-type air core reactors, which have become the technology of choice for many applications because of their design features and their cost effectiveness. Based on the actual situation in the Austrian transmission grid, the relevance for load flow control in the APG network is analyzed.
The result shows that the present high loading of the 220 kV north-to-south lines within Austria can be reduced by installing HV series reactors, provided that operational limits (e.g., node voltages and voltage angles) are kept within a certain range. Especially in combination with other measures, such as phase-shifting transformers and special switching conditions within the grid, the installation of (switchable) series reactors becomes an attractive option for power flow control. The comparison of the cost of power plant re-dispatch with the investment cost of series reactors provides a good reason to consider installing series reactors if the intended extension of the 380 kV grid cannot be realized in the short term.

8.
From a quality-management perspective, computing system reliability is an important aspect of stochastic-flow networks. Conventional methods for computing network reliability assume that edges may fail while nodes remain perfect, and impose no flow constraints. This paper proposes a method for computing network reliability under both node-capacity constraints and an overall budget constraint on the network, which brings the model closer to practice.

9.
A method is presented for apportioning reliability growth to the subsystems that make up a system in order to achieve the required reliability at least cost. Reliability growth apportionment is handled as an s-expected cost minimization problem subject to the constraint of meeting a system reliability requirement. The problem is formulated in terms of Duane's reliability growth model and is solved using geometric programming. The method can be useful in the early stages of system design, to determine subsystem reliability growth that will allow a system reliability requirement to be met, and in the later stages of system design, when reliability has fallen short of the required goal and improvements are necessary.

10.
The system capacity of a deterministic flow network is the maximum flow from the source to the destination. In a single-commodity stochastic-flow network (branches all have several possible capacities, and may fail), the system reliability — the probability that the maximum flow is larger than or equal to a given demand — is an important performance index to measure the quality level of a network. In a two-commodity stochastic-flow network, different types of commodities are transmitted through the same network simultaneously and compete for the capacities. We concentrate on the reliability problem for such a network subject to a budget constraint. This paper first defines the system capacity as a pattern. We propose a performance index, the probability that the system capacity is less than or equal to a given pattern subject to the budget constraint, to evaluate the system performance. A simple algorithm based on minimal cuts is proposed to generate all maximal vectors meeting the demand and budget constraints. The performance index can then be computed in terms of all such maximal vectors.

11.
System reliability evaluation for flow networks is an important issue in quality management. This paper concentrates on a stochastic-flow network in which nodes as well as branches have several possible capacities and can fail. The possibility is evaluated that a given amount of messages can be transmitted through the stochastic-flow network under a budget constraint. Such a possibility, the system reliability, is a performance index for a stochastic-flow network. A minimal path — an ordered sequence of nodes and branches from the source to the sink without cycles — is used to assign the flow to each component (branch or node). A lower boundary point for (d, C) is a minimal capacity vector which enables the system to transmit d messages under the budget C. Based on minimal paths, an efficient algorithm is proposed to generate all lower boundary points for (d, C). The system reliability can then be calculated in terms of all lower boundary points for (d, C) by applying the inclusion-exclusion rule. Simulation shows that the implicit enumeration (step 1) of the proposed algorithm can be executed efficiently.
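
The final step the abstract mentions — inclusion-exclusion over lower boundary points — is mechanical once the points are known. The toy data below (three components, two assumed boundary points) are invented for illustration; only the inclusion-exclusion computation itself reflects the technique:

```python
from itertools import combinations

# Hedged toy example: each of 3 components takes capacity 0, 1, or 2
# with the probabilities below, and lbps holds the (assumed) lower
# boundary points for (d, C).
prob = [{0: 0.1, 1: 0.3, 2: 0.6}] * 3          # assumed distributions
lbps = [(2, 1, 0), (1, 2, 1)]                  # assumed boundary points

def p_ge(vec):
    """P(component capacities X >= vec, componentwise)."""
    r = 1.0
    for pv, lo in zip(prob, vec):
        r *= sum(p for cap, p in pv.items() if cap >= lo)
    return r

def reliability(points):
    """Inclusion-exclusion over the union of the upsets {X >= b}."""
    total = 0.0
    for k in range(1, len(points) + 1):
        for sub in combinations(points, k):
            join = tuple(max(c) for c in zip(*sub))  # componentwise max
            total += (-1) ** (k + 1) * p_ge(join)
    return total

print(round(reliability(lbps), 4))
```

Each intersection of upsets is again an upset anchored at the componentwise maximum, which is why the inner probability stays a simple product.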

12.
Step-stress accelerated degradation testing (SSADT) is a useful tool for assessing the lifetime distribution of highly reliable products (under a typical-use condition) when the available test items are very few. Recently, an optimal SSADT plan was proposed based on the assumption that the underlying degradation path follows a Wiener process. However, the degradation of many materials (especially in the case of fatigue data) may be more appropriately modeled by a gamma process, which exhibits a monotone increasing pattern. Hence, in practice, designing an efficient SSADT plan for a gamma degradation process is of great interest. In this paper, we first introduce the SSADT model when the degradation path follows a gamma process. Next, under the constraint that the total experimental cost does not exceed a pre-specified budget, the optimal settings such as sample size, measurement frequency, and termination time are obtained by minimizing the approximate variance of the estimated MTTF of the lifetime distribution of the product. Finally, an example is presented to illustrate the proposed method.
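
A gamma degradation path of the kind assumed here is easy to simulate: increments over disjoint intervals are independent gamma variates, so the path is monotone increasing (unlike a Wiener process, which can decrease). The parameters ALPHA, BETA, and the failure threshold W below are invented for illustration; the paper itself optimizes test plans analytically rather than by simulation.

```python
import random

# Stationary gamma process: the increment over dt is
# Gamma(shape = ALPHA * dt, scale = BETA).  Failure occurs when the
# degradation level first crosses the threshold W.
ALPHA, BETA, W = 0.8, 1.5, 30.0   # assumed parameters
random.seed(1)

def gamma_path(dt=1.0, horizon=100.0):
    """Simulate one degradation path; return its threshold-crossing time."""
    t, level = 0.0, 0.0
    while level < W and t < horizon:
        level += random.gammavariate(ALPHA * dt, BETA)
        t += dt
    return t

times = [gamma_path() for _ in range(2000)]
mttf = sum(times) / len(times)
print(round(mttf, 1))
```

Since the mean increment per unit time is ALPHA * BETA, the simulated MTTF should be close to W / (ALPHA * BETA), which gives a quick sanity check on the model.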

13.
Optimum software release policy with random life cycle
A software release problem based on four software reliability growth models (SRGMs) with random life-cycle length is studied. The software system is tested until time T and then released (sold) to the user at a price. The price of the software system and three cost components are considered, and average total profit is used as the criterion. The optimal release times are shown to be finite and unique; hence the optimal solutions can be obtained numerically, for example by a bisection method. A numerical example indicates that the optimal release time increases as (1) the error rate in each model decreases, and (2) the difference between the error-fixing cost during the test phase and that during the operational phase increases. The case of unknown model parameters is considered only for the Jelinski-Moranda model, because a Bayes model is not available for the other SRGMs. The release decision depends on testing time, but other stopping rules, for example based on the number of corrected errors, can be considered.
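
The bisection idea mentioned above can be illustrated with the exponential (Goel-Okumoto) mean value function m(T) = a(1 - exp(-bT)) and an assumed three-part cost structure — per-error fixing costs in test and in the field plus a per-hour testing cost. All parameter values are invented; the paper studies four SRGMs and a profit criterion, so this is only a sketch of the numerical mechanics:

```python
import math

# Assumed cost model: C(T) = C1*m(T) + C2*(A - m(T)) + C3*T, with
# m(T) = A*(1 - exp(-B*T)).  With C2 > C1 its derivative has a unique
# zero, which bisection finds.
A, B = 100.0, 0.05          # total errors, detection rate -- assumed
C1, C2, C3 = 1.0, 5.0, 0.5  # fix-in-test, fix-in-field, per-hour cost

def d_cost(t):
    """Derivative of total cost C(T); its unique zero is T*."""
    return (C1 - C2) * A * B * math.exp(-B * t) + C3

def bisect(lo=0.0, hi=500.0, tol=1e-8):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if d_cost(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

t_star = bisect()
closed = math.log(A * B * (C2 - C1) / C3) / B   # closed form, same root
print(round(t_star, 2), round(closed, 2))
```

For this particular cost model the root also has a closed form, which makes it a convenient check on the bisection.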

14.
An important factor in the look-down capability of an airborne phased-array radar is the ultra-low sidelobe performance of its antenna. Traditional reliability design for airborne phased-array radar antennas compresses the redundancy of a k-out-of-n voting model; this approach ignores how the positions of failed T/R modules affect the antenna's ultra-low sidelobe performance. Using mathematical induction, a reliability model for the airborne phased-array antenna array is established under constraints on the failure distribution of the transmit/receive modules. The correctness of the model has been verified by simulation, and a comparative example illustrates the engineering risk that the traditional reliability design method may introduce. The model can be used to guide the reliability design of airborne phased-array antennas.

15.
This paper presents optimization models for selecting a subset of software libraries, viz., collections of programs residing on floppy disks or compact disks, available on the market. Each library contains a variety of programs whose reliabilities are assumed to be known. The objective is to maximize the reliability of the computer system subject to a budget constraint on the total cost of the libraries selected. The paper includes six models, each of which applies to a different software structure and set of assumptions. A detailed branch-and-bound algorithm for solving one of the six models is described; it contains a simple greedy procedure for generating an initial solution.

16.
Optimal design for step-stress accelerated degradation tests
Today, many products are designed to function for a long period of time before they fail. For such highly reliable products, collecting accelerated degradation test (ADT) data can provide useful reliability information. However, implementing an ADT usually requires a moderate sample size, so ADT is not applicable for assessing the lifetime distribution of a newly developed or very expensive product with only a few available test units on hand. Recently, a step-stress ADT (SSADT) has been suggested in the literature to overcome this difficulty. However, in designing an efficient SSADT experiment, the issue of how to choose the optimal settings of variables such as sample size, measurement frequency, and termination time had not been addressed. In this study, we first use a stochastic diffusion process to model a typical SSADT problem. Next, under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal settings of these variables are obtained by minimizing the asymptotic variance of the estimated 100p-th percentile of the product's lifetime distribution. Finally, an example is used to illustrate the proposed method.

17.
Burn-in optimization under reliability and capacity restrictions
Burn-in is a method to screen out early failures of electronic components. Burn-in problems that minimize the system life-cycle cost have been investigated reasonably well in many applications, but physical constraints during the decision process have not been considered. The authors search for the optimal burn-in time and develop a cost-optimization model. Two types of constraint must be satisfied during decision making: (1) a minimum system-reliability requirement, and (2) the maximum capacity available for burn-in. Guidelines are suggested for making burn-in decisions, and an example illustrates a practical application. The model generalizes burn-in problems that were oversimplified in a previous study.
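
The reliability-constrained side of such a problem can be sketched with a standard mixed-population model (a hedged stand-in, not the paper's model — the weak/strong mixture, all rates, and all costs below are assumptions):

```python
import math

# A unit is "weak" w.p. PI with failure rate LW, else "strong" with
# rate LS (LW >> LS).  Burning in for time b costs CB per hour plus CF
# per burn-in failure; a surviving unit must still meet reliability
# R_MIN over a mission of length M.
PI, LW, LS = 0.1, 0.5, 0.001
CB, CF, R_MIN, M = 0.01, 2.0, 0.95, 50.0

def survive(b):
    return PI * math.exp(-LW * b) + (1 - PI) * math.exp(-LS * b)

def mission_rel(b):
    """P(survives the mission | survived burn-in) -- Bayes over the mix."""
    return (PI * math.exp(-LW * (b + M))
            + (1 - PI) * math.exp(-LS * (b + M))) / survive(b)

def cost(b):
    return CB * b + CF * (1.0 - survive(b))

# Cheapest burn-in time meeting the reliability floor, on a 0.1-h grid:
times = [i * 0.1 for i in range(0, 501)]
feasible = [b for b in times if mission_rel(b) >= R_MIN]
b_star = min(feasible, key=cost)
print(round(b_star, 1), round(mission_rel(b_star), 4), round(cost(b_star), 3))
```

Because cost is increasing in burn-in time while mission reliability rises as weak units are screened out, the constrained optimum is the shortest feasible burn-in; a capacity restriction would simply add an upper bound on b.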

18.
The nonhomogeneous error-detection-rate model has been used extensively in software reliability modelling. An important management responsibility is to decide when to release the software so as to achieve maximum cost effectiveness. It is well known that the effort to correct an error increases, rather heavily, from the initial testing phase to the final testing phase and then to the operational phase. In this paper, a method is presented to systematically determine this optimum release instant. The fact that some faults can be regenerated during the correction process is also considered in the modelling; this has been ignored in the previous literature. Partitioning the testing phase into initial and final phases is also considered desirable, as the effort per error correction differs significantly between these two phases. An example illustrates the entire procedure for various values of the total number of errors and the trends of cost and release time.

19.
Network topological optimization with a reliability constraint is considered. The objective is to find the topological layout of links, at minimal cost, under the constraint that the network reliability is not less than a given level of system reliability. A decomposition method, based on branch and bound, is used for solving the problem. In order to speed up the procedure, an upper bound on system reliability, in terms of node degrees, is applied. A numerical example illustrates the effectiveness of the method.

20.
The determination of the reliability level at which to manufacture the components of a coherent structure, so that the system reliability h(p) is at a certain level and the overall system cost is minimized, is considered. The cost of utilizing component c_i at reliability level p_i, C_i(p_i), is assumed to be a convex increasing function of p_i with a continuous first derivative and C_i'(q_i) > 0, where q_i is the lower bound on the reliability level for component c_i. Since for most coherent structures the constraint set defines a nonconvex set, any mathematical programming procedure blindly applied to the program converges to a local optimum rather than a global optimum. However, in certain cases the global optimum can be found for series-parallel (SP) systems. The key to the solution is to optimize each module separately and then to substitute a component for each module, where the cost function for the component is the value of the objective function for the module. As long as the cost function for each module maintains the convexity property with ln R or ln(1 - R) as the argument (R being the reliability of the module), the optimization procedure can continue and a global optimum can be found.
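
The module-substitution idea can be sketched numerically. This hedged toy example is not the paper's method — the unit cost function, the grid search, and all parameter values are assumptions — but it shows each parallel module being optimized on its own, with the module's optimal cost then standing in for a single component of the series system:

```python
import math

# Assumed convex increasing unit cost of reliability p.
A, K = 1.0, 0.1

def unit_cost(p):
    return A * math.exp(K / (1.0 - p))

def module_cost(r_target, max_units=6):
    """Cheapest parallel module with reliability >= r_target,
    grid-searching the unit count n and the unit reliability p."""
    best = math.inf
    for n in range(1, max_units + 1):
        # Need 1 - (1-p)^n >= r_target, i.e. p >= 1 - (1-r)^(1/n).
        p_min = 1.0 - (1.0 - r_target) ** (1.0 / n)
        for step in range(200):
            p = p_min + (0.999 - p_min) * step / 199
            if 1.0 - (1.0 - p) ** n >= r_target:
                best = min(best, n * unit_cost(p))
    return best

# Series system of two modules, each required to reach reliability 0.99;
# each module's optimal cost plays the role of a substitute component.
total = module_cost(0.99) + module_cost(0.99)
print(round(total, 3))
```

With this cost function, redundancy pays: a module of three cheap moderate-reliability units beats a single very-high-reliability unit by orders of magnitude in cost.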


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号