Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly, so for reliability analysis and system optimization it is meaningful to treat component reliability estimates as random variables with associated estimation uncertainty. In this research, the system design process is formulated as a multiple-objective optimization problem: maximize an estimate of system reliability while minimizing the variance of that estimate. The two objectives are combined by penalizing the variance of prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with and without redundancy. For many design problems, multiple functionally equivalent software versions exhibit failure correlation even when they have been developed independently. The correlation may result from faults in the software specification, faults in the voting algorithm, and/or related faults shared by any two software versions. Our approach accounts for this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied to solve the optimization problem, and reasonable and interesting results are obtained and discussed.
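The penalized combination of the two objectives can be sketched as follows. The series-system structure, the first-order (delta-method) variance approximation, and the penalty weight are illustrative assumptions of this sketch, not the paper's exact formulation:

```python
import math

def series_estimate(comps):
    """comps: list of (r_hat, var) pairs for independent components.
    Returns (R_hat, Var_hat) for a series system: product rule for the
    mean, first-order delta-method approximation for the variance."""
    r_sys = math.prod(r for r, _ in comps)
    var_sys = r_sys**2 * sum(v / r**2 for r, v in comps)
    return r_sys, var_sys

def penalized_fitness(comps, weight=10.0):
    """Single scalar objective: maximize the reliability estimate while
    penalizing its estimation variance (weight is a hypothetical
    penalty factor a GA would use as its fitness)."""
    r, v = series_estimate(comps)
    return r - weight * v
```

A GA would evaluate `penalized_fitness` for each candidate design, so that two designs with equal expected reliability are ranked by the uncertainty of their estimates.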

2.
The paper introduces a new model of fault level coverage for multi-state systems in which the effectiveness of recovery mechanisms depends on the coexistence of multiple faults in related elements. Examples of this effect can be found in computing systems, electrical power distribution networks, pipelines carrying dangerous materials, etc. For evaluating reliability and performance indices of multi-state systems with imperfect multi-fault coverage, a modification of the generalized reliability block diagram (RBD) method is suggested. This method, based on a universal generating function technique, allows the performance distribution of a complex multi-state series–parallel system with multi-fault coverage to be obtained using a straightforward recursive procedure. Illustrative examples are presented.
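A minimal sketch of the universal generating function recursion for series–parallel multi-state systems (the two-state generator and line values below are made-up illustrative data, and coverage effects are omitted):

```python
from collections import defaultdict
from itertools import product

def compose(u1, u2, op):
    """Compose two u-functions (dict: performance level -> probability)
    with a structure operator: sum for parallel elements (capacities
    add), min for series elements (bottleneck)."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

# Two parallel generators (0 or 5 MW) feeding one line (0 or 8 MW):
gen = {0: 0.1, 5: 0.9}
line = {0: 0.05, 8: 0.95}
system = compose(compose(gen, gen, lambda a, b: a + b), line, min)

# Probability the system delivers at least a demand of 8 units:
avail = sum(p for g, p in system.items() if g >= 8)
```

The same `compose` call is applied recursively over an arbitrary series–parallel structure, which is what makes the RBD procedure straightforward.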

3.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y < X] when stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when both strength and stress have an odd or an even set size, and the other two when the strength has an odd set size and the stress an even one, or vice versa. The performances of the suggested estimators are compared with their simple-random-sampling competitors via a simulation study. The simulation study revealed that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their simple-random-sampling competitors. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
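A quick way to sanity-check such estimators is direct Monte Carlo simulation of R = P(Y < X). The sketch below assumes both variables follow the exponentiated Pareto distribution F(x) = (1 - (1+x)^(-λ))^θ and uses inverse-transform sampling; the parameter values are illustrative:

```python
import random

def rand_ep(theta, lam, rng):
    """Inverse-transform sample from the exponentiated Pareto
    distribution F(x) = (1 - (1+x)**(-lam))**theta, x > 0."""
    u = rng.random()
    return (1.0 - u ** (1.0 / theta)) ** (-1.0 / lam) - 1.0

def mc_reliability(theta_x, theta_y, lam, n=200_000, seed=1):
    """Monte Carlo estimate of R = P(Y < X) for independent strength
    X ~ EP(theta_x, lam) and stress Y ~ EP(theta_y, lam)."""
    rng = random.Random(seed)
    hits = sum(rand_ep(theta_y, lam, rng) < rand_ep(theta_x, lam, rng)
               for _ in range(n))
    return hits / n
```

When the two distributions share the same λ, a known closed form is R = θ_X / (θ_X + θ_Y), so θ_X = 2, θ_Y = 1 should give an estimate near 2/3, which makes a convenient check on the sampler.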

4.
A method and an algorithm that account for the results of measurements of the parameters of information measuring systems (IMS), and their errors, in the course of calculating reliability indices are considered. The method utilizes the totality of estimators of the IMS parameters, in particular forecasted estimators of parameters or estimators obtained from statistical dependencies among the IMS parameters. Translated from Izmeritel'naya Tekhnika, No. 3, pp. 16–17, March, 1995.

5.
For electronic equipment and systems, a correspondence of predominant faults is observed between the electrical test phases at board and system level prior to use and the subsequent application of the system. Relationships are expected based on the common causes of these faults. Such faults might be those that escaped detection during electrical testing, faults that induce secondary latent defects in parts, or original latent defects incorporated in the installed components. Causes and effects are discussed. Measures have been taken over several years to improve the manufacturing quality of electronic equipment in light of these aspects. It is shown, by practical experience with an electronic communication system, that improving manufacturing quality by a factor of 5 increased system reliability by about the same degree. This noticeable effect of the manufacturing quality of printed circuit board assemblies on system reliability and on the failure rates of installed components is not considered by the usual reliability prediction models. The observed component failure rates are low compared to the values of prediction models, even for those component types contributing to focal points of failure at the achieved level of reliability.

6.
The voting system studied consists of n voting units, each either providing a binary decision (0 or 1) or abstaining from voting. The system output is 1 if the cumulative weight of all 1-opting units is at least a pre-specified fraction τ of the cumulative weight of all non-abstaining units; otherwise, the system output is 0. In this paper, we study the effect of the limited availability of the voting units on the reliability of the entire voting system. Two types of systems are considered. In a system of type 1, the absence of a unit's output (unit unavailability) is interpreted by the system as abstention from voting. In a system of type 2, the unavailable state of a voting unit can be distinguished from its abstention, and the system parameters can be adjusted to optimize performance for each combination of available units. There are two ways to improve the reliability of a weighted voting system consisting of units with given output probability distributions: optimization of the system parameters (unit weights and the threshold factor) and enhancement of unit availability (for example, by choosing a proper maintenance policy). This paper shows a method of incorporating information about unit availability into a procedure for determining the optimal system parameters. It also presents a method for determining indices that measure the importance of voting unit availability for both types of systems. These indices indicate the voting units for which availability-enhancement efforts are the most beneficial from the point of view of improving the reliability of the entire system. The approach is based on a universal generating function technique and the optimization procedure presented in [5]. Examples are presented.
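The type-1 decision rule described above can be evaluated by direct enumeration for small n. The weights and probabilities below are illustrative, and this sketch treats the all-abstain case as system output 0:

```python
from itertools import product

def voting_reliability(units, tau):
    """units: list of (weight, p1, p0, pa) tuples, the probabilities
    that a unit votes 1, votes 0, or abstains (p1 + p0 + pa = 1).
    Returns the probability that the system output is 1, i.e. that the
    weight of 1-voting units is at least tau times the weight of all
    non-abstaining units."""
    total = 0.0
    for states in product((1, 0, None), repeat=len(units)):
        prob, w1, wv = 1.0, 0.0, 0.0
        for (w, p1, p0, pa), s in zip(units, states):
            prob *= p1 if s == 1 else p0 if s == 0 else pa
            if s is not None:       # unit did not abstain
                wv += w
                if s == 1:
                    w1 += w
        if wv > 0 and w1 >= tau * wv:
            total += prob
    return total
```

Availability enters naturally here: for a type-1 system, a unit's unavailability probability is simply folded into its abstention probability pa before calling the function.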

7.
Software reliability growth models, which are based on nonhomogeneous Poisson processes, are widely adopted tools for describing the stochastic failure behavior and measuring the reliability growth of software systems. The faults in a system, which eventually cause the failures, are usually connected with each other in complicated ways. Considering a group of networked faults, we propose a new model to examine the reliability of software systems and assess the model's performance on real-world data sets. Our numerical studies show that the new model, which captures networking effects among faults, fits the failure data well. We also formally study the optimal software release policy using multi-attribute utility theory (MAUT), considering both a reliability attribute and a cost attribute. We find that, if the networking effects among different layers of faults were ignored by the software testing team, the utility-maximizing time to release the software package to the market would be much later. A sensitivity analysis is also presented.
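For intuition, a minimal sketch using the classical Goel-Okumoto NHPP (a standard member of this model family, not the networked-fault model proposed in the paper) shows how a mean value function yields the conditional reliability used in release-time decisions; the parameter values are illustrative:

```python
import math

def m(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative number of
    failures observed by time t, where a is the total expected number
    of faults and b is the per-fault detection rate."""
    return a * (1.0 - math.exp(-b * t))

def conditional_reliability(x, t, a, b):
    """Probability of no failure in (t, t+x] given testing up to t:
    R(x | t) = exp(-(m(t+x) - m(t)))."""
    return math.exp(-(m(t + x, a, b) - m(t, a, b)))
```

A release policy then amounts to finding the earliest t at which R(x | t) exceeds a target; since m(t) flattens as faults are removed, R(x | t) increases with testing time.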

8.
The design, evaluation and implementation of a busbar differential protection relay that operates in conjunction with a current transformer (CT) compensating algorithm are described. Prior to saturation, the secondary current of a CT is not compensated. The compensating algorithm detects the start of the first saturation on the basis of the third-difference function of the current and estimates the core flux at the first saturation start by inserting the negative value of the third-difference function of the current into the magnetisation curve of the CT. Thereafter, it calculates the core flux and then the corresponding magnetising current in conjunction with the magnetisation curve. The calculated magnetising current is added to the measured secondary current to obtain the correct secondary current. The algorithm can estimate the correct current irrespective of the level of the remanent flux. In the proposed busbar protection scheme, a current differential relay with a single-slope operating characteristic is used on the basis of the compensated current of the saturated CT. Test results indicate that the relay shows satisfactory performance for various external and internal faults with CT saturation, particularly in the case of a fault that progresses from a feeder fault to a busbar fault. The algorithm is implemented in a prototype relay based on a digital signal processor. The relay achieves greater stability on external faults, enhanced sensitivity on internal faults and fast operation on internal faults with CT saturation.
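The third-difference function mentioned above is a simple finite-difference filter over the sampled current; a sketch (the sample values are illustrative, and the saturation-detection threshold logic is omitted):

```python
def third_difference(samples):
    """Third difference of a sampled current waveform:
    d3[k] = i[k] - 3*i[k-1] + 3*i[k-2] - i[k-3].
    It is near zero while the waveform is smooth and spikes at the
    abrupt slope change that marks the start of CT saturation."""
    return [samples[k] - 3 * samples[k - 1] + 3 * samples[k - 2]
            - samples[k - 3]
            for k in range(3, len(samples))]
```

On a smooth (locally polynomial) segment the third difference stays near zero, which is why comparing it against a small threshold can flag the first saturation instant.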

9.
Usually, engineers try to achieve the required reliability level at minimal cost. The problem of total investment cost minimization, subject to reliability constraints, is well known as the reliability optimization problem. When applied to multi-state systems (MSS), the system has many performance levels, and reliability is considered a measure of the ability of the system to meet the demand (required performance). In this case, the outage effect differs essentially for units with different performance rates. Therefore, the performance of system components, as well as the demand, should be taken into account. In this paper, we present a technique for solving a family of MSS reliability optimization problems, such as structure optimization, optimal expansion, maintenance optimization and optimal multistage modernization. This technique combines a universal generating function (UGF) method, used for fast reliability estimation of MSS, with a genetic algorithm (GA) used as the optimization engine. The UGF method makes it possible to estimate different MSS reliability indices relatively quickly for series-parallel and bridge structures, and it can be applied to MSS whose performance measures have different physical natures. The GA is a robust, universal optimization tool that uses only estimates of solution quality to determine the direction of search. Copyright © 2001 John Wiley & Sons, Ltd.

10.
An innovative approach is presented for the reliability analysis of aging multistate systems that considers the subsystems and their components' dependency. A reliability function is determined for an aging series system with the component dependency following the local load‐sharing rule, and a reliability function is determined for an aging “m out of n” system with the component dependency following the equal load‐sharing rule. Linking the results of those load‐sharing models, a mixed‐dependency model for multistate “m out of l”‐series systems is constructed by assuming the dependence between subsystems connected in series under the local load‐sharing rule and the dependence between their components under the equal load‐sharing rule. As a special case, the reliability of this system, modeled using piecewise exponential reliability functions, is considered, and the results are applied to characterize shipyard rope elevator reliability. Finally, the maintenance of this elevator as a repairable multistate system is analyzed with the time of renovation ignored.

11.
Markov models are an established part of current system reliability and availability analysis. They are extensively used in various applications, including, in particular, electrical power supply systems. One of their advantages is that they considerably simplify availability evaluation, so that the availability of very large and complex systems can be computed. It is generally assumed, with some justification, that the results obtained from such Markov reliability models are relatively robust. It has, however, been known for some time that practical time-to-failure distributions are frequently non-exponential, with particular attention given in much reliability work to the Weibull family. Moreover, additional doubt has recently been cast on the validity of the Markov approach, both because of the work of Professor Kline and others on the non-exponentiality of practical repair time distributions, and because of the advantages, in terms of modelling visibility, of the alternative simulation approach. In this paper we employ results on the ability of k-out-of-n systems to span the coherent set to investigate the robustness of Markov reliability models, based on a simulation investigation of coherent systems of up to 10 identical components. We treat the case where adequate repair facilities are available for all components. The effects of Weibull departures from exponentiality on the conventional transient and steady-state measures are considered. In general, the Markov models are found to be relatively robust, with alterations to failure distributions being more important than alterations to repair distributions, and decreasing hazard rates more critical than increasing hazard rates. Of the measures studied, the mean time to failure is the most sensitive to variations in distributional shape.
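The kind of simulation investigation described can be sketched for the simplest no-repair case: the lifetime of a k-out-of-n:G system with i.i.d. components is the (n-k+1)-th order statistic of the component lifetimes, and Weibull shape 1 recovers the exponential (Markov) case. The parameters below are illustrative:

```python
import random

def system_ttf(n, k, shape, scale, rng):
    """Lifetime of a k-out-of-n:G system with i.i.d. Weibull component
    lifetimes and no repair: the system fails at the (n-k+1)-th
    component failure, i.e. the (n-k+1)-th smallest lifetime."""
    lives = sorted(rng.weibullvariate(scale, shape) for _ in range(n))
    return lives[n - k]

def mttf(n, k, shape, scale, runs=20_000, seed=7):
    """Monte Carlo mean time to system failure."""
    rng = random.Random(seed)
    return sum(system_ttf(n, k, shape, scale, rng)
               for _ in range(runs)) / runs
```

For a 2-out-of-3 system of exponential (shape 1, scale 1) components, the exact MTTF is 1/3 + 1/2 = 5/6, which gives a useful check; re-running with shape < 1 at the same scale then shows how a decreasing hazard rate shifts the measure away from the Markov prediction.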

12.
Binary capacitated two-terminal reliability at demand level d (2TRd) is defined as the probability that the network capacity, generated by binary capacitated components, between specified source and sink nodes is greater than or equal to a demand of d units. For the components that comprise these networks, reliability estimates are usually obtained from some source of testing. For these estimates, and depending on the type of testing, there is an associated uncertainty that can significantly affect the overall estimation of 2TRd. That is, an accurate estimate of 2TRd is highly dependent on the uncertainty associated with the reliability of the network components. Current methods for the estimation of network reliability and its associated uncertainty are restricted to the case where the network follows a series-parallel architecture and the components are binary and non-capacitated. For other capacitated network designs, an estimate of 2TRd can only be approximated for specific scenarios. This paper presents a bounding approach for 2TRd by explaining how component reliability and its associated uncertainty impact estimates at the network level. The proposed method is based on a structured approach that generates an α-level confidence interval (CI) for binary capacitated two-terminal network reliability. Simulation results on different test networks show that the proposed methods can be used to develop very accurate bounds on two-terminal network reliability.

13.
This study proposes and applies an evolutionary approach for multiobjective reconfiguration in electrical power distribution networks. In this model, two types of power quality indicators are minimised: (i) the power system's losses and (ii) reliability indices, of which four types are considered. A microgenetic algorithm (μGA) is used to handle the reconfiguration problem as a multiobjective optimisation problem with competing and non-commensurable objectives. In this context, experiments have been conducted on two standard test systems and a real network. These problems characterise typical distribution systems, taking into consideration several factors associated with the practical operation of medium-voltage electrical power networks. The results show the ability of the proposed approach to generate well-distributed Pareto-optimal solutions to the multiobjective reconfiguration problem. In the systems adopted for assessment purposes, the proposed approach was able to find the entire Pareto front. Furthermore, better performance indices were found in comparison to the Pareto envelope-based selection algorithm 2 (PESA 2), another well-known multiobjective evolutionary algorithm from the specialised literature. From a practical point of view, the results established that, in general, a compact trade-off region exists between the power losses and the reliability indices. This means that the proposed approach can recommend to the decision maker a small set of possible solutions from which to select the most suitable radial topology.

14.
A model is developed to represent computer memory module reliability as a function of memory array reliability under a fault tolerant design. The fault tolerance feature of the array actually results from a revision in the use of the array so that with respect to some failure modes, the array becomes a K out of N rather than a series system. The model is used to determine array reliability under fault tolerance. The ratio of module reliability under fault tolerance to that without this feature is used as a measure of the benefits of revising array use. A key feature of the analysis is the fact that not all faults can be tolerated. The elemental memory devices examined conform to a decreasing Weibull hazard model. Consequently, evaluation of the general model for the K out of N system realized must be done numerically. However, for the special case in which K=N-1, a closed form expression for the performance measure is obtained. This special case occurs for the application of interest and it is shown that the performance measure always exceeds one and depends directly upon the proportion of faults that can be tolerated. Thus the value of fault tolerance is shown to depend upon the extent to which the array will tolerate faults. This provides a basis for deciding whether or not fault tolerance should be implemented.
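A sketch of the underlying K-out-of-N arithmetic. The ratio expression for the K = N-1 case below, with α the assumed fraction of tolerable single-device faults, is an illustrative reading of the closed-form result, not the paper's exact model:

```python
from math import comb

def k_out_of_n(R, n, k):
    """Probability that at least k of n i.i.d. devices, each with
    reliability R, are working."""
    return sum(comb(n, i) * R**i * (1 - R)**(n - i)
               for i in range(k, n + 1))

def tolerance_ratio(R, n, alpha):
    """Illustrative K = N-1 case: ratio of array reliability when a
    fraction alpha of single-device faults can be tolerated to the
    series (no-tolerance) reliability R**n. Algebraically this is
    1 + alpha*n*(1-R)/R, which exceeds 1 whenever alpha > 0."""
    fault_tolerant = R**n + alpha * n * R**(n - 1) * (1 - R)
    return fault_tolerant / R**n
```

The simplified ratio makes the abstract's qualitative claim visible: the benefit of fault tolerance scales directly with the proportion α of faults the array can actually tolerate.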

15.
Shuffle-exchange networks (SENs) have been widely considered practical interconnection systems owing to the small size of their switching elements (SEs) and their uncomplicated configuration. The SEN belongs to a large class of topologically equivalent multistage interconnection networks (MINs) that includes the omega, indirect binary n-cube, baseline and generalized cube networks. In this paper, SENs with additional stages, which provide more redundant paths, are analyzed. A common network topology with a 2×2 basic building block and its variants with extra stages are investigated. As an illustration, three types of SEN are compared: the SEN, the SEN with one additional stage (SEN+), and the SEN with two additional stages (SEN+2). Finally, three measures of reliability (terminal, broadcast, and network reliability) are analyzed for the three SEN systems.
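Under the common simplifying assumption that each switching element works independently with reliability r, terminal reliability for the SEN (a unique source-to-destination path) and the SEN+ can be compared as below. The SEN+ structure assumed here (source and destination SEs shared, middle stages forming two disjoint subpaths) is a simplification used in this sketch:

```python
def sen_terminal(r, n_stages):
    """Terminal reliability of a plain SEN: the unique path traverses
    one switching element per stage, each working with probability r."""
    return r ** n_stages

def sen_plus_terminal(r, n_stages):
    """Terminal reliability of a SEN+ (one extra stage), assuming the
    source and destination SEs are in series with two disjoint middle
    subpaths of length n_stages - 1 each."""
    mid = 1.0 - (1.0 - r ** (n_stages - 1)) ** 2
    return r * r * mid
```

For an 8×8 network (n_stages = 3) the extra stage raises terminal reliability for any 0 < r < 1, which is the qualitative effect the redundant-path analysis quantifies.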

16.
Life data from systems of components are often analysed to estimate the reliability of the individual components. These estimates are useful since they reflect the reliability of the components under actual operating conditions. However, owing to the cost or time involved with failure analysis, the exact component causing system failure may be unknown or ‘masked’. That is, the cause may only be isolated to some subset of the system's components. We present an iterative approach for obtaining component reliability estimates from such data for series systems. The approach is analogous to traditional probability plotting. That is, it involves the fitting of a parametric reliability function to a set of nonparametric reliability estimates (plotting points). We present a numerical example assuming Weibull component life distributions and a two-component series system. In this example we find estimates with only 4 per cent of the computation time required to find comparable MLEs.

17.
This paper focuses on a comparative analysis of the reliability associated with the evolution of corrosion between normal and high-strength pipe material. The use of high strength steel grades such as X100 and X120 for high pressure gas pipeline in the arctic is currently being considered. To achieve this objective, a time-dependent reliability analysis using variable Y/T ratios in a multiaxial finite strain analysis of thin-walled pipeline is performed. This analysis allows for the consideration of longitudinal grooves and the presence of companion axial tension and bending loads. Limit states models are developed based on suitable strain hardening models for the ultimate behavior of corroded medium and high strength pipeline material. In an application, the evolution of corrosion is modeled in pipelines of different grades that have been subjected to an internal corrosion inspection after a specified time which allows for a Bayesian updating of long-term corrosion estimates and, hence, the derivation of annual probabilities of failure as a function of time. The effect of grade and Y/T is clearly demonstrated.

18.
Designing products which require maintenance always involves compromises between reliability and maintainability. Both scheduled and preventive maintenance (PM) should be considered in the design phases of a product so that the design can include features to ease the maintenance task. In addition, many design decisions based on Failure Modes and Effects Criticality Analysis (FMECA), Pareto criticality rankings, etc., could and should be strongly influenced by the potential for using preventive maintenance. A component that has a major negative impact on system reliability (because of its life distribution) could become much less consequential if appropriate PM policies are implemented. This paper describes the use of an easy-to-implement analysis procedure to assist a designer or systems analyst in making the reliability/maintainability tradeoff.

19.
Modeling of system lifetimes becomes more complicated when external events can cause the simultaneous failure of two or more system components. Models that ignore these common cause failures lead to methods of analysis that overestimate system reliability. Typical data consist of observed frequencies in which i out of m (identical) components in a system failed simultaneously, i = 1,…, m. Because this attribute data is inherently dependent on the number of components in the system, procedures for interpretation of data from different groups with more or fewer components than the system under study are not straightforward. This is a recurrent problem in reliability applications in which component configurations change from one system to the next. For instance, in the analysis of a large power-supply system that has three stand-by diesel generators in case of power loss, statistical tests and estimates of system reliability cannot be derived easily from data pertaining to different plants for which only one or two diesel generators were used to reinforce the main power source. This article presents, discusses, and analyzes methods to use generic attribute reliability data efficiently for systems of varying size.

20.
Adequate operating reserve is required in an electric power system in order to maintain a desired level of reliability throughout a given period of time. Interruptible load can be considered as a part of the system operating reserve if it is required. The inclusion of interruptible load in the assessment of unit commitment in interconnected systems is illustrated in this paper using a well-being framework. A technique is presented to determine the well-being indices of both isolated and interconnected systems with the inclusion of interruptible load. The impacts on the required number of committed units and the well-being indices of the amount of interruptible load and the corresponding interruption time are examined in this paper by application to a hypothetical system and to the IEEE-RTS. © 1998 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)