Similar Documents
20 similar documents retrieved.
1.
The computation of two-terminal network reliability is a classical reliability problem: one seeks the probability that two specific nodes can communicate. This paper presents a holistic algorithm for the analysis of general networks that follow a two-terminal rationale. The algorithm is based on a set replacement approach and an element inheritance strategy that together obtain the minimal cut sets associated with a given network. Most methods available for computing two-terminal reliability rest on assumptions about the behavior of the network: some assume that components can be in only one of two states, completely failed or perfectly functioning; others assume that nodes are perfectly reliable, and must therefore be complemented or transformed to account for node failure; and the remainder assume that minimal cut sets can be readily computed before more complex network and component behavior is analyzed. The algorithm presented here differs significantly from previous approaches in that it relies on a predecessor matrix and an element substitution technique that allow the exact computation of minimal cut sets and the immediate inclusion of node failure without any changes to the pseudo-code. Several case networks are used to validate and illustrate the algorithm.
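As a rough, hedged illustration of how minimal cut sets feed into a two-terminal reliability figure (this is not the predecessor-matrix algorithm of the paper), the following Python sketch computes the exact unreliability by inclusion-exclusion over known minimal cut sets, assuming independent binary components with known failure probabilities; the component names and probabilities are illustrative.

```python
from itertools import combinations

def unreliability_from_cuts(cut_sets, fail_prob):
    """Exact two-terminal unreliability by inclusion-exclusion over minimal cut sets.

    cut_sets  : list of sets of component labels (each a minimal cut set)
    fail_prob : dict mapping component label -> independent failure probability
    """
    n = len(cut_sets)
    total = 0.0
    # P(union of cut events): a cut event occurs when every component in that cut fails.
    for k in range(1, n + 1):
        for combo in combinations(range(n), k):
            union = set().union(*(cut_sets[i] for i in combo))
            term = 1.0
            for c in union:
                term *= fail_prob[c]
            total += (-1) ** (k + 1) * term
    return total

# Example: bridge-like network with three minimal cut sets (hypothetical data).
cuts = [{"a", "b"}, {"d", "e"}, {"a", "c", "e"}]
p_fail = {c: 0.1 for c in "abcde"}
print("two-terminal reliability:", 1.0 - unreliability_from_cuts(cuts, p_fail))
```

Inclusion-exclusion grows exponentially in the number of cut sets, which is one reason cut-based bounds and simulation (see later entries) are common in practice.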

2.
The two-terminal reliability problem assumes that a network and its elements are either in a working or a failed state. However, many practical networks are built of elements that may operate in more than two states; that is, elements may be degraded but still functional. Multistate two-terminal reliability at demand level d (M2TRd) can be defined as the probability that the system capacity generated by multistate components is greater than or equal to a demand of d units. This paper presents a fully multistate-based algorithm that obtains the multistate equivalent of binary path sets, namely Multistate Minimal Path Vectors (MMPVs), for the M2TRd problem. The algorithm mimics natural organisms in the sense that a select number of arcs inherit information from other specific arcs contained in a special set called the “primary set.” The algorithm is tested and compared with published results in the literature. Two features make the algorithm relevant: (i) unlike other approaches, it does not depend on a priori knowledge of the binary path sets to obtain the MMPVs; and (ii) an information-sharing approach and a network reduction technique significantly reduce the number of vector analyses needed to obtain all the component levels that guarantee system success. Additionally, the complexities associated with the computation of reliability are discussed. A Monte Carlo simulation approach is used to obtain an accurate estimate of actual M2TRd values based on MMPVs. Examples are used to validate the algorithm and the simulation procedure.
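A minimal Monte Carlo sketch of the quantity M2TRd itself (the estimation target, not the MMPV-based algorithm described above), assuming arcs degrade independently with known discrete capacity distributions; the network, distributions, and demand are illustrative, and networkx supplies the max-flow computation.

```python
import random
import networkx as nx

def estimate_m2tr(arcs, demand, samples=5_000, seed=0):
    """Crude Monte Carlo estimate of multistate two-terminal reliability at demand d.

    arcs : dict mapping (u, v) -> list of (capacity_level, probability) pairs
           describing each arc's discrete multistate capacity distribution.
    Returns the estimated probability that the max s->t flow is >= demand.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        g = nx.DiGraph()
        for (u, v), dist in arcs.items():
            levels, probs = zip(*dist)
            g.add_edge(u, v, capacity=rng.choices(levels, weights=probs)[0])
        flow_value, _ = nx.maximum_flow(g, "s", "t")
        hits += flow_value >= demand
    return hits / samples

# Hypothetical 4-node network; every arc has capacity 0, 1, or 2 units.
dist = [(0, 0.05), (1, 0.25), (2, 0.70)]
arcs = {("s", "a"): dist, ("s", "b"): dist, ("a", "t"): dist,
        ("b", "t"): dist, ("a", "b"): dist}
print("M2TR at d=3:", estimate_m2tr(arcs, demand=3))
```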

3.
Many real-world systems are multistate systems composed of multistate components, in which reliability can be computed in terms of the lower bound points of level d, called d-MCs. Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited, and multivalued random capacities. In this study, all minimal cuts (MCs) are assumed to be known in advance, and we focus on finding all d-MCs before calculating the reliability value of a network. Based directly on the definition of a d-MC, we develop an intuitive algorithm that improves on the best-known existing method. Analysis of the algorithm and comparison with existing algorithms show that the proposed method is easier to understand and to implement. Finally, the computational complexity of the proposed algorithm is analysed and compared with that of the existing methods.
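The following sketch illustrates the generic candidate-and-verify pattern that d-MC searches build on, assuming the minimal cuts and per-arc maximum capacities are known in advance, as the abstract states; the candidate-generation rule shown (cut arcs summing to d, all other arcs at their maxima) and the max-flow verification are one common convention, not necessarily the paper's improved procedure.

```python
from itertools import product
import networkx as nx

def d_mc_candidates(min_cut, max_cap, other_arcs, d):
    """Candidate capacity vectors built from one minimal cut:
    arcs in the cut share exactly d units, remaining arcs sit at their maxima."""
    arcs = list(min_cut)
    for combo in product(*(range(max_cap[a] + 1) for a in arcs)):
        if sum(combo) == d:
            x = {a: max_cap[a] for a in other_arcs}
            x.update(dict(zip(arcs, combo)))
            yield x

def verify_d_mc(x, d, source="s", sink="t"):
    """Accept a candidate if the max s->t flow under capacity vector x equals d."""
    g = nx.DiGraph()
    for (u, v), cap in x.items():
        g.add_edge(u, v, capacity=cap)
    value, _ = nx.maximum_flow(g, source, sink)
    return value == d

# Toy network: s->a->t plus a direct arc s->t (all data hypothetical).
max_cap = {("s", "a"): 2, ("a", "t"): 2, ("s", "t"): 1}
cut = {("s", "a"), ("s", "t")}                     # one known minimal cut
others = set(max_cap) - cut
print([x for x in d_mc_candidates(cut, max_cap, others, d=2) if verify_d_mc(x, 2)])
```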

4.
In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly; therefore, for reliability analysis and system optimization, it is meaningful to treat component reliability estimates as random variables with associated estimation uncertainty. In this research, the system design process is formulated as a multiple-objective optimization problem: maximize an estimate of system reliability and, at the same time, minimize the variance of that reliability estimate. The two objectives are combined by penalizing the variance of prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with and without redundancy. For many design problems, multiple functionally equivalent software versions exhibit failure correlation even if they have been developed independently; the correlation may result from faults in the software specification, faults in a voting algorithm, and/or related faults shared by any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied to solve the optimization problem, and reasonable and interesting results are obtained and discussed.
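A toy sketch of the kind of combined objective described above, where the reliability estimate is rewarded and its variance and any constraint violation are penalized; the penalty form, weights, and candidate data are purely illustrative and not taken from the paper.

```python
def penalized_fitness(rel_estimate, rel_variance, constraint_violation,
                      var_weight=1.0, penalty_weight=10.0):
    """Combine the two objectives into one GA fitness value:
    reward the reliability estimate, penalize its estimation variance,
    and apply a dynamic-style penalty for violated design constraints (e.g. cost)."""
    return rel_estimate - var_weight * rel_variance - penalty_weight * constraint_violation

# Illustrative candidate architectures: (estimated reliability, variance, violation).
candidates = {"NVP-3": (0.981, 4.0e-4, 0.0), "RB-2": (0.975, 1.5e-4, 0.0)}
best = max(candidates, key=lambda k: penalized_fitness(*candidates[k]))
print("preferred design:", best)
```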

5.
Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of surrogate model construction can limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This “response-modeling” approach permits probability estimation for an arbitrary event, E(r), defined on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
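A minimal sketch of the response-modeling idea with an empirical G(r): given samples of the response r, both the event probability prob(E) and the induced distribution F_pm of a scalar performance measure reduce to sample means of indicator functions. The toy response, event, and performance measure below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for G(r): draws of the computed response (here a 1-D toy response).
r_samples = rng.normal(loc=2.0, scale=0.5, size=50_000)

# Event probability: prob(E) = integral of I(E(r)) dG(r), estimated as a sample mean.
event = lambda r: r > 2.8
prob_E = np.mean(event(r_samples))

# Induced distribution of a scalar performance measure pm(r).
pm = lambda r: r ** 2
F_pm = lambda z: np.mean(pm(r_samples) <= z)

print(f"prob(E) ~ {prob_E:.4f},  F_pm(5.0) ~ {F_pm(5.0):.4f}")
```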

6.
In the present paper, an efficient algorithm for connectivity analysis of moderately sized distribution networks is suggested. The algorithm is based on the generation of all possible minimal system cutsets, and it is efficient because it identifies only the necessary and sufficient system failure conditions in n-out-of-n type distribution networks. The proposed algorithm is demonstrated on both saturated and unsaturated distribution networks. Its computational efficiency is justified by comparing the computational effort with that of the previously suggested appended spanning tree (AST) algorithm. The proposed technique has the added advantage that it can be utilized to generate system inequalities, which are useful in the reliability estimation of capacitated networks.

7.
Managing the failure dependence of complex systems under hybrid uncertainty is one of the pressing problems in reliability assessment. Epistemic uncertainty arises from complex working environments, system structure, human factors, imperfect knowledge, and so on. The probability-box (p-box) is a powerful structure for uncertainty analysis and can be effectively adopted to represent epistemic uncertainty. However, the arithmetic rules defined on probability-box structures mostly apply to structures representing independent random variables, whereas in many practical engineering applications failure dependence must be included in system reliability analysis. Therefore, this paper proposes a Bayesian network that combines the copula method with probability-boxes for system reliability assessment. The reliability computation involves four main steps: marginal distribution identification and estimation, copula function selection and parameter estimation, reliability analysis of components with correlations, and Bayesian forward analysis. The proposed approach overcomes the computational limitations of n-dimensional integration and exploits the useful properties of copula functions for the reliability analysis of systems with correlated components. To demonstrate its effectiveness, the developed Bayesian network is applied to a real large piston compressor.
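A minimal sketch of the copula step for two dependent components, using a Gaussian copula chosen purely for illustration (the paper's copula selection and parameter estimation may differ), assuming known marginal failure probabilities and a rank correlation.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def joint_failure_prob(p1, p2, kendall_tau=0.4):
    """Joint failure probability of two dependent components under a Gaussian copula.

    p1, p2      : marginal failure probabilities
    kendall_tau : rank correlation, converted to the copula's linear correlation.
    """
    rho = np.sin(np.pi * kendall_tau / 2)          # standard tau -> rho mapping
    z = [norm.ppf(p1), norm.ppf(p2)]               # map marginals to normal scores
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal.cdf(z, mean=[0.0, 0.0], cov=cov)

p1, p2 = 0.05, 0.08
print("independent:", p1 * p2, " copula-coupled:", joint_failure_prob(p1, p2))
```

The same construction extends to more components by enlarging the correlation matrix, at the cost of evaluating a higher-dimensional multivariate normal CDF.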

8.
Reliability and fault-tolerance issues are important in the study of interconnection networks used in large multiprocessor systems because of the large number of components involved. In this paper we study these issues with respect to multistage networks, which are typically built for N inputs and N outputs using 2 × 2 switching elements and log2 N stages. In such networks, the failure of a switching element or connecting link destroys the communication capability between one or more pairs of source and destination terminals. Many techniques exist for designing multistage networks that tolerate switch and/or link failures without losing connectivity. Several approaches for achieving fault tolerance in multistage interconnection networks are described in this paper; the techniques range from providing redundant components in the network to making multiple passes through the faulty network. Quantitative measures are introduced for analyzing the reliability of these networks in terms of the component reliabilities, and several examples are given to illustrate the techniques. This research is supported by the NSF Presidential Young Investigator Award No. DCI-8452003, a grant from AT&T Information Systems, and a grant from TRW.

9.
For single-use, non-repairable systems, reliability is commonly estimated as a function of age and usage. For the effective management of individual systems or populations of systems, it is frequently important and necessary to predict reliability in the future, for age and usage values not yet observed. When predicting future system reliability, the age of the future system is easily projected, whereas future usage values will typically be unknown. In this paper we present methodology for estimating both individual and population reliability summaries based on the currently known age and usage values. Projected usage values for future points in time can be obtained from observed usage patterns or from user-specified patterns of usage rates. Individual system summaries can be used to answer the questions ‘For a given system of age A and usage U, what is its reliability with associated uncertainty?’ or ‘For a given system with known current age A and usage U, but unknown usage in the future, what is its reliability with associated uncertainty?’ The population summary of interest predicts the probability that a system randomly selected from the population works. This summary takes into consideration the estimation of future usage, the estimated probability that individual systems work at their given ages and usage values, and the life-cycle demographics of the population of interest. We discuss these questions for a given application. Published in 2010 by John Wiley & Sons, Ltd.

10.
This paper seeks to define the concept of resiliency as a component importance measure related to network reliability. Resiliency can be defined as a composite of (1) the ability of a network to provide service despite external failures and (2) the time to restore service in the presence of such failures. Although resiliency has been studied extensively in different research areas, this paper addresses the specific aspects of quantifiable network resiliency when the network is experiencing potentially catastrophic failures from external events and/or influences, and when it is not known a priori which specific components within the network will fail. A formal definition of Category I resiliency is proposed, and a step-by-step approach based on Monte Carlo simulation is defined to calculate it. To illustrate the approach, two-terminal networks with varying degrees of redundancy are considered. The results obtained for the test networks show that this new quantifiable concept of resiliency provides insight into the performance and topology of the network. Future uses of this work could include methods for safeguarding critical network components and optimizing the use of redundancy as a technique to improve network resiliency.
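A minimal Monte Carlo sketch of a two-part resiliency measure in the spirit described above (probability of continued service under random external failures, plus expected restoration time when service is lost); the failure model, restoration-time model, and toy network are assumptions, not the paper's Category I definition.

```python
import random

def mc_resiliency(components, survive_fn, restore_time_fn,
                  hit_prob=0.2, trials=10_000, seed=1):
    """Estimate (P(service survives), E[restore time | failure]) under random external hits."""
    rng = random.Random(seed)
    survived, restore_times = 0, []
    for _ in range(trials):
        failed = {c for c in components if rng.random() < hit_prob}
        if survive_fn(failed):
            survived += 1
        else:
            restore_times.append(restore_time_fn(failed))
    p_service = survived / trials
    mean_restore = sum(restore_times) / len(restore_times) if restore_times else 0.0
    return p_service, mean_restore

# Toy two-terminal network: two redundant s-t paths {1,2} and {3,4}.
paths = [{1, 2}, {3, 4}]
ok = lambda failed: any(not (p & failed) for p in paths)        # some path intact
t_restore = lambda failed: 4.0 * len(failed)                    # hours, illustrative
print(mc_resiliency(components=[1, 2, 3, 4], survive_fn=ok, restore_time_fn=t_restore))
```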

11.
This paper deals with large-sample estimation of the location parameter α1 and the scale parameter α2 of the gamma distribution with known shape parameter. Best linear unbiased estimates based on k sample quantiles are used. For a given k, the optimum spacings of the sample quantiles can be replaced by simpler “nearly optimum” spacings at virtually no loss of asymptotic efficiency. The theory behind the nearly optimum spacings is briefly reviewed. The major part of the paper concerns estimation of α2 when α1 is known. Nearly optimum spacings, together with the coefficients to be used in computing the estimates, are presented in tables for k = 1(1)10 and various values of the shape parameter. The paper also contains brief discussions of the estimation of α1 when α2 is known, and of simultaneous estimation of α1 and α2.

12.
This paper describes a Monte Carlo (MC) simulation methodology for estimating the reliability of a multi-state network. The problem under consideration involves multi-state two-terminal reliability (M2TR) computation. Previous approaches have relied on enumeration or on the computation of multi-state minimal cut vectors (MMCV) combined with inclusion/exclusion formulae. This paper discusses issues related to the reliability calculation process based on MMCV: for large systems with even a relatively small number of component states, reliability computation can become prohibitive or inaccurate using current methods. The major focus of this paper is to present and compare a new MC simulation approach that obtains accurate approximations to the actual M2TR. The methodology uses MC to generate system state vectors; once a vector is obtained, it is compared to the set of MMCV to determine whether the capacity of the vector satisfies the required demand. Examples are used to illustrate and validate the methodology. The estimates of the simulation approach are compared to exact and approximation procedures from the perspectives of solution quality and computational effort. Results obtained from the simulation approach show that for relatively large networks, the maximum absolute relative error between the simulation and the actual M2TR is less than 0.9%, whereas approximation formulae can produce errors as large as 18.97%. Finally, the paper discusses how the MC approach consistently yields accurate results, while the accuracy of the bounding methodologies can be dependent on components that have considerable impact on the system design.
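A bare-bones sketch of the simulation loop described above: sample a component state vector, compare it componentwise against the MMCV set, and accumulate the estimate with its standard error. It assumes the convention that each MMCV is a maximal state vector whose capacity falls short of the demand, so demand is met exactly when the sampled vector is not dominated by any MMCV; the component data and vectors are illustrative.

```python
import random
from math import sqrt

def meets_demand(x, mmcv_set):
    """Demand satisfied iff x is NOT componentwise <= any multistate minimal cut vector."""
    return not any(all(x[i] <= y[i] for i in range(len(x))) for y in mmcv_set)

def simulate_m2tr(state_dists, mmcv_set, samples=50_000, seed=7):
    """Sample component state vectors and estimate M2TR with its standard error."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.choices(levels, weights=probs)[0] for levels, probs in state_dists]
        hits += meets_demand(x, mmcv_set)
    p = hits / samples
    return p, sqrt(p * (1 - p) / samples)

# Illustrative 3-component system: per-component (levels, probabilities) and two MMCVs.
dists = [([0, 1, 2], [0.1, 0.3, 0.6])] * 3
mmcvs = [(1, 2, 0), (0, 1, 2)]
print(simulate_m2tr(dists, mmcvs))
```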

13.
Computer networks and power transmission networks can be treated as capacitated flow networks. A capacitated flow network may partially fail due to maintenance; therefore, the capacity of each edge should be optimally assigned to face critical situations, i.e., to keep the network functioning normally in the case of failure at one or more edges. The robust design problem (RDP) in a capacitated flow network is to search for the minimum capacity assignment of each edge such that the network still survives even under edge failures. The RDP is known to be NP-hard. Thus, the capacity assignment problem subject to system reliability and total capacity constraints is studied in this paper. The problem is formulated mathematically, and a genetic algorithm is proposed to determine the optimal solution. The optimal solution found by the proposed algorithm is characterized by maximum reliability and minimum total capacity. Numerical examples are presented to illustrate the efficiency of the proposed approach.
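As a hedged illustration of the feasibility side of the robust design problem, the sketch below checks whether a candidate capacity assignment still carries the demand after any single edge fails completely, and reports the total capacity; the single-failure criterion and the toy network are simplifying assumptions, and a search procedure such as the paper's genetic algorithm would explore many such assignments.

```python
import networkx as nx

def robust_enough(cap_assignment, demand, source="s", sink="t"):
    """Check a candidate edge-capacity assignment: the network must still carry
    `demand` units of s->t flow after the complete failure of any single edge.
    (Single-edge failures only; the paper's reliability constraint is more general.)"""
    for failed_edge in cap_assignment:
        g = nx.DiGraph()
        for e, cap in cap_assignment.items():
            g.add_edge(*e, capacity=0 if e == failed_edge else cap)
        value, _ = nx.maximum_flow(g, source, sink)
        if value < demand:
            return False
    return True

caps = {("s", "a"): 3, ("s", "b"): 3, ("a", "t"): 3, ("b", "t"): 3}
print(robust_enough(caps, demand=3), "total capacity:", sum(caps.values()))
```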

14.
A new algorithm is proposed to approximate terminal-pair network reliability based on minimal cut theory. Unlike many existing models that decompose the network into a series–parallel or parallel–series structure based on minimal cuts or minimal paths, the new model estimates reliability by summing the linear and quadratic unreliability terms of each minimal cut set. Given component test data, the new model provides tight moment bounds for the network reliability estimate; these moment bounds can be used to quantify the network estimation uncertainty propagated from component-level estimates. Simulations and numerical examples show that the new model generally outperforms the Esary-Proschan and Edge-Packing bounds, especially for highly reliable systems.
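A sketch of the first two Bonferroni terms over minimal cut sets, which give linear and quadratic unreliability pieces of the kind mentioned above; this is in the spirit of cut-based approximation but is not claimed to reproduce the paper's estimator or its moment bounds. Independent components with known failure probabilities are assumed.

```python
from itertools import combinations
from math import prod

def cut_based_bounds(cut_sets, q):
    """Bonferroni-style bounds on terminal-pair unreliability built from minimal cut sets.
    q maps each component to its independent failure probability."""
    p_all_fail = lambda comps: prod(q[c] for c in comps)
    linear = sum(p_all_fail(c) for c in cut_sets)                          # first-order term
    quadratic = sum(p_all_fail(a | b) for a, b in combinations(cut_sets, 2))
    return max(0.0, linear - quadratic), min(1.0, linear)                  # (lower, upper)

cuts = [{"a", "b"}, {"d", "e"}, {"a", "c", "e"}]
q = {c: 0.05 for c in "abcde"}
lo, hi = cut_based_bounds(cuts, q)
print(f"unreliability bracketed in [{lo:.6f}, {hi:.6f}]")
```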

15.
Shuffle-exchange networks (SENs) have been widely considered as practical interconnection systems because of the small size of their switching elements (SEs) and their uncomplicated configuration. The SEN belongs to a large class of topologically equivalent multistage interconnection networks (MINs) that includes the omega, indirect binary n-cube, baseline, and generalized cube networks. In this paper, SENs with additional stages, which provide more redundant paths, are analyzed. A common network topology with a 2×2 basic building block is investigated for the SEN and its extra-stage variants. As an illustration, three types of SENs are compared: the SEN, the SEN with one additional stage (SEN+), and the SEN with two additional stages (SEN+2). Finally, three reliability measures, terminal, broadcast, and network reliability, are analyzed for the three SEN systems.
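A small sketch of terminal reliability for the SEN and SEN+ under the usual textbook assumptions (independent, identically reliable 2×2 switches and perfect links); the closed-form expressions used here are standard in SEN reliability analyses but should be checked against the models in the paper.

```python
from math import log2

def sen_terminal_reliability(N, r):
    """Terminal (source->destination) reliability of a plain N x N SEN:
    the single path crosses log2(N) switching elements in series."""
    return r ** int(log2(N))

def sen_plus_terminal_reliability(N, r):
    """Terminal reliability of SEN+ (one extra stage): the shared first and last
    switches are in series with two redundant paths through the middle stages."""
    n = int(log2(N))
    return r ** 2 * (1 - (1 - r ** (n - 1)) ** 2)

for r in (0.95, 0.99):
    print(r, sen_terminal_reliability(16, r), sen_plus_terminal_reliability(16, r))
```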

16.
Azeotropy in the natural and synthetic refrigerant mixtures
A novel approach is developed for the prediction of azeotrope formation in a mixture without requiring vapour–liquid equilibrium calculations. The method employs neural networks and global phase diagram methodologies to correlate azeotropic data for binary refrigerant mixtures based only on the critical properties and acentric factors of the individual components. Analytical expressions to predict azeotropy and double azeotropy in terms of the critical parameters of the pure components and the interaction parameter k12 are derived using the global phase diagram concept. Modeling of thermodynamic and phase behavior has been carried out on the basis of the Redlich–Kwong–Soave and Peng–Robinson equations of state (EoS). A local mapping method is introduced to describe, in a thermodynamically consistent manner, an accurate saturation curve of refrigerants with a three-parameter EoS. The neural network was optimized to achieve complete coincidence of predicted and experimentally observed azeotropic states for the training, validation, and test sets simultaneously. All possible cases of azeotropy appearance or absence in more than 1500 industrially significant binary mixtures of natural and synthetic refrigerants are presented.

17.
Molar heat capacities at constant volume Cv were measured for binary refrigerant mixtures with an adiabatic calorimeter with gravimetric determinations of the amount of substance. Temperatures ranged from 200 to 345 K, while pressures extended up to 35 MPa. Measurements were conducted on liquid samples with equimolar compositions for the following binary systems: R32/R134a, R32/R125, R125/R134a, and R125/R143a. The uncertainty is 0.002 K for the temperature rise and 0.2% for the change-of-volume work, which is the principal source of uncertainty. The expanded relative uncertainty (with a coverage factor k=2 and thus a two-standard-deviation estimate) for Cv is estimated to be 0.7%.

18.
With the aid of a hydrodynamic model for semiconductor plasmas, an analytical investigation of coherent Brillouin scattering (CBS) has been made in noncentrosymmetric (NCS) semiconductor plasmas, both in the presence and in the absence of an externally applied magnetic field. Using the coupled-mode approach, the nonlinear induced polarization and the third-order nonlinear optical susceptibility, due to bound and free charge-carrier nonlinearity, are obtained. The analysis further deals with the qualitative behavior of the threshold pump electric field E_T for the onset of CBS and the resulting gain coefficient (steady-state as well as transient, [g_B]_(SS,TR)) in NCS semiconductor plasmas. Numerical estimates are made for an InSb crystal at 77 K irradiated by a pulsed 10.6 μm CO2 laser. The effects of piezoelectricity, doping concentration, and magnetic field on both E_T and [g_B]_(SS,TR) are studied in detail. The E_T required for onset of the CBS process is found to be lower when piezoelectricity is present and the doping level of the semiconductor is moderate than under other conditions. It is found that when the magnetic field is applied, the coherent backward Stokes wave can be amplified by a factor of 10^2 in NCS semiconducting crystals. The analysis also suggests the idea of pulse compression and the possibility of observing a phase conjugation reflection coefficient of ~10^6, which demonstrates the potential for the fabrication of CBS-based phase-conjugate mirrors.

19.
In transport networks, human beings are moving objects whose moving direction is stochastic in emergency situations. Based on this idea, a new model, the stochastic moving network (SMN), is proposed. It differs from binary-state networks and stochastic-flow networks in that the flow of an SMN has multiple saturated states, which correspond to different flow values on each arc. In this paper, we evaluate the system reliability, defined as the probability that the saturated flow of the network is not less than a given demand d. Based on this new model, we obtain the flow probability distribution of every arc by simulation, and an algorithm based on the blocking cutset of the SMN is proposed to evaluate the network reliability. An example is used to show how to calculate the corresponding reliabilities for different given demands of the SMN. Simulation experiments of different sizes were performed, and the precision of the system reliability estimates is calculated and discussed.

20.
This paper is mainly concerned with the problem of distributing a data base (i.e., a set of segments) in a computer network system so as to facilitate parallel searching. In our distributed data base model, we assume that all segments are stored in nodes and that, each time a query occurs, all nodes are searched concurrently. For convenience, we define the time required to access a segment from any node as one time unit. For a network with d nodes, the response time of a query is then max(n1, n2, …, nd), where ni is the number of segments that satisfy the query and are stored in node i. Unfortunately, an optimal way to organize a distributed data base for parallel searching is still unknown; in other words, given a data base, no efficient polynomial-time algorithm is known for finding an optimal arrangement of segments onto nodes. In this article, we present a “heuristic algorithm” based upon a multivariate analysis method from statistics to distribute a data base in a network system. Experimental results show that our method is indeed feasible and effective.
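A toy greedy heuristic in the spirit of the problem statement, assuming the set of known queries each segment matches is available; it is not the paper's multivariate-analysis method, and all identifiers below are illustrative. Segments are placed so that, for every known query, the per-node match counts, and hence max(n1, …, nd), stay small.

```python
from collections import defaultdict

def distribute(segments, d):
    """Greedy placement of segments onto d nodes so that, for every known query,
    matching segments are spread out and max(n1, ..., nd) stays small.

    segments : dict mapping segment id -> set of query ids the segment satisfies
    """
    per_node_query = [defaultdict(int) for _ in range(d)]   # node -> query -> count
    placement = {}
    for seg, queries in sorted(segments.items(), key=lambda kv: -len(kv[1])):
        node = min(range(d),
                   key=lambda n: max((per_node_query[n][q] for q in queries), default=0))
        placement[seg] = node
        for q in queries:
            per_node_query[node][q] += 1
    return placement

segs = {"s1": {"q1", "q2"}, "s2": {"q1"}, "s3": {"q2"}, "s4": {"q1", "q2"}}
print(distribute(segs, d=2))
```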
