Similar Documents (20 results)
1.
Asked to write a paper about Boris Vladimirovich Gnedenko, we decided to present the material in two parts: Ekaterina Gnedenko contributes biographical notes viewed through the lens of his research achievements, and Igor Ushakov offers a more personal reminiscence. Gnedenko himself recounted his life in his book of memoirs [1].

2.
It is often expensive to estimate the failure probability of highly reliable systems by Monte Carlo simulation. Subset Simulation breaks the original problem of estimating a small probability into the estimation of a sequence of large conditional probabilities, which is more efficient. The conditional probabilities are estimated by Markov Chain simulation. Uncertainty in the power spectral density of the excitation makes it necessary to re-evaluate the reliability for many power spectral densities that are consistent with the evidence about the system excitation. Subset Simulation is more efficient than Monte Carlo simulation, but still requires a new simulation for each admissible power spectral density. This paper presents an efficient method to re-evaluate the reliability of a dynamic system under stationary Gaussian stochastic excitation for different load spectra. We accomplish this by re-weighting the results of a single Subset Simulation. This method is applicable to both linear and nonlinear systems provided that all of the spectra contain the same amount of energy. The authors are currently working on an extension of the method to nonlinear systems, even when the sampling and true power spectral density functions contain different amounts of energy.
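The Subset Simulation idea described above can be sketched in a few lines. This is a minimal illustration only: the limit-state function `g`, the thresholds, and the crude Metropolis proposal are invented for the example and are not the paper's stochastic-excitation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Illustrative limit-state function: "failure" = sum of responses exceeds a threshold.
    return x[0] + x[1]

def subset_simulation(g, dim=2, n=1000, p0=0.1, threshold=5.0):
    """Estimate P(g(X) > threshold), X ~ N(0, I), by Subset Simulation."""
    samples = rng.standard_normal((n, dim))
    vals = np.array([g(x) for x in samples])
    prob = 1.0
    while True:
        level = np.quantile(vals, 1 - p0)   # intermediate failure level
        if level >= threshold:
            return prob * np.mean(vals > threshold)
        prob *= p0
        seeds = samples[vals > level]       # conditional samples seed the chains
        chain = []
        steps = int(np.ceil(n / len(seeds)))
        for s in seeds:
            x = s.copy()
            for _ in range(steps):
                # Crude Metropolis step targeting N(0, I) restricted to {g > level}.
                cand = x + 0.5 * rng.standard_normal(dim)
                accept = rng.random() < min(1.0, np.exp(0.5 * (x @ x - cand @ cand)))
                if accept and g(cand) > level:
                    x = cand
                chain.append(x.copy())
        samples = np.array(chain[:n])
        vals = np.array([g(x) for x in samples])
```

Each level multiplies the estimate by a large conditional probability (here p0 = 0.1), so a probability of order 1e-4 is reached in a handful of levels rather than with millions of raw samples.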

3.
Shortest distance and reliability of probabilistic networks
When the “length” of a link is not deterministic and is governed by a stochastic process, the “shortest” path between two points in the network is not necessarily always composed of the same links and depends on the state of the network. For example, in communication and transportation networks, the travel time on a link is not deterministic and the fastest path between two points is not fixed. This paper presents an algorithm to compute the expected shortest travel time between two nodes in the network when the travel time on each link has a given independent discrete probability distribution. The algorithm assumes knowledge of all the paths between the two nodes; methods to determine the paths are referenced. In reliability (i.e. the probability that two given points are connected by a path) computations, associated with each link is a probability of “failure” and a probability of “success”. Since “failure” implies infinite travel time, the algorithm simultaneously computes reliability. The paper also discusses the algorithm's capability to simultaneously compute some other performance measures which are useful in the analysis of emergency services operating on a network.
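A minimal sketch of the computation described above, assuming a toy network with known paths and made-up discrete link-time distributions. Using `float('inf')` for a failed link means the connectivity probability (reliability) falls out of the same enumeration:

```python
import itertools

# Hypothetical 3-link network; each link's travel time has an independent
# discrete distribution given as {time: probability}. float('inf') models failure.
links = {
    "AB": {1: 0.8, 3: 0.2},
    "BC": {2: 0.7, float("inf"): 0.3},
    "AC": {4: 0.9, float("inf"): 0.1},
}
# All simple paths from A to C, as link sequences (assumed known, per the paper).
paths = [["AB", "BC"], ["AC"]]

def expected_shortest_time(links, paths):
    names = list(links)
    e_time, reliability = 0.0, 0.0
    # Enumerate every joint network state (product of the link distributions).
    for state in itertools.product(*(links[n].items() for n in names)):
        times = dict(zip(names, (t for t, _ in state)))
        p = 1.0
        for _, lp in state:
            p *= lp
        best = min(sum(times[l] for l in path) for path in paths)
        if best < float("inf"):        # network connected in this state
            reliability += p
            e_time += p * best
    # Expected shortest time conditional on connectivity, plus the reliability.
    return e_time / reliability, reliability
```

For this toy network the nodes are disconnected only when both BC and AC fail, so the reliability is 1 − 0.3 × 0.1 = 0.97. Full enumeration is exponential in the number of links; the paper's algorithm is precisely about avoiding that blow-up.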

4.
Information hiding is a general concept which refers to the goal of preventing an adversary from inferring secret information from the observables. Anonymity and Information Flow are examples of this notion. We study the problem of information hiding in systems characterized by the coexistence of randomization and concurrency. It is well known that the presence of nondeterminism, due to the possible interleavings and interactions of the parallel components, can cause unintended information leaks. The most established approach to solve this problem is to fix the strategy of the scheduler beforehand. In this work, we propose a milder restriction on the schedulers, and we define the notion of strong (probabilistic) information hiding under various notions of observables. Furthermore, we propose a method, based on the notion of automorphism, to verify that a system satisfies the property of strong information hiding, namely strong anonymity or non-interference, depending on the context. Throughout the paper, we use the canonical example of the Dining Cryptographers to illustrate our ideas and techniques.
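Chaum's Dining Cryptographers protocol mentioned above can be sketched in a few lines (one round, with freshly drawn shared coins; the function name and structure are ours, not the paper's formalization):

```python
import secrets

def dining_cryptographers(payer=None, n=3):
    """One round of the Dining Cryptographers protocol (sketch).

    Adjacent cryptographers i and i+1 share secret coin i. Each announces
    the XOR of the two coins they see, flipped if they paid. The XOR of all
    announcements reveals WHETHER a cryptographer paid, but every coin
    cancels out, so an observer learns nothing about WHO."""
    coins = [secrets.randbits(1) for _ in range(n)]
    # coins[i - 1] wraps around to the last coin for i = 0.
    announce = [coins[i] ^ coins[i - 1] ^ (1 if i == payer else 0) for i in range(n)]
    parity = 0
    for a in announce:
        parity ^= a
    return bool(parity)   # True iff one of the cryptographers paid
```

Because every coin appears in exactly two announcements, the coins cancel in the overall parity regardless of their values; only the payer's flip survives.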

5.
6.
This paper presents a new hybrid reliability model which contains randomness, fuzziness and non-probabilistic uncertainty based on the structural fuzzy random reliability and non-probabilistic set-based models. By solving the non-probabilistic set-based reliability problem and analyzing the reliability with fuzziness and randomness, the structural hybrid reliability can be obtained. The presented hybrid model has broad applicability which can handle either linear or non-linear state functions. A comparison among the presented hybrid model, probabilistic and non-probabilistic models, and the conventional probabilistic model is made through two typical numerical examples. The results show that the presented hybrid model, which may ensure structural security, is effective and practical.

7.
Sequential model criticism in probabilistic expert systems
Probabilistic expert systems based on Bayesian networks require initial specification of both qualitative graphical structure and quantitative conditional probability assessments. As (possibly incomplete) data accumulate on real cases, the parameters of the system may adapt, but it is also essential that the initial specifications be monitored with respect to their predictive performance. A range of monitors based on standardized scoring rules that are designed to detect both qualitative and quantitative departures from the specified model is presented. A simulation study demonstrates the efficacy of these monitors at uncovering such departures.
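One concrete instance of a standardized scoring-rule monitor, for the simplest case of binary predictions. The logarithmic score is an illustrative choice here; the paper treats a whole range of monitors:

```python
import math

def log_score_monitor(forecasts, outcomes):
    """Cumulative standardized logarithmic score for binary predictions.

    Each observation incurs the penalty -log P(observed outcome); the monitor
    subtracts the penalty expected under the model and standardizes by the
    accumulated variance. Values far above 0 flag departures from the model."""
    excess, variance = 0.0, 0.0
    for p, y in zip(forecasts, outcomes):
        penalty = -math.log(p if y == 1 else 1.0 - p)
        expected = -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
        excess += penalty - expected
        # Var of the penalty under the model: p(1-p) * (log-odds)^2.
        variance += p * (1.0 - p) * math.log(p / (1.0 - p)) ** 2
    return excess / math.sqrt(variance) if variance > 0 else 0.0
```

A well-calibrated forecaster keeps the standardized total near zero; systematically overconfident predictions drive it up rapidly, which is the signal the monitor is after.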

8.
Formal notations like B or action systems support a notion of refinement. Refinement relates an abstract specification A to a concrete specification C that is at least as deterministic. Knowing A and C, one proves that C refines, or implements, specification A. In this study we consider specification A as given and concern ourselves with a way to find a good candidate for implementation C. To this end we classify all implementations of an abstract specification according to their performance. We distinguish performance from correctness. Concrete systems that do not meet the abstract specification correctly are excluded. Only the remaining correct implementations C are considered with respect to their performance. A good implementation of a specification is identified by having some optimal behaviour in common with it. In other words, a good refinement corresponds to a reduction of non-optimal behaviour. This also means that the abstract specification sets a boundary for the performance of any implementation. We introduce the probabilistic action system formalism which combines refinement with performance. In our current study we measure performance in terms of long-run expected average cost. Performance is expressed by means of probability and expected costs. Probability is needed to express uncertainty present in physical environments. Expected costs express physical or abstract quantities that describe a system. They encode the performance objective. The behaviour of probabilistic action systems is described by traces of expected costs. A corresponding notion of refinement and simulation-based proof rules are introduced. Probabilistic action systems are based on discrete-time Markov decision processes. Numerical methods solving the optimisation problems posed by Markov decision processes are well known, and used in a software tool that we have developed. The tool computes an optimal behaviour of a specification A, thus assisting in the search for a good implementation C.

Received September 2002. Accepted in revised form January 2004 by E.C.R. Hehner.

9.
Early in a program, engineers must determine requirements for system reliability and availability. We suggest that existing techniques gathered from diverse fields can be incorporated within the framework of systems engineering methodology to accomplish this. Specifically, adopting probabilistic (Monte Carlo) design techniques allows the designer to incorporate uncertainty explicitly into the design process and to improve the designer's understanding of the root causes of failures and how often these might realistically occur. In high‐reliability systems in which failure occurs infrequently, rare‐event simulation techniques can reduce the computational burden of achieving this understanding. This paper provides an introductory survey of the literature on systems engineering, requirements engineering, Monte Carlo simulation, probabilistic design, and rare‐event simulation with the aim of assessing the degree to which these have been integrated in systems design for reliability. This leads naturally to a proposed framework for the fusion of these techniques.
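Rare-event simulation, mentioned above, replaces crude Monte Carlo with techniques such as importance sampling. A minimal sketch for a Gaussian tail probability (the shift parameter and the target event are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def failure_prob_is(threshold=4.0, n=100_000, shift=4.0):
    """Importance-sampling estimate of the rare event P(X > threshold), X ~ N(0, 1).

    Sampling from N(shift, 1) makes failures common; multiplying each sample
    by the density ratio N(0,1)/N(shift,1) keeps the estimator unbiased.
    Crude Monte Carlo would need on the order of 1/P samples to see any
    failure at all."""
    y = rng.normal(shift, 1.0, n)
    weights = np.exp(-shift * y + 0.5 * shift ** 2)   # likelihood ratio
    return float(np.mean((y > threshold) * weights))
```

The true value is 1 − Φ(4) ≈ 3.2e-5; with 10^5 shifted samples the relative error is a few percent, whereas unshifted sampling would typically observe zero failures.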

10.
11.
A model is proposed for controlling the reliability of renewable (repairable) systems; it is applied to determine the moment at which a system failure occurs and its repair should take place. The presented model differs from known models in that the probability of failure detection is multiplied by the leading function among the repair probability functions; this leading function is called the repair resource. Thus, the probability of correct control determines the magnitude of the repair resource, and thereby also the length of the regeneration period of the random process describing the repaired system.

12.
13.
14.
Modeling the generation of a wind farm and its effect on power system reliability is a challenging task, largely due to the random behavior of the output power. In this paper, we propose a new probabilistic model for assessing the reliability of wind farms in a power system at hierarchical level II (HLII), using Monte Carlo simulation. The proposed model captures the effect of correlation between wind and load on the reliability calculation. It can also be used to rank candidate points of the network for installing new wind farms so as to improve the reliability of the whole system. A simple grid at hierarchical level I (HLI) and a network in the north-eastern region of Iran are studied. Simulation results show that the correlation between wind and load significantly affects the reliability.
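The effect of wind-load correlation on a reliability index such as loss-of-load probability can be illustrated with a small Monte Carlo sketch. All marginals, the power curve, and the Gaussian dependence model below are invented for the example; the paper's HLII model is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(2)

def lolp(n=200_000, rho=-0.5, conventional=80.0):
    """Loss-of-load probability with correlated wind and load (toy model).

    Wind speed and load are driven by a bivariate normal with correlation
    rho; wind power follows a simple cut-in/rated/cut-out curve. Every
    number here is made up for illustration."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)
    wind_speed = np.clip(10.0 + 4.0 * z[:, 0], 0.0, None)   # m/s
    load = 90.0 + 10.0 * z[:, 1]                            # MW
    wind_power = np.where(wind_speed < 4.0, 0.0,            # below cut-in
                 np.where(wind_speed < 12.0, 30.0 * (wind_speed - 4.0) / 8.0,
                 np.where(wind_speed < 25.0, 30.0, 0.0)))   # rated / cut-out
    # Loss of load: available capacity falls short of demand.
    return float(np.mean(conventional + wind_power < load))
```

Negative wind-load correlation (calm periods coinciding with peak demand) raises the loss-of-load probability relative to positive correlation, which is the qualitative effect the paper quantifies.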

15.
Many protocols are designed to operate correctly even in the case where the underlying communication medium is faulty. To capture the behavior of such protocols, Lossy Channel Systems (LCS’s) have been proposed. In an LCS the communication channels are modeled as unbounded FIFO buffers which are unreliable in the sense that they can nondeterministically lose messages. Recently, several attempts have been made to study Probabilistic Lossy Channel Systems (PLCS’s) in which the probability of losing messages is taken into account. In this article, we consider a variant of PLCS’s which is more realistic than those studied previously. More precisely, we assume that during each step in the execution of the system, each message may be lost with a certain predefined probability. We show that for such systems the following model-checking problem is decidable: to verify whether a linear-time property definable by a finite-state ω-automaton holds with probability one. We also consider other types of faulty behavior, such as corruption and duplication of messages, and insertion of new messages, and show that the decidability results extend to these models.

16.
17.
18.
The theory and design of a special-purpose stochastic computer for the high-speed simulation of Markov chains and random walks is described. Experimental results are presented for the transient and steady-state response of Markov systems and for fundamental studies of random walks. The paper concludes with a discussion of the extension of the system to the Monte Carlo solution of partial differential equations with arbitrary boundary conditions.
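The Monte Carlo solution of boundary-value problems by random walks, mentioned in the conclusion above, can be illustrated in software (a sketch of the classical method, not of the stochastic computer's hardware). The grid size, boundary condition, and trial count are illustrative:

```python
import random

random.seed(3)

def laplace_by_random_walk(x0, y0, m=21, trials=20_000):
    """Monte Carlo solution of Laplace's equation on an m-by-m grid (sketch).

    Boundary condition (illustrative): u = 1 on the top edge, 0 elsewhere.
    The solution at an interior node equals the probability that a symmetric
    random walk started there is first absorbed on the top edge."""
    hits = 0
    for _ in range(trials):
        x, y = x0, y0
        while 0 < x < m - 1 and 0 < y < m - 1:
            dx, dy = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            x, y = x + dx, y + dy
        if y == m - 1:   # absorbed on the top edge
            hits += 1
    return hits / trials
```

By symmetry the exact value at the centre of the square is 1/4, which the estimate approaches as the number of trials grows.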

19.
Supervisory control theory for discrete event systems, introduced by Ramadge and Wonham, is based on a non-probabilistic formal language framework. However, models for physical processes inherently involve modelling errors and noise-corrupted observations, implying that any practical finite-state approximation would require consideration of event occurrence probabilities. Building on the concept of signed real measure of regular languages, this paper formulates a comprehensive theory for optimal control of finite-state probabilistic processes. It is shown that the resulting discrete-event supervisor is optimal in the sense of elementwise maximizing the renormalized language measure vector for the controlled plant behaviour and is efficiently computable. The theoretical results are validated through several examples including the simulation of an engineering problem.

20.
One of the problems facing the builders of the 'Information Superhighway' is how to charge for services. The high costs of billing systems suggest that prepayment mechanisms could play a large part in the solution. Yet how does one go about making an electronic prepayment system (or indeed any kind of payment system) robust? We describe some recent systems engineering experience which may be relevant: the successful introduction of cryptology to protect prepayment electricity meters from token fraud. These meters are used by a number of utilities from Scotland to South Africa, and they present some interesting reliability challenges.
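The kind of cryptographic token protection described above can be sketched with a modern MAC. This is an illustrative toy, not the actual token format those meters use: the field layout, tag length, and function names are all invented:

```python
import hashlib
import hmac
import struct

def make_token(meter_key: bytes, kwh: int, sequence: int) -> str:
    """Issue a prepayment token: credit amount plus a sequence number,
    authenticated with a per-meter key (toy encoding for illustration)."""
    payload = struct.pack(">IQ", kwh, sequence)   # 4-byte kWh, 8-byte sequence
    tag = hmac.new(meter_key, payload, hashlib.sha256).digest()[:8]
    return (payload + tag).hex()

def redeem_token(meter_key: bytes, token: str, last_sequence: int):
    """Return (kwh, sequence) for a valid, fresh token; None otherwise."""
    raw = bytes.fromhex(token)
    payload, tag = raw[:12], raw[12:]
    expected = hmac.new(meter_key, payload, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None            # forged or corrupted token
    kwh, sequence = struct.unpack(">IQ", payload)
    if sequence <= last_sequence:
        return None            # replayed token
    return kwh, sequence
```

The per-meter key stops tokens forged for one meter from working on another, and the monotone sequence number stops a legitimate token from being entered twice: the two main fraud channels an offline prepayment meter must close.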
