Similar Documents
20 similar documents found.
1.
Recent transit applications of computer communications systems clearly demonstrate that operating deficits can be significantly reduced and that increased passenger volumes, driven by the energy situation, can be accommodated cost-effectively. The approach being taken by the Toronto Transit Commission and Mississauga Transit to develop comprehensive transit communications and information systems is described. Such systems address the service information and management needs of all major on-line and off-line operating functions. The passenger and operating benefits achieved to date are outlined, and the potential of such systems for improving transit ridership and vehicle fleet utilization and for meeting government reporting requirements is discussed.

2.
3.
The reliability of a small vehicle has been studied to relate its usage to the required maintenance actions; usage of up to 60,000 km has been examined.

4.
The problem of estimating the reliability of a decaying system is considered. Several models are developed for providing reliability estimates when attribute (success, failure) sampling data over m stages indicate decreasing system reliability. An ordered maximum likelihood estimate (MLE) model is developed which provides decaying-system reliability estimates under reliability order restrictions. A second classical model is an adaptation of the standard cumulative MLE model; it uses a weighting factor such that more recent test information is weighted more heavily than earlier data. Computer simulation trials were then conducted to compare the performance of these models with the standard cumulative MLE and binomial MLE models. The ordered MLE model is shown to yield the best results, performing even better than the binomial MLE model, and it produces generally conservative estimates of system reliability. It also produces less error in its estimates than the binomial MLE model, both in terms of root mean squared error and standard deviation. Larger sample sizes yield the best results for both the binomial and ordered MLE models; the ordered MLE model outperforms the binomial MLE model at every stage for sample sizes of 20 and 40, and the improvement over the binomial MLE model is significant even for samples of size 20.
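
The estimator itself is not reproduced in the abstract, but an order-restricted MLE of this kind can be computed with the pool-adjacent-violators algorithm. The following is a minimal sketch under assumed conditions: per-stage binomial (success/failure) counts, a non-increasing reliability constraint, and made-up stage data.

```python
def ordered_mle(successes, trials):
    """Order-restricted MLE of stage reliabilities under a non-increasing
    constraint, computed with the pool-adjacent-violators algorithm."""
    blocks = []  # each block: [pooled estimate, pooled weight, number of stages]
    for s, n in zip(successes, trials):
        blocks.append([s / n, n, 1])
        # pool the last two blocks while they violate the non-increasing order
        while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
            p2, w2, c2 = blocks.pop()
            p1, w1, c1 = blocks.pop()
            blocks.append([(p1 * w1 + p2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    estimates = []
    for p, w, c in blocks:
        estimates.extend([p] * c)      # expand pooled blocks back to stages
    return estimates

# Hypothetical attribute data over five stages: (successes, trials) per stage.
stage_data = [(19, 20), (20, 20), (17, 20), (18, 20), (15, 20)]
s, n = zip(*stage_data)
print(ordered_mle(s, n))   # [0.975, 0.975, 0.875, 0.875, 0.75]
```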

5.
6.
The present paper deals with some mathematical techniques that are now available for evaluating the marginal reliability of a system. In particular, the characteristics of both exact and approximate methods are examined, including, among the latter, methods of more recent use and recent extensions of known ones. Moreover, some of the most efficient numerical methods are discussed. Examples are given throughout the exposition to better illustrate the different features and, where possible, to compare the techniques.

7.
This paper presents two mathematical Markov models for predicting human reliability in time-continuous operation tasks. Laplace transforms of the state probability equations are developed.
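
The two models are not given in the abstract; as a rough illustration of the Laplace-transform route, the sketch below works through a hypothetical two-state operator model (performing correctly vs. in error) with assumed constant error and recovery rates, solving the transformed state probability equations symbolically.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam, mu = sp.symbols('lambda mu', positive=True)
P0, P1 = sp.symbols('P0 P1')   # Laplace transforms of the two state probabilities

# State 0: task performed correctly (initial state); state 1: error committed.
# lam = assumed error rate, mu = assumed recovery (error-correction) rate.
# Laplace-transformed state probability equations, with p0(0) = 1, p1(0) = 0:
eqs = [sp.Eq(s * P0 - 1, -lam * P0 + mu * P1),
       sp.Eq(s * P1,      lam * P0 - mu * P1)]
sol = sp.solve(eqs, [P0, P1])

# Invert to obtain the probability of correct operation at time t:
p0_t = sp.inverse_laplace_transform(sol[P0], s, t)
print(sp.simplify(p0_t))
# Expected form: mu/(lam + mu) + lam/(lam + mu) * exp(-(lam + mu)*t)
```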

8.
Conventional (i.e., probist) reliability theory is based on the probability assumption and the binary-state assumption, whereas profust reliability theory is based on the probability assumption and the fuzzy-state assumption. The mixture model is an important class of system in profust reliability theory. In this paper we show that a mixture model can be converted into a series system, and we study the IFR and DFR preservabilities for mixture models. Various profust reliability bounds are given for the mixture model. In addition, we show how to construct typical profust lifetime distributions for mixture models.

9.
The concept of relevancy of components in multistate reliability models is the main condition for the underlying models to be coherent. The authors review existing component relevancy conditions and their structure functions, investigate their interrelationships, and introduce a unified relevancy condition of which the existing conditions are shown to be special cases. The mathematical formulation of these relevancy conditions and their interrelationships is discussed. The effect of these relevancy conditions on the well-known design principle that redundancy at the component level is preferable to redundancy at the system level is also studied, and some counter-examples for the inverse implications between relevancies are given.
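
The design principle referred to here is easiest to see in the ordinary binary-state case; the sketch below compares the two redundancy placements for a two-component series system, assuming independent components with made-up reliabilities.

```python
def series(*ps):
    """Reliability of independent components in series."""
    r = 1.0
    for p in ps:
        r *= p
    return r

def parallel(*ps):
    """Reliability of independent components in parallel."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

p1, p2 = 0.9, 0.8   # assumed component reliabilities

# Redundancy at the component level: duplicate each component,
# then place the two parallel pairs in series.
component_level = series(parallel(p1, p1), parallel(p2, p2))

# Redundancy at the system level: duplicate the whole series system.
system_level = parallel(series(p1, p2), series(p1, p2))

print(component_level, system_level)   # 0.9504 > 0.9216
```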

10.
Failure correlation in software reliability models
Perhaps the most stringent restriction in most software reliability models is the assumption of statistical independence among successive software failures. The authors' research was motivated by the fact that, although there are practical situations in which this assumption can easily be violated, much of the published literature on software reliability modeling does not seriously address the issue. The work in this paper is devoted to developing a software reliability modeling framework that can account for failure correlation and to studying its effect on software reliability measures. An important property of the developed Markov renewal modeling approach is its flexibility: it allows the software reliability model to be constructed in either discrete or continuous time and, depending on the goals, to base the analysis either on Markov chain theory or on renewal process theory. This modeling approach is thus an important step toward more consistent and realistic modeling of software reliability. It can be related to existing software reliability growth models, and many input-domain and time-domain models can be derived as special cases under the assumption of failure s-independence. The paper aims to show that classical software reliability theory can be extended to consider a sequence of possibly s-dependent software runs, i.e., failure correlation; it does not deal with inference or with prediction per se. For the model to be fully specified and applied to estimation and prediction in real software development projects, many research issues remain to be addressed, e.g., the detailed assumptions about the nature of the overall reliability growth and the way modeling parameters change as a result of fault-removal attempts.
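
The Markov renewal framework itself is not reproduced in the abstract; as a minimal illustration of why run-to-run dependence matters, the sketch below treats successive run outcomes as a two-state Markov chain with assumed transition probabilities and compares the probability of a failure-free sequence with the s-independent case having the same marginal failure probability.

```python
# Two-state Markov chain over run outcomes: S = success, F = failure.
a = 0.30    # P(failure | previous run failed)      (assumed)
b = 0.05    # P(failure | previous run succeeded)   (assumed)

pi_f = b / (b + 1.0 - a)    # stationary (marginal) failure probability
pi_s = 1.0 - pi_f

k = 10
# P(k consecutive failure-free runs), starting from the stationary law:
dependent = pi_s * (1.0 - b) ** (k - 1)
# Same marginal failure probability, but s-independent runs:
independent = pi_s ** k

print(f"marginal failure probability      : {pi_f:.4f}")
print(f"P({k} clean runs), dependent runs  : {dependent:.4f}")
print(f"P({k} clean runs), independent runs: {independent:.4f}")
```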

11.
In this paper we review several software reliability models and provide an extensive listing of papers dealing with software reliability modelling and its applications. The models discussed are grouped into two broad categories, empirical and analytical; analytical models are further subdivided into static and dynamic models, and the general theory behind these models is reviewed. Based on the observations made in this review, we provide suggestions for future research.

12.
In the standard Markov technique applied to reliability analysis, components are characterized by two states: an up state and a down state. The present paper explores the possibility of studying system reliability by modelling each component with a multi-state homogeneous Markov model (MHMM). It is shown that this approach is of value both in approximating non-exponential probability distributions and in helping to build suitable models of physical processes. Examples are presented which illustrate how the multi-state technique fits many practical situations. Finally, some open problems on this topic are suggested.
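
A standard way the extra states help with the first point is the method of stages; in the sketch below (assumed rate, not an example from the paper) an Erlang-2 lifetime, which no single up/down exponential model can represent, is obtained by routing a component through two fictitious up states, and the reliability is recovered from the Markov generator.

```python
import numpy as np
from scipy.linalg import expm

lam = 0.1   # assumed stage transition rate (per hour)

# States 0 and 1 are fictitious "up" stages; state 2 is the absorbing down state.
Q = np.array([[-lam,  lam,  0.0],
              [ 0.0, -lam,  lam],
              [ 0.0,  0.0,  0.0]])

p0 = np.array([1.0, 0.0, 0.0])
for t in (5.0, 10.0, 20.0):
    p = p0 @ expm(Q * t)                           # state probabilities at time t
    reliability = p[0] + p[1]                      # not yet absorbed into the down state
    erlang2 = (1.0 + lam * t) * np.exp(-lam * t)   # exact Erlang-2 survival function
    print(f"t={t:5.1f}  R(t)={reliability:.4f}  Erlang-2={erlang2:.4f}")
```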

13.
Reliability models for multistate coherent systems require customer interaction. The customer defines the distinct component and system states; knowing the component states, the authors estimate the probability distribution of each component. The customer specifies when a change in the state of any component forces a change in the state of the system, and from this the authors present a methodology for developing the customer's structure function and calculating the probability of each system state. The customer defines and evaluates reliability, and using the customer's definition one can summarize the probability distribution of the system states.
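
The structure function is customer-specific, so the following is only a toy sketch: two components with three states each (0 failed, 1 degraded, 2 good), a hypothetical "system is as good as its worst component" structure function, and assumed independent component-state distributions, from which the system-state distribution is obtained by enumeration.

```python
from itertools import product

# Assumed component-state probability distributions over states 0, 1, 2.
comp_dists = [
    {0: 0.05, 1: 0.15, 2: 0.80},
    {0: 0.10, 1: 0.20, 2: 0.70},
]

def phi(states):
    """Hypothetical customer-defined structure function:
    the system is only as good as its worst component."""
    return min(states)

system_dist = {}
for states in product(*(d.keys() for d in comp_dists)):
    p = 1.0
    for d, x in zip(comp_dists, states):
        p *= d[x]                          # independence assumption
    k = phi(states)
    system_dist[k] = system_dist.get(k, 0.0) + p

print(system_dist)   # probability of each system state
```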

14.
This paper presents a complete system for analyzing a vehicle's behavior in the context of real-time traffic video surveillance applications. To obtain the best possible results, it is fundamental to exploit the scene characteristics and the predefined traffic rules. For that purpose, an initial training step is performed that involves estimating the geometrical structure of the road, i.e., the depth relative to the camera, the vanishing points, the road area itself, the road decomposition (into normal and forbidden traffic lanes or areas), the traffic rules, the typical vehicle trajectories and speeds, and the lane-changing rules. This process leads to a scene model, which is used together with a simple vehicle geometrical model during the vehicle detection, tracking and trajectory estimation phases to improve robustness against perspective and occlusion effects. Shadow effects are also accounted for during the moving-object detection phase. Finally, this spatio-temporal analysis is used to obtain information about the vehicles' behavior. Experiments show that the information obtained is reliable and can be computed in real time.

15.
Acyclic Markov chains are frequently used for reliability analysis of nonmaintained mission-critical computer-based systems. Since traditional sensitivity (or importance) analysis using Markov chains can be computationally expensive, an approximate approach is presented which is easy to compute and which performs quite well in test cases. This approach is presented in terms of a Markov chain which is used for solving a dynamic fault-tree, but the approach applies to any acyclic Markov reliability model.

16.
Monte Carlo techniques for estimating various network reliability characteristics, including terminal connectivity, are developed under the assumption that edges are subject to failure with arbitrary probabilities and nodes are absolutely reliable. The core of the approach is the introduction of network time-evolution processes together with certain graph-theoretic machinery, resulting in a considerable increase in accuracy for Monte Carlo estimates, especially for highly reliable networks. Simulation strategies and numerical results are presented and discussed.
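
For orientation, the crude estimator that such variance-reduction techniques improve on takes only a few lines; the sketch below estimates two-terminal reliability for a small bridge network with assumed edge failure probabilities and perfectly reliable nodes.

```python
import random
from collections import deque

def two_terminal_reliability(nodes, edges, s, t, trials=100_000, seed=1):
    """Crude Monte Carlo estimate of P(s and t stay connected) when each
    edge (u, v, p_fail) fails independently and nodes never fail."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # sample the surviving edges
        adj = {v: [] for v in nodes}
        for u, v, p_fail in edges:
            if rng.random() >= p_fail:
                adj[u].append(v)
                adj[v].append(u)
        # breadth-first search from s over surviving edges
        seen, queue = {s}, deque([s])
        while queue:
            x = queue.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    queue.append(y)
        hits += t in seen
    return hits / trials

# A small "bridge" network with assumed edge failure probabilities.
nodes = [1, 2, 3, 4]
edges = [(1, 2, 0.1), (1, 3, 0.1), (2, 3, 0.05), (2, 4, 0.1), (3, 4, 0.1)]
print(two_terminal_reliability(nodes, edges, 1, 4))
```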

17.
Reliability issues are important during the design of VLSI integrated circuits built on silicon, due to several design constraints (higher performance and frequency, device miniaturization, higher levels of on-chip integration) that must be satisfied by the final product. Digital designs are usually subject to failures due to the increased operating temperature caused by their high power dissipation. This paper addresses the problem of analyzing reliability with respect to power consumption for digital systems constructed in CMOS technology. The solution is simulation-based and relies on a new cellular-automaton-based model which is particularly suitable for identifying the power characteristics of a sequential design. The model is discussed in detail; it provides a homogeneous representation of all the components of the circuit: primary inputs, flip-flops, primary outputs, and their related cones of combinational logic are all modeled in the same way by means of cellular automaton cells. The model is used to analyze the reliability of sequential VLSI circuits. To prove the applicability of the model, experimental results are reported on some standard benchmarks taken from the literature.

18.
A reliability measure is proposed that takes into account routing and rerouting policies after failures as well as the capacity of the network to satisfy its demand. The measure, based on the evaluation of lost call traffic, requires solving a routing model for the state of perfect functioning as well as for the most probable failure states. A type of routing model useful for network planning is also proposed; it is closer to reality and easier to implement than the classical multicommodity formulations. A convex-simplex implementation with a reoptimization feature, explicitly adapted to the proposed model, is used.

19.
Some useful reliability growth models which have simple graphical interpretations are studied in this paper. The proposed models are inspired by the Duane model. For each of the models, the plot of the cumulative number of failures against the running time, when a suitable scale is used, will tend to lie on a straight line if the model is valid; otherwise the model should be rejected. The slope of the fitted line and its intercept on the vertical axis give estimates of the parameters, so the approach provides a simple graphical tool for model validation and parameter estimation. In particular, we propose a "first-model-validation-then-parameter-estimation" approach which simplifies the model validation and parameter estimation problem in software reliability analysis. Numerical analyses of several sets of software failure data are also provided to illustrate the ideas.
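
For the original Duane model the straight-line construction is a log-log plot of the cumulative number of failures against cumulative test time; the sketch below fits that line to hypothetical failure-time data and reads the parameters of N(t) = a * t**b off the slope and intercept.

```python
import numpy as np

# Hypothetical cumulative failure times (hours of testing).
failure_times = np.array([12.0, 31.0, 62.0, 110.0, 175.0, 265.0, 390.0, 540.0])
cum_failures  = np.arange(1, len(failure_times) + 1)

# Duane-style plot: log cumulative failures against log running time.
x, y = np.log(failure_times), np.log(cum_failures)
slope, intercept = np.polyfit(x, y, 1)

# If the points lie close to a straight line the model is accepted,
# and N(t) ~ a * t**b with b = slope and a = exp(intercept).
b, a = slope, np.exp(intercept)
max_residual = np.max(np.abs(y - (slope * x + intercept)))   # crude linearity check
print(f"a = {a:.4f}, b = {b:.3f}, max log-residual = {max_residual:.3f}")
```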

20.
The reliability model developed by Christer, which involves the notion of delay time, is used; the delay time is the span of time from when a defect is first detected upon inspection to when the item is considered to have failed. In this paper a new, simplified formula for the reliability function is obtained for the case in which the delay-time and time-to-failure densities are exponential distributions. The MLE of the reliability function is also obtained, and a numerical procedure is used to solve the results for the new model.
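
The paper's simplified formula is not reproduced in the abstract. One common reading of the delay-time setup, assuming the time until a defect first arises and the subsequent delay time are independent exponentials with the made-up rates below, gives the reliability function as the survival function of their sum; this is a generic sketch, not the paper's own result.

```python
from math import exp

lam = 0.01   # assumed rate at which a defect first arises (per hour)
mu  = 0.05   # assumed reciprocal of the mean delay time (per hour)

def reliability(t):
    """Survival function of the sum of two independent exponentials
    (time until a defect appears + delay time), valid for lam != mu."""
    return (mu * exp(-lam * t) - lam * exp(-mu * t)) / (mu - lam)

for t in (10.0, 50.0, 100.0):
    print(f"t = {t:6.1f} h   R(t) = {reliability(t):.4f}")
```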
