Similar Literature
20 similar documents found.
1.
This work demonstrates the feasibility of reliability modeling of systems with repair capability using a semi-Markov process. A two-unit system with exponential failure times but general repair times is studied. Formulas for state-transition probabilities, waiting-time distribution functions, and mean time in each state are developed. These quantities are expressed in terms of the Laplace transform of repair time distribution functions. Once these quantities are known, mean time to system failure and system availability, as well as other system parameters, can be found using matrix manipulations. In addition, time-dependent results may be obtained. A numerical example varying the parameter in a repair-time law is presented. The formulas developed can be extended to larger systems with repair capability for only one unit at a time and exponential failure times.
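
As a hedged illustration of the kind of quantity involved (the notation below is ours, not the paper's): for a two-unit system in which the operating unit fails at exponential rate λ while the other unit is under repair with repair-time distribution G, the semi-Markov transition probabilities and mean sojourn times are driven by the Laplace-Stieltjes transform of G evaluated at λ.

```latex
% Illustrative sketch (our notation): two-unit system, exponential unit
% failures at rate \lambda, general repair-time distribution G(t).
\[
  \tilde{G}(\lambda) \;=\; \int_0^{\infty} e^{-\lambda t}\, dG(t)
  \qquad\text{(probability that a repair finishes before the operating unit fails)}
\]
\[
  \text{mean time in the ``one unit up, one under repair'' state: }
  \mathbb{E}\bigl[\min(X, R)\bigr] \;=\; \frac{1 - \tilde{G}(\lambda)}{\lambda},
  \qquad X \sim \mathrm{Exp}(\lambda),\; R \sim G .
\]
```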

2.
Soft errors due to cosmic particles are a growing reliability threat for VLSI systems. The vulnerability of FPGA-based designs to soft errors is higher than that of ASIC implementations, since the majority of chip real estate is dedicated to memory bits, configuration bits, and user bits. Moreover, Single Event Upsets (SEUs) in the configuration bits of SRAM-based FPGAs result in permanent errors in the mapped design. FPGAs are widely used in the implementation of high performance information systems. Since the reliability requirements of these high performance information sub-systems are very stringent, the reliability of the FPGA chips used in the design of such systems plays a critical role in the overall system reliability. In this paper, we compare and validate the soft error rate of FPGA-based designs used in the Logical Unit Module board of a commercial information system with the field error rates obtained from actual field failure data. This comparison confirms that our analytical tool is very accurate (there is an 81% overlap in the FIT rate range obtained with our analytical modeling framework and the field failure data studied). It can be used for identifying vulnerable modules within the FPGA for cost-effective reliability improvement.

3.
Recently, considerable attention has been devoted to modeling and analysis of the behavior of the ranging error in indoor environments. Ranging error modeling is essential in the design of precise time-of-arrival (TOA) based indoor geolocation systems. In this paper we present a new framework for simulation of the dynamic spatial variations of the ranging error observed by a mobile user, based on an application of a Markov model. The model classifies the behavior of the ranging error into four main categories associated with the four states of the Markov process. The parameters of the model are extracted from empirical data collected from a measurement-calibrated ray tracing (RT) algorithm in a typical office environment. Simulated errors from the Markov model and actual errors from the empirical data show close agreement.
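
The following sketch illustrates the general idea in Python; the 4x4 transition matrix and the per-state error statistics are invented placeholders, not the measurement-derived parameters reported in the paper.

```python
import numpy as np

# Hypothetical illustration: a 4-state Markov chain whose states select
# different ranging-error behaviours. P and the per-state error statistics
# below are placeholders, NOT the paper's measurement-calibrated values.
P = np.array([[0.90, 0.05, 0.03, 0.02],
              [0.10, 0.80, 0.07, 0.03],
              [0.05, 0.10, 0.75, 0.10],
              [0.02, 0.08, 0.15, 0.75]])
error_mean = [0.0, 0.5, 2.0, 6.0]   # metres, per state (illustrative)
error_std  = [0.1, 0.4, 1.5, 4.0]

rng = np.random.default_rng(0)

def simulate_ranging_error(n_steps, state=0):
    """Walk the chain and draw a ranging error from the current state."""
    errors = np.empty(n_steps)
    for k in range(n_steps):
        errors[k] = rng.normal(error_mean[state], error_std[state])
        state = rng.choice(4, p=P[state])
    return errors

print(simulate_ranging_error(10))
```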

4.
Two broad categories of human error occur during software development: (1) development errors made during requirements analysis, design, and coding activities; (2) debugging errors made during attempts to remove faults identified during software inspections and dynamic testing. This paper describes a stochastic model that relates the software failure intensity function to development and debugging error occurrence throughout all software life-cycle phases. Software failure intensity is related to development and debugging errors because data on development and debugging errors are available early in the software life-cycle and can be used to create early predictions of software reliability. Software reliability then becomes a variable which can be controlled up front, viz., as early as possible in the software development life-cycle. The model parameters were derived from data reported in the open literature. A procedure to account for the impact of influencing factors (e.g., experience, schedule pressure) on the parameters of this stochastic model is suggested. This procedure is based on the success likelihood methodology (SLIM). The stochastic model is then used to study the introduction and removal of faults and to calculate the consequent failure intensity value of a small software product developed using a waterfall software development process.

5.
This paper presents an approach for performing reliability analysis of bridge and parallel-series networks with critical and non-critical human errors. Reliability and mean time to failure formulas are developed for exponential and Rayleigh distributed failure times. Selected plots are shown to demonstrate the effect of human errors on system reliability and mean time to failure. Equations for estimating the number of units in series, parallel, series-parallel, and parallel-series networks are presented.
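
As a hedged special case of how critical and non-critical human errors can enter such formulas (exponential failure times and independent units only; the bridge-network results in the paper are more general, and the rates below are our own notation, not the paper's):

```latex
% Illustrative special case, our notation: \lambda_u = unit failure rate,
% \lambda_h = non-critical human-error rate per unit, \lambda_c = critical
% (system-level) human-error rate.
\[
  R_{\text{unit}}(t) = e^{-(\lambda_u + \lambda_h)t},
  \qquad
  \mathrm{MTTF}_{\text{unit}} = \frac{1}{\lambda_u + \lambda_h},
\]
\[
  R_{\text{sys}}(t)
    = e^{-\lambda_c t}\Bigl[\,1 - \bigl(1 - e^{-(\lambda_u + \lambda_h)t}\bigr)^{n}\Bigr]
  \quad\text{for } n \text{ identical units in parallel.}
\]
```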

6.
Reliability Modeling Using SHARPE
Combinatorial models such as fault trees and reliability block diagrams are efficient for model specification and often efficient in their evaluation. But it is difficult, if not impossible, to allow for dependencies (such as repair dependency and near-coincident-fault type dependency), transient and intermittent faults, standby systems with warm spares, and so on. Markov models can capture such important system behavior, but the size of a Markov model can grow exponentially with the number of components in the system. This paper presents an approach for avoiding the large state-space problem. The approach uses a hierarchical modeling technique for analyzing complex reliability models. It allows the flexibility of Markov models where necessary and retains the efficiency of combinatorial solution where possible. Based on this approach, a computer program called SHARPE (Symbolic Hierarchical Automated Reliability and Performance Evaluator) has been written. The hierarchical modeling technique provides a very flexible mechanism for using decomposition and aggregation to model large systems; it allows for both combinatorial and Markov or semi-Markov submodels, and can analyze each model to produce a distribution function. The choice of the number of levels of models and the model types at each level is left up to the modeler. Component distribution functions can be any exponential polynomial whose range is between zero and one. Examples show how combinations of models can be used to evaluate the reliability and availability of large systems using SHARPE.
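
To make the decomposition idea concrete, here is a small hedged sketch in Python (this is not SHARPE and not its input language): a three-state Markov submodel is solved for the availability of one duplexed subsystem, and the result is then combined combinatorially at the top level. All rates are invented.

```python
import numpy as np

# Hierarchical sketch: CTMC submodel at the bottom, series combination on top.
lam, mu = 1e-3, 1e-1          # failure and repair rates of one duplexed unit

# CTMC submodel: states = number of working copies {2, 1, 0}; one repair at a time.
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [mu, -(mu + lam), lam],
              [0.0, mu, -mu]])

# Steady-state probabilities: solve pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
A_sub = pi[0] + pi[1]          # subsystem is up with at least one working copy

# Combinatorial top level: three such subsystems in series.
A_system = A_sub ** 3
print(f"subsystem availability {A_sub:.6f}, system availability {A_system:.6f}")
```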

7.
Geometrical Error Modeling and Compensation Using Neural Networks
This paper describes an approach based on neural networks (NNs) for geometrical error modeling and compensation for precision motion systems. A laser interferometer is used to obtain systematic measurements of the geometrical errors, based on which an error model may be constructed and, consequently, a model-based compensation may be incorporated in the motion-control system. NNs are used to approximate the components of the geometrical errors, thus dispensing with the conventional lookup table. Apart from serving as a more adequate model due to their inherent nonlinear characteristics, the use of NNs also results in lower memory requirements to implement the error compensation for a specified precision compared to the use of a lookup table. The adequacy and clear benefits of the proposed approach are illustrated via applications to various configurations of precision-positioning stages, including a single-axis stage, a gantry, and a complete XY stage.
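
A minimal sketch of the compensation idea, using scikit-learn's MLPRegressor as a stand-in for the paper's network and a synthetic error profile in place of interferometer data (network size and error model are arbitrary assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
position = np.linspace(0.0, 300.0, 400).reshape(-1, 1)                   # mm along one axis
true_error = 2e-3 * position[:, 0] + 0.5 * np.sin(position[:, 0] / 25.0)  # um, synthetic
measured = true_error + rng.normal(0.0, 0.05, size=true_error.shape)      # noisy "interferometer" data

# NN approximation of the geometric error component along this axis.
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(position, measured)

# Compensation: subtract the predicted error from the measurement.
residual = measured - model.predict(position)
print("RMS error before compensation: %.3f um" % np.sqrt(np.mean(measured ** 2)))
print("RMS error after  compensation: %.3f um" % np.sqrt(np.mean(residual ** 2)))
```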

8.
Failure times of one type of aircraft-engine component were recorded. In addition, lifetimes were periodically recorded for unfailed engine components. The data are considered as multiple s-independent grouped censored samples with failure times known. The assumed failure model is the 2-parameter Weibull distribution. Maximum likelihood estimates are derived. The exponential model is used for comparison. Monte Carlo simulation is used to derive the s-bias and mean square error of the estimates. The asymptotic covariance matrix was computed for the sampling conditions studied. The maximum likelihood estimates of the reliability were obtained as a function of component operating time since overhaul.
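
A simplified, hedged sketch of the estimation step: a 2-parameter Weibull maximum-likelihood fit with individually right-censored observations (the paper treats grouped censored samples; the data below are invented, not the engine-component records):

```python
import numpy as np
from scipy.optimize import minimize

times    = np.array([120., 340., 560., 610., 800., 950., 200., 430., 700., 880.])
observed = np.array([1, 1, 1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)  # False = still unfailed

def neg_log_lik(params):
    k, lam = params                       # Weibull shape and scale
    if k <= 0 or lam <= 0:
        return np.inf
    z = times / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k   # log density of failures
    log_S = -z ** k                                          # log survival of censored units
    return -(log_f[observed].sum() + log_S[~observed].sum())

fit = minimize(neg_log_lik, x0=[1.0, times.mean()], method="Nelder-Mead")
k_hat, lam_hat = fit.x
print(f"shape ~ {k_hat:.2f}, scale ~ {lam_hat:.0f}")
print("reliability at t = 500:", np.exp(-(500.0 / lam_hat) ** k_hat))
```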

9.
10.
CTMCs (continuous-time Markov chains) are a commonly used formalism for modeling fault-tolerant systems. One of the major drawbacks of CTMCs is the well-known state-space explosion problem. This paper develops and analyzes a method (SC-BM) to compute bounds for the reliability of nonrepairable fault-tolerant systems in which only a portion of the state space of the CTMC is generated. SC-BM uses the failure distance concept, as does the method described previously by the authors (1997); but unlike that method, which is based on the computation of exact failure distances, SC-BM uses lower bounds for the failure distances, which are computed on the system fault tree, avoiding the computation and storage of all minimal cuts required in the earlier work. This is important because computation of all minimal cuts is NP-hard and the number of minimal cuts can be very large. In some cases SC-BM gives exactly the same bounds as the previous method; in other cases it gives less tight bounds. SC-BM computes tight bounds for the reliability of quite complex systems with an affordable number of generated states for short to quite large mission times. The analysis of several examples seems to show that the bounds obtained by SC-BM appreciably outperform those obtained by simpler methods and, when they are not equal, are only slightly worse than the bounds obtained by the previous method. In addition, the overhead in CPU time due to computing lower bounds for failure distances seems to be reasonable.

11.
In this paper we obtain transient solutions of a software reliability model under the assumption that the failure rate is proportional to the number of errors remaining in the software, with imperfect debugging and error generation. The maximum number of errors in the software is assumed to be finite. We obtain the transient probabilities for the number of remaining errors, the mean number of errors remaining, the reliability of the software, the expected number of failures, etc. A method to estimate the model parameters is given. Finally, a numerical example is presented.
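
One way to picture such transient solutions is the following hedged sketch, which assumes a particular finite Markov chain (state = number of remaining errors, failure rate proportional to it, removal probability p, error-generation probability q); this is an assumed structure in the spirit of the abstract, not necessarily the paper's exact model:

```python
import numpy as np
from scipy.linalg import expm

N, phi, p, q = 10, 0.05, 0.85, 0.10      # invented parameters

# Generator of the assumed chain: from state n > 0, failures occur at rate n*phi;
# a failure removes an error with prob p, generates an extra error with prob q,
# and otherwise leaves the error count unchanged (imperfect debugging).
Q = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    rate = n * phi
    Q[n, n - 1] += rate * p              # successful removal
    if n < N:
        Q[n, n + 1] += rate * q          # new error generated
    Q[n, n] = -Q[n].sum()                # diagonal = negative total transition rate

pi0 = np.zeros(N + 1)
pi0[N] = 1.0                             # start with all N errors present

for t in (10.0, 50.0, 200.0):
    pi_t = pi0 @ expm(Q * t)             # transient state probabilities at time t
    print(f"t={t:5.0f}: E[errors remaining] = {pi_t @ np.arange(N + 1):.2f}, "
          f"P(fully debugged) = {pi_t[0]:.3f}")
```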

12.
An exponential software reliability growth model (SRGM) incorporating a situation where more than one failure can be attributed to one fault/error has been proposed. Different model parameters have been estimated and an optimal release policy which minimizes the software cost, subject to achieving a given level of reliability, has been discussed. Finally, a numerical example illustrating the applicability of the model and a few practical situations where this model can be applied are also given.
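
A hedged sketch of a release-time calculation of this general kind, using the textbook exponential mean-value function m(t) = a(1 - e^(-bt)) and generic cost coefficients rather than the paper's multi-failure-per-fault model (all numbers invented):

```python
import numpy as np

a, b = 120.0, 0.02                        # expected total faults, detection rate
c_test, c_field, c_time = 50.0, 500.0, 10.0   # cost per fault fixed in test / in field, cost per unit test time
T_lc = 2000.0                             # software life-cycle length
x, R0 = 10.0, 0.95                        # mission length and required reliability R(x | T)

m = lambda t: a * (1.0 - np.exp(-b * t))  # NHPP mean value function

def cost(T):
    return c_test * m(T) + c_field * (m(T_lc) - m(T)) + c_time * T

def reliability(T):
    # Conditional reliability of an NHPP model: R(x | T) = exp(-(m(T + x) - m(T))).
    return np.exp(-(m(T + x) - m(T)))

T_grid = np.linspace(1.0, T_lc, 20000)
feasible = T_grid[reliability(T_grid) >= R0]          # release times meeting the constraint
T_star = feasible[np.argmin(cost(feasible))]
print(f"release at T ~ {T_star:.0f}, cost ~ {cost(T_star):.0f}, R(x|T) = {reliability(T_star):.3f}")
```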

13.
In this paper, the availability and mean time to failure of a three-unit repairable electronic equipment with three states (good, degraded, and failed) are investigated under critical human errors. The equipment suffers two types of failures: unit failures and failures due to critical human errors; the entire system can fail because of a critical human error. The failure and repair times of the system follow exponential and general distributions, respectively. Laplace transforms of the probabilities of the complex system being in its various states are obtained, along with the steady-state behaviour of the equipment. There is only one repair facility, which is used only when the system is in the degraded state or has failed due to a unit failure. A numerical example is appended to highlight the important results, and three graphs are given at the end.

14.
Context modeling is an extensively studied paradigm for lossless compression of continuous-tone images. However, without careful algorithm design, high-order Markovian modeling of continuous-tone images is too expensive in both computational time and space to be practical. Furthermore, the exponential growth of the number of modeling states in the order of a Markov model can quickly lead to the problem of context dilution; that is, an image may not have enough samples for good estimates of conditional probabilities associated with the modeling states. New techniques for context modeling of DPCM errors are introduced that can exploit context-dependent DPCM error structures to the benefit of compression. New algorithmic techniques of forming and quantizing modeling contexts are also developed to alleviate the problem of context dilution and reduce both time and space complexities. By innovative formation, quantization, and use of modeling contexts, the proposed lossless image coder has a highly competitive compression performance and yet remains practical.
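
A toy sketch of forming and quantizing contexts for DPCM prediction errors (the predictor and the three-level activity quantizer are deliberately simple and the image is synthetic; this is not the paper's coder):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(np.int32)   # stand-in image

errors, contexts = [], []
for i in range(1, img.shape[0]):
    for j in range(1, img.shape[1] - 1):
        w, n = int(img[i, j - 1]), int(img[i - 1, j])
        pred = (w + n) // 2                       # simple DPCM predictor
        errors.append(int(img[i, j]) - pred)      # prediction error to be coded
        activity = abs(w - n)                     # local gradient used as context
        contexts.append(0 if activity < 8 else 1 if activity < 32 else 2)

errors, contexts = np.array(errors), np.array(contexts)
for c in range(3):
    e = errors[contexts == c]                     # conditional error statistics per context
    print(f"context {c}: {e.size:5d} samples, error std {e.std():6.2f}")
```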

15.
Error-control coding is widely used in digital communication systems and in computer storage and computing systems to lower the bit error rate, improve communication quality, and meet the reliability requirements of data transmission channels. The RS (Reed-Solomon) code is a forward error-correcting code with excellent performance; it can correct both random and burst errors, and its construction makes it particularly well suited to correcting burst errors. Building on the encoding and decoding principles of systematic RS codes, this paper proposes an encoding and decoding method for the shortened RS(16,12) code, uses MATLAB to simulate its error-correcting capability over Gaussian and Rayleigh channels, and analyzes its error-correction performance.
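
A rough Python stand-in for part of this experiment (the abstract's simulation is in MATLAB), assuming the third-party reedsolo package (version >= 1.0, whose decode() is assumed to return a (message, codeword, errata-positions) tuple). With 4 parity symbols the codec behaves as a shortened RS code over GF(2^8): a 12-symbol message becomes a 16-symbol codeword correcting up to 2 symbol errors. No channel model is simulated here, and the field is GF(2^8) rather than the GF(16)-based construction of the paper.

```python
from reedsolo import RSCodec

rsc = RSCodec(4)                         # n - k = 4 parity symbols, t = 2
message = bytes(range(12))               # 12 information symbols
codeword = rsc.encode(message)           # 16-symbol shortened codeword
print(len(codeword))                     # -> 16

corrupted = bytearray(codeword)
corrupted[3] ^= 0x5A                     # inject two symbol errors
corrupted[10] ^= 0xFF
decoded, _, errata = rsc.decode(bytes(corrupted))
print(decoded == message, list(errata))  # -> True, positions of corrected symbols
```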

16.
This paper presents an approach to system reliability modeling where failures and errors are not statistically independent. The repetition of failures and errors until their causes are removed is affected by the system processes and degrades system reliability. Four types of failures are introduced: hardware transients, software and hardware design errors, and program faults. Probability of failure, mean time to failure, and system reliability depend on the type of failure. Actual measurements show that the most critical factor for system reliability is the time after occurrence of a failure when this failure can be repeated in every process that accesses a failed component. An example involving measurements collected in an IBM 4331 installation validates the model and shows its applications. The degradation of system reliability can be appreciable even for very short periods of time. This is why the conditional probability of repetition of failures is introduced. The reliability model allows prediction of system reliability based on the calculation of the mean time to failure. The comparison with the measurement results shows that the model with process dependent repetition of failures approximates system reliability with better accuracy than the model with the assumption of independent failures.

17.
In wireless mobile networks, quantities such as the call blocking probability, call dropping probability, handoff probability, handoff rate, and the actual call holding times for both complete and incomplete calls are very important performance parameters in network performance evaluation and design. In the past, their analytical computation has been given only under the classical exponential assumptions for all involved time variables. In this paper, we relax the exponential assumptions for the involved time variables and, under an independence assumption on the cell residence times, derive analytical formulae for these parameters using a novel unifying analytical approach. It turns out that the computation of many performance parameters boils down to computing a certain type of probability, and the obtained analytical results can be easily applied when the Laplace transform of the probability density function of the call holding time is a rational function. Thus, easily computable results can be obtained when the call holding time follows the mixed-Erlang distribution, a distribution model with universal approximation capability. More importantly, this paper develops a new analytical approach to performance evaluation for wireless networks and mobile computing systems.

18.
Extraction of electrical contact parameters by transmission line model (TLM) test structures is based on the assumption that the electrical and geometrical parameters of the contacts within a structure are identical. Statistical modeling is presented to show that the usual extraction procedure can lead to large errors in the extracted contact parameters in the case of nonideal TLM structures, even if there is no error in the measured electrical and geometrical parameters. Errors in the extracted parameters as functions of the nominal contact parameters are presented graphically. These graphs can be used to assess the accuracy of the TLM measurements and to optimize TLM test structure design. In the parameter intervals examined, the TLM method works best (yields the fewest errors in all extracted parameters) when the geometrical length of a contact is equal to about two times the electrical transfer length.
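
For context, the standard single-structure TLM extraction (on which the paper's error analysis builds) can be sketched as follows; the resistance readings are invented, and the transfer-length relation assumes the contact is much longer than L_T:

```python
import numpy as np

# R_total(d) = (R_sh / W) * d + 2 * R_c ; the x-axis intercept lies at d = -2 * L_T.
W = 100e-6                                    # contact width, m
d = np.array([5, 10, 20, 40, 80]) * 1e-6      # pad spacings, m
R = np.array([3.1, 4.2, 6.4, 10.8, 19.6])     # measured total resistances, ohm (invented)

slope, intercept = np.polyfit(d, R, 1)        # linear fit of R_total vs spacing
R_sh = slope * W                              # sheet resistance, ohm/sq
R_c = intercept / 2.0                         # contact resistance, ohm
L_T = R_c * W / R_sh                          # transfer length, m
rho_c = R_c * L_T * W                         # specific contact resistivity, ohm*m^2

print(f"R_sh = {R_sh:.1f} ohm/sq, R_c = {R_c:.2f} ohm, "
      f"L_T = {L_T * 1e6:.1f} um, rho_c = {rho_c:.2e} ohm*m^2")
```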

19.
The Goel-Okumoto (1979) model of reliability growth is applied to the failure data for a large software development project. This is done by fitting the model parameters to the error counts and test times, which are periodically tabulated. In the early phases of system testing, the problem of inaccurate estimation due to insufficient reliability growth occurs. A simple test is devised to measure the stability of the data with respect to this problem. In practice, such a test on the data can be used to determine when system testing has sufficiently progressed to allow one to expect reasonable estimates from reliability growth models.
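
A minimal sketch of the fitting step described above, using least squares on invented cumulative error counts (the paper's stability test itself is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([10, 20, 30, 40, 50, 60, 70, 80.0])      # cumulative test time (invented)
n = np.array([28, 50, 68, 81, 91, 98, 104, 108.0])    # cumulative errors found (invented)

def m(t, a, b):
    # Goel-Okumoto mean value function: expected errors found by time t.
    return a * (1.0 - np.exp(-b * t))

(a_hat, b_hat), _ = curve_fit(m, t, n, p0=[n[-1], 0.01])
print(f"estimated total errors a ~ {a_hat:.0f}, detection rate b ~ {b_hat:.3f}")
print(f"errors remaining ~ {a_hat - n[-1]:.0f}")
```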

20.
A unit is placed on test for a fixed time, and the number of failures is observed. The stochastic process generating the failures is assumed to have s-independent, Erlang distributed times between failures. Bayes estimates of the reciprocal MTBF (RMTBF) and reliability are given, where the loss function is squared error and the prior distribution for the RMTBF is gamma. We investigate what happens to the Bayes estimates when the shape parameter in the failure model is incorrectly specified (e.g., the failure model is assumed to be Poisson when it is not). This question is answered for parameters which are typical of a wide range of actual military equipment failure data. As the shape parameter in the failure model changes: 1) there is only a small to moderate change in the estimates of the RMTBF; 2) there is a small to moderate change in the estimate of reliability for small numbers of failures, but a larger change for an unusually large number of failures; 3) there is little change in the s-efficiencies of the estimates as measured by s-expected squared error loss. For the range of parameters in this study, not much is lost in s-efficiency by restricting attention to the mathematically tractable Erlang failure model instead of using a more general gamma failure model.
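
As a hedged worked special case (Erlang shape parameter equal to 1, i.e. the Poisson count model, with a Gamma(α, β) prior on the reciprocal MTBF λ, where β is a rate parameter):

```latex
% Worked special case only (not the general Erlang analysis of the paper):
\[
  \lambda \mid n, T \;\sim\; \mathrm{Gamma}(\alpha + n,\; \beta + T)
  \qquad (n \text{ failures observed in fixed test time } T),
\]
\[
  \hat{\lambda}_{\text{Bayes}} = \frac{\alpha + n}{\beta + T},
  \qquad
  \hat{R}(t) = \mathbb{E}\bigl[e^{-\lambda t}\bigr]
             = \Bigl(\frac{\beta + T}{\beta + T + t}\Bigr)^{\alpha + n}
  \quad\text{(squared-error Bayes estimates).}
\]
```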
