Similar Literature
20 similar articles found (search time: 15 ms)
1.
The increasing complexity of software in embedded systems and industrial business domains has made reliability analysis essential for modern systems. Reliability analysis has become a crucial part of the system development life cycle, and a new approach is needed to enable early reliability estimation, especially for a system still under design. Existing approaches, however, neglect the correlation between system resources and system tasks when estimating system reliability, which restricts the accuracy of the estimate and makes it difficult to identify critical resources and tasks during the design phase. This paper proposes a model-driven, scenario-based approach that estimates system reliability and identifies critical resources and system tasks during the design phase. The approach builds on the PerFAM model, which exposes timing failures through system scenarios. It is validated by applying a sensitivity analysis to a case study, which demonstrates the essential relationship between system reliability and both resources and tasks, an integral part of any system reliability estimation assessment.

2.
The main purpose of this paper is to provide a methodology for assessing fuzzy Bayesian system reliability from fuzzy component reliabilities; specifically, we discuss fuzzy Bayesian system reliability assessment based on the Pascal distribution, since data sometimes cannot be measured and recorded precisely. To apply the Bayesian approach, the fuzzy parameters are treated as fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method is used to construct the fuzzy Bayes point estimator of system reliability by invoking the well-known 'Resolution Identity' theorem of fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. To this end, we transform the original problem into a nonlinear programming problem, which is then divided into four sub-problems to simplify the computation. Finally, the sub-problems can be solved with any commercial optimizer, e.g. GAMS or LINGO.
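The alpha-cut propagation behind the 'Resolution Identity' can be sketched in a few lines. The sketch below is illustrative only and is not the paper's Pascal-distribution model: it assumes a Beta prior whose first parameter is a triangular fuzzy number, and propagates each alpha-cut of that fuzzy parameter through the (monotone) posterior mean. All function names are made up for illustration.

```python
def posterior_reliability(a, b, s, f):
    """Bayes point estimate (posterior mean) of a success probability
    under a Beta(a, b) prior, after observing s successes and f failures."""
    return (a + s) / (a + b + s + f)

def alpha_cut(tri, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (l, m, r)."""
    l, m, r = tri
    return (l + alpha * (m - l), r - alpha * (r - m))

def fuzzy_bayes_estimate(tri_a, b, s, f, alphas):
    """Propagate each alpha-cut of the fuzzy prior parameter 'a' through
    the posterior mean.  Since the posterior mean is monotone in 'a',
    the cut endpoints give the bounds (the Resolution Identity idea)."""
    cuts = {}
    for alpha in alphas:
        lo, hi = alpha_cut(tri_a, alpha)
        cuts[alpha] = (posterior_reliability(lo, b, s, f),
                       posterior_reliability(hi, b, s, f))
    return cuts
```

At membership level 1 the cut collapses to the crisp Bayes estimate; lower levels give nested intervals, which is the shape of the fuzzy point estimator the paper evaluates.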

3.
To have effective production planning and control, it is necessary to calculate the reliability and availability of the production system as a whole. Considering only machine reliability in the calculations would most likely result in unmet due dates. In this study, a new modelling approach for determining the reliability and availability of a production system is proposed that considers all the components of the system and their hierarchy in the system structure. Components of a production system are defined as production processes, and components of the processes as sub-processes. This hierarchical structure can model all kinds of failures, such as material and supply, management and personnel, and machine and equipment failures. In the analysis, a fuzzy Bayesian method is used to quantify the uncertainties in the production environment. The suggested modelling approach is illustrated with an example, in which a separate reliability and availability analysis considering only machine failures is also conducted and the results of the two analyses are compared.

4.
This paper presents a mathematical model for a new approach to calculating confidence intervals for software reliability projections. Unlike those calculated by current methods, these confidence intervals account for any uncertainty concerning the operational profile of the system.

5.
6.
The motion of comic characters includes different types of movement, such as walking or running. In a comic, a movement may be described by a series of non-continuous poses in a sequence of contiguous frames, with each pose appearing in one frame. We synthesize an animation from still comic frames. In this paper, we propose a model that analyzes the time series of a character's motion using a non-parametric Bayesian approach; a sequence of motions can then be generated automatically from the estimated time series. Experimental results show that the built time series model best matches the given frames, and unnatural distortions in the results are minimized. Copyright © 2011 John Wiley & Sons, Ltd.

7.
In a previous paper the convenience of using martingale theory in the analysis of Bayesian least-squares estimation was demonstrated. However, certain restrictions had to be imposed on either the feedback structure or on the initial values for the estimation. In the present paper these restrictions are removed, and necessary and sufficient conditions for strong consistency (in a Bayesian sense) are given for the Gaussian white noise case without any assumptions on closed-loop stability or on the feedback structure. In the open-loop case the poles are shown to be consistently estimated, almost everywhere, and in the closed-loop case certain choices of control law are shown to ensure consistency. Finally, adaptive control laws are treated, and implicit self-tuning regulators are shown to converge to the desired control laws.

8.
Traditional reliability assessment methods are all based on failures observed while the system software is running. For weapon system software, operational testing is extremely expensive and time-consuming, so extensive operational tests are impossible and high-quality failure data are difficult to collect. This paper proposes a Bayesian software reliability assessment method based on system-state verification coverage. The method uses a Bayes reliability model as the assessment criterion, ensures adequacy through state coverage and reliability through state-test verification, and promotes the parallel growth of trustworthiness and reliability.

9.
We present a Bayesian approach for modeling heterogeneous data and estimating multimodal densities using mixtures of Skew Student-t-Normal distributions [Gómez, H.W., Venegas, O., Bolfarine, H., 2007. Skew-symmetric distributions generated by the distribution function of the normal distribution. Environmetrics 18, 395-407]. A stochastic representation that is useful for implementing an MCMC-type algorithm, along with results on the existence of posterior moments, is obtained. Marginal likelihood approximations are derived in order to compare mixture models with different numbers of component densities. Data sets concerning Gross Domestic Product per capita (Human Development Report) and body mass index (National Health and Nutrition Examination Survey), previously studied in the related literature, are analyzed.

10.
11.
A fuzzy neural Petri net approach to reliability estimation of complex systems
To address the difficulty of reliability modeling for complex systems, a new fuzzy neural Petri net (FNPN) suited to complex-system reliability estimation is proposed. The paper first defines the FNPN and its firing rules, and then presents a learning algorithm. The FNPN combines the respective advantages of fuzzy Petri nets and neural networks: it can both represent and process a knowledge base of fuzzy production rules and learn from data, adjusting the model parameters from sample data to obtain an equivalent structure of the system's internals, so that the reliability of the system can be computed for non-sample data. Finally, an undirected network example shows that the method is feasible.

12.
In this paper, two models for predicting the mean time until the next failure based on a Bayesian approach are presented. Times between failures follow Weibull distributions with stochastically decreasing ordering on the hazard functions of successive failure time intervals, reflecting the tester's intent to improve the software quality with each corrective action. We apply the proposed models to actual software failure data and show that they give better results under the sum-of-squared-errors criterion than previous Bayesian models and other existing times-between-failures models. Finally, we use the likelihood ratio criterion to compare the new models' predictive performance.
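As a much simpler illustration of Weibull-based times-between-failures prediction (a plug-in point prediction, not the paper's Bayesian models with ordered hazards), one can fix the shape parameter, fit the scale by maximum likelihood, and predict the next time between failures by the fitted Weibull mean. All names here are illustrative.

```python
import math
import random

def weibull_mean(shape, scale):
    """Mean of a Weibull(shape, scale) distribution: scale * Gamma(1 + 1/shape)."""
    return scale * math.gamma(1.0 + 1.0 / shape)

def fit_scale_mle(times, shape):
    """Maximum-likelihood estimate of the Weibull scale when the shape
    is treated as known: scale = (mean of t**shape) ** (1/shape)."""
    n = len(times)
    return (sum(t ** shape for t in times) / n) ** (1.0 / shape)

def predict_next_tbf(times, shape):
    """Plug-in point prediction of the next time between failures."""
    return weibull_mean(shape, fit_scale_mle(times, shape))
```

A Bayesian treatment would instead place a prior on the scale (and shape) and report the posterior predictive mean; the plug-in version above is only the frequentist baseline.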

13.
Bayesian approaches have been widely used in quantitative trait locus (QTL) linkage analysis in experimental crosses, and have advantages in interpretability and in constructing parameter probability intervals. Most existing Bayesian linkage methods involve Monte Carlo sampling, which is computationally prohibitive for high-throughput applications such as eQTL analysis. In this paper, we present a Bayesian linkage model that offers directly interpretable posterior densities or Bayes factors for linkage. For our model, we employ the Laplace approximation for integration over nuisance parameters in backcross (BC) and F2 intercross designs. Our approach is highly accurate, and very fast compared with alternatives, including grid search integration, importance sampling, and Markov Chain Monte Carlo (MCMC). Our approach is thus suitable for high-throughput applications. Simulated and real datasets are used to demonstrate our proposed approach.
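The Laplace approximation used for the nuisance-parameter integrals can be illustrated generically. The sketch below is not the paper's QTL model: it approximates a one-dimensional integral of exp(logf) around an interior mode, taking the curvature by finite differences, and can be checked against a Beta integral whose closed form is known.

```python
import math

def laplace_integral(logf, mode, h=1e-4):
    """Laplace approximation to the integral of exp(logf(x)) around an
    interior mode m: exp(logf(m)) * sqrt(2*pi / -logf''(m)), with the
    second derivative taken by central finite differences."""
    d2 = (logf(mode + h) - 2.0 * logf(mode) + logf(mode - h)) / (h * h)
    return math.exp(logf(mode)) * math.sqrt(2.0 * math.pi / -d2)
```

For example, with logf(t) = 30 log t + 30 log(1 - t) (mode 0.5), the approximation recovers the Beta function B(31, 31) to within a few percent, which is why a single mode evaluation can replace MCMC or grid integration over a well-behaved nuisance parameter.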

14.
Under small-sample conditions, a random weighting resampling technique is used to obtain the distribution function of the variable of interest, which serves as prior information to be combined with the current sample; the reliability parameters are then estimated by the Bayesian method. A simulation example shows that this method achieves higher accuracy in reliability parameter estimation than classical parameter estimation methods.
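A minimal sketch of the idea, under simplifying assumptions not taken from the paper: random weighting (Dirichlet-weighted resampling) turns a small prior-stage sample into a distribution of its mean, whose moments are then used as a normal prior in a conjugate Bayesian update with the current sample (observation variance assumed known).

```python
import random
import statistics

def random_weighting_means(data, n_rep, rng):
    """Random-weighting (Bayesian bootstrap) replicates of the sample
    mean: Dirichlet(1, ..., 1) weights via normalized exponentials."""
    reps = []
    for _ in range(n_rep):
        w = [rng.expovariate(1.0) for _ in data]
        total = sum(w)
        reps.append(sum(wi * x for wi, x in zip(w, data)) / total)
    return reps

def normal_posterior_mean(prior_mean, prior_var, sample, sigma2):
    """Conjugate normal update with known observation variance sigma2;
    returns the posterior mean (a precision-weighted average)."""
    n = len(sample)
    xbar = sum(sample) / n
    precision = 1.0 / prior_var + n / sigma2
    return (prior_mean / prior_var + n * xbar / sigma2) / precision
```

Because the posterior mean is a convex combination of the prior mean and the current-sample mean, it always lies between the two, which matches the intuition of borrowing strength from the prior stage when the current sample is small.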

15.
16.
Archimedean copula estimation using Bayesian splines smoothing techniques
Copulas make it possible to specify multivariate distributions with given marginals. Various parametric proposals have been made in the literature for these quantities, mainly in the bivariate case. They can be systematically derived from multivariate distributions with known marginals, yielding e.g. the normal and the Student copulas. Alternatively, one can restrict attention to the sub-family of copulas named Archimedean. They are characterized by a strictly decreasing convex function on (0,1) which tends to +∞ at 0 (when strict) and which is 0 at 1. A ratio approximation of the generator and of its first derivative using B-splines is proposed, and the associated parameters are estimated using Markov chain Monte Carlo methods. The estimation is reasonably quick. The fitted generator is smooth and parametric. The generated chain(s) can be used to build "credible envelopes" for the above ratio function and derived quantities such as Kendall's tau, posterior predictive probabilities, etc. Parameters associated with parametric models for the marginals can be estimated jointly with the copula parameters. This is an interesting alternative to the popular two-step procedure, which assumes that the regression parameters are fixed known quantities when it comes to copula parameter estimation. A simulation study is performed to evaluate the approach. The practical utility of the method is illustrated by a basic analysis of the dependence structure underlying the diastolic and systolic blood pressures of male subjects.
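One of the derived quantities mentioned above, Kendall's tau, is a simple functional of the Archimedean generator: tau = 1 + 4 ∫₀¹ φ(t)/φ'(t) dt. The sketch below evaluates this integral by a midpoint rule for the Clayton generator (a standard parametric example, not the paper's B-spline fit), for which the closed form tau = θ/(θ+2) is available as a check.

```python
def clayton_generator(t, theta):
    """Clayton generator phi(t) = (t**(-theta) - 1) / theta."""
    return (t ** (-theta) - 1.0) / theta

def clayton_generator_deriv(t, theta):
    """First derivative phi'(t) = -t**(-theta - 1)."""
    return -t ** (-theta - 1.0)

def kendall_tau_from_generator(phi, dphi, n=2000):
    """Kendall's tau of an Archimedean copula from its generator:
    tau = 1 + 4 * integral_0^1 phi(t)/phi'(t) dt (midpoint rule)."""
    h = 1.0 / n
    acc = sum(phi((i + 0.5) * h) / dphi((i + 0.5) * h) for i in range(n))
    return 1.0 + 4.0 * acc * h
```

With a fitted B-spline generator, the same formula applied to each MCMC draw yields the "credible envelope" for tau described in the abstract.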

17.
The problem of modeling the revision of a decision maker's information based on the information of expert sources is considered. The basic model assumes that the information of the decision maker and the expert sources is in the form of probability mass functions. The modeling approach is Bayesian estimation, which relies on Kullback entropy and Shannon entropy for information measurement and produces a unique solution. The model not only considers information about the statistical dependence of the expert sources, but also uses information measuring the quality and importance of the individual expert sources in the form of a rank ordering. The outcome shows that the effects of the dependence and the rank ordering of the expert sources on the final decision cannot be isolated. In a special case where this isolation is possible, the effect of rank ordering decreases as the correlation coefficient increases from -1 to +1, and the effect of the correlation never exceeds the effect of rank ordering. Sensitivity analysis is performed to explore other properties of the model related to the influence of the decision maker and expert sources. Extensions of the basic model to group decision making, group consensus, and mean value information are presented.

18.
A Bayesian approach is used to estimate, and to find highest posterior density intervals for, R(t) = P(X1 > t, X2 > t) when (X1, X2) follow the Gumbel bivariate exponential distribution. Because of the complexity of the likelihood function, numerical integration must be used; for this setting, Jacobi and Laguerre rules are employed, as they arise naturally. A data set from an application is used to illustrate the procedures.
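For reference, the quantity being estimated has a simple closed form once the parameters are fixed. Under Gumbel's type-I bivariate exponential with unit marginal rates and dependence parameter δ ∈ [0, 1], the joint survival function is exp(-x1 - x2 - δ·x1·x2), so R(t) = exp(-2t - δt²). The sketch below computes only this deterministic part; the Bayesian estimation of the parameters from data, with Jacobi and Laguerre quadrature, is the hard part and is not shown.

```python
import math

def gumbel_biv_exp_R(t, delta):
    """R(t) = P(X1 > t, X2 > t) under Gumbel's type-I bivariate
    exponential with unit marginal rates: joint survival function
    exp(-x1 - x2 - delta*x1*x2), hence R(t) = exp(-2t - delta*t**2)."""
    return math.exp(-2.0 * t - delta * t * t)
```

At delta = 0 the components are independent and R(t) reduces to the product of the two exponential marginal survival functions; increasing delta (negative quadrant dependence in this family) lowers R(t).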

19.
Analogy-based software effort estimation using Fuzzy numbers

Background

Early-stage software effort estimation is a crucial task for project bidding and feasibility studies. Since the data collected during the early stages of a software development lifecycle are always imprecise and uncertain, it is very hard to deliver accurate estimates. Analogy-based estimation, one of the popular estimation methods, is rarely used during the early stage of a project because of the uncertainty associated with attribute measurement and data availability.

Aims

We have integrated analogy-based estimation with Fuzzy numbers in order to improve the performance of software project effort estimation during the early stages of a software development lifecycle, using all available early data. Particularly, this paper proposes a new software project similarity measure and a new adaptation technique based on Fuzzy numbers.

Method

Empirical evaluations using a jackknifing procedure have been carried out on five benchmark data sets of software projects, namely ISBSG, Desharnais, Kemerer, Albrecht and COCOMO, and the results are reported. The results are compared to those obtained by methods employed in the literature using case-based reasoning and stepwise regression.

Results

In all data sets, the empirical evaluations have shown that the proposed similarity measure and adaptation technique were able to significantly improve the performance of analogy-based estimation during the early stages of software development. The results have also shown that the proposed method outperforms some well-known estimation techniques such as case-based reasoning and stepwise regression.

Conclusions

It is concluded that the proposed estimation model could form a useful approach for early-stage estimation, especially when the available data are uncertain.
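The baseline that fuzzy analogy-based estimation builds on can be sketched in crisp form: rank past projects by feature-space distance and average the efforts of the k nearest analogues. This is the conventional method, not the paper's fuzzy similarity measure or adaptation technique, and the data below are made up for illustration.

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def analogy_estimate(target, history, k=2):
    """Crisp analogy-based estimate: mean effort of the k past projects
    most similar to the target.  history = [(features, effort), ...]."""
    ranked = sorted(history, key=lambda proj: euclidean(target, proj[0]))
    return sum(effort for _, effort in ranked[:k]) / k
```

The paper's contribution replaces the crisp features with fuzzy numbers, so that both the similarity ranking and the adaptation of the retrieved efforts account for early-stage measurement uncertainty.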

20.
We consider the enhancement of speech corrupted by additive white Gaussian noise. In a Bayesian inference framework, maximum a posteriori (MAP) estimation of the signal is performed, along the lines developed by Lim & Oppenheim (1978). The speech enhancement problem is treated as a signal estimation problem, whose aim is to obtain a MAP estimate of the clean speech signal given the noisy observations. The novelty of our approach over previously reported work is that we relate the variance of the additive noise and the gain of the autoregressive (AR) process to hyperparameters in a hierarchical Bayesian framework. These hyperparameters are computed from the noisy speech data so as to maximize the denominator in Bayes' formula, also known as the evidence. The resulting Bayesian scheme is capable of performing speech enhancement from the noisy data without the need for silence detection. Experimental results are presented for stationary and slowly varying additive white Gaussian noise. The Bayesian scheme is also compared to the Lim and Oppenheim system and to the spectral subtraction method.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)