Related Articles
 20 related articles found (search time: 375 ms)
1.
Bayesian reliability estimation under fuzzy environments is proposed in this paper. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayesian estimation method is used to create the fuzzy Bayes point estimator of reliability by invoking the well-known 'Resolution Identity' theorem of fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of reliability. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify computation. Finally, the subproblems can be solved with any commercial optimizer, e.g. GAMS or LINGO.
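
A minimal sketch of the alpha-cut idea behind such a fuzzy Bayes estimate, under simplifying assumptions of my own (binomial pass/fail data, a Beta(a, b) prior whose hyper-parameters are triangular fuzzy numbers); it is not the paper's exact nonlinear-programming formulation. For this simple model the posterior mean is monotone in a and b, so the optimization over each alpha-cut reduces to corner evaluations.

```python
def tri_alpha_cut(left, mode, right, alpha):
    """alpha-cut [lo, hi] of a triangular fuzzy number (left, mode, right)."""
    return left + alpha * (mode - left), right - alpha * (right - mode)

def fuzzy_bayes_reliability(successes, trials, a_tri, b_tri, alphas):
    """alpha-cuts of the fuzzy Bayes point estimate E[R | data] = (a + s) / (a + b + n)."""
    cuts = {}
    for alpha in alphas:
        a_lo, a_hi = tri_alpha_cut(*a_tri, alpha)
        b_lo, b_hi = tri_alpha_cut(*b_tri, alpha)
        post_mean = lambda a, b: (a + successes) / (a + b + trials)
        # monotone increasing in a, decreasing in b -> extremes sit at two corners of the box
        cuts[alpha] = (post_mean(a_lo, b_hi), post_mean(a_hi, b_lo))
    return cuts

if __name__ == "__main__":
    # 18 successes in 20 trials; hypothetical fuzzy hyper-parameters a ~ (1.5, 2, 2.5), b ~ (0.5, 1, 1.5)
    cuts = fuzzy_bayes_reliability(18, 20, (1.5, 2, 2.5), (0.5, 1, 1.5), [0.0, 0.5, 1.0])
    for alpha, (lo, hi) in cuts.items():
        print(f"alpha={alpha:.1f}: reliability estimate in [{lo:.4f}, {hi:.4f}]")
```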

2.
The main purpose of this paper is to provide a methodology for assessing fuzzy Bayesian system reliability from fuzzy component reliabilities; specifically, we discuss fuzzy Bayesian system reliability assessment based on the Pascal distribution, because the data sometimes cannot be measured and recorded precisely. In order to apply the Bayesian approach, the fuzzy parameters are assumed to be fuzzy random variables with fuzzy prior distributions. The (conventional) Bayes estimation method is used to create the fuzzy Bayes point estimator of system reliability by invoking the well-known 'Resolution Identity' theorem of fuzzy set theory. We also provide computational procedures to evaluate the membership degree of any given Bayes point estimate of system reliability. To achieve this, we transform the original problem into a nonlinear programming problem, which is then divided into four subproblems to simplify computation. Finally, the subproblems can be solved with any commercial optimizer, e.g. GAMS or LINGO.

3.
In Bayesian classifier learning, estimating the joint probability distribution p(x, y) or the likelihood p(x|y) directly from training data is considered to be difficult, especially in large multidimensional data sets. To circumvent this difficulty, existing Bayesian classifiers such as Naive Bayes, BayesNet, and AnDE have focused on estimating simplified surrogates of p(x, y) from different forms of one-dimensional likelihoods. Contrary to the perceived difficulty of multidimensional likelihood estimation, we present a simple generic ensemble approach to estimate the multidimensional likelihood directly from data. The idea is to aggregate estimates p_i(x|y), each obtained from a random subsample of the data. This article presents two ways to estimate multidimensional likelihoods using the proposed generic approach and introduces two new Bayesian classifiers, ENNBayes and MassBayes, which estimate p_i(x|y) using nearest-neighbor density estimation and probability estimation through feature-space partitioning, respectively. Unlike existing Bayesian classifiers, ENNBayes and MassBayes have constant training time and space complexities, and they scale better than existing Bayesian classifiers on very large data sets. Our empirical evaluation shows that ENNBayes and MassBayes yield better predictive accuracy than existing Bayesian classifiers on benchmark data sets.
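
A minimal sketch of the generic ensemble idea, not the paper's ENNBayes/MassBayes estimators: p_i(x|y) is estimated on random subsamples and the estimates are averaged before applying Bayes' rule. The class name EnsembleBayes and the Gaussian kernel density estimator are stand-ins of my own; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class EnsembleBayes:
    def __init__(self, n_estimators=10, subsample=0.5, bandwidth=0.5, rng=None):
        self.n_estimators, self.subsample, self.bandwidth = n_estimators, subsample, bandwidth
        self.rng = np.random.default_rng(rng)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        self.kdes_ = {c: [] for c in self.classes_}
        for c in self.classes_:
            Xc = X[y == c]
            m = max(1, int(self.subsample * len(Xc)))
            for _ in range(self.n_estimators):
                idx = self.rng.choice(len(Xc), size=m, replace=False)
                self.kdes_[c].append(KernelDensity(bandwidth=self.bandwidth).fit(Xc[idx]))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            # average the subsample likelihood estimates p_i(x|c), then weight by the class prior
            like = np.mean([np.exp(k.score_samples(X)) for k in self.kdes_[c]], axis=0)
            scores.append(self.priors_[c] * like)
        return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]
```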

4.
Intrigued by some recent results on impulse response estimation by kernel and nonparametric techniques, we revisit the old problem of transfer function estimation from input–output measurements. We formulate a classical regularization approach, focused on finite impulse response (FIR) models, and find that regularization is necessary to cope with the high variance problem. This basic, regularized least squares approach is then a focal point for interpreting other techniques, like Bayesian inference and Gaussian process regression. The main issue is how to determine a suitable regularization matrix (Bayesian prior or kernel). Several regularization matrices are provided and numerically evaluated on a data bank of test systems and data sets. Our findings based on the data bank are as follows. The classical regularization approach with carefully chosen regularization matrices shows slightly better accuracy and clearly better robustness in estimating the impulse response than the standard approach, the prediction error method/maximum likelihood (PEM/ML) approach. If the goal is to estimate a model of given order as well as possible, a low order model is often better estimated by the PEM/ML approach, and a higher order model is often better estimated by model reduction on a high order regularized FIR model estimated with careful regularization. Moreover, an optimal regularization matrix that minimizes the mean square error matrix is derived and studied. The importance of this result lies in that it gives the theoretical upper bound on the accuracy that can be achieved for this classical regularization approach.
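
A minimal sketch of the regularized least squares FIR estimate discussed above, theta_hat = (Phi'Phi + sigma^2 P^{-1})^{-1} Phi'y, using a TC-type regularization matrix (Bayesian prior covariance) P[i, j] = c * lambda^max(i, j). The hyper-parameters (c, lambda, sigma^2) are fixed, made-up values here; in practice they would be tuned, e.g. by maximizing the marginal likelihood, which is not shown.

```python
import numpy as np

def fir_regressor(u, n):
    """Regression matrix with rows [u[t], u[t-1], ..., u[t-n+1]] (zero initial conditions)."""
    N = len(u)
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    return Phi

def tc_kernel(n, c=1.0, lam=0.9):
    """TC (tuned/correlated) regularization matrix P[i, j] = c * lam ** max(i, j)."""
    i = np.arange(1, n + 1)
    return c * lam ** np.maximum.outer(i, i)

def regularized_fir(u, y, n=50, c=1.0, lam=0.9, sigma2=0.1):
    Phi = fir_regressor(u, n)
    P = tc_kernel(n, c, lam)
    # (Phi'Phi + sigma2 P^{-1}) theta = Phi'y; multiply through by P to avoid forming P^{-1}
    A = P @ Phi.T @ Phi + sigma2 * np.eye(n)
    return np.linalg.solve(A, P @ Phi.T @ y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    g_true = 0.8 ** np.arange(50)                       # illustrative true impulse response
    u = rng.standard_normal(400)
    y = np.convolve(u, g_true)[:400] + 0.1 * rng.standard_normal(400)
    g_hat = regularized_fir(u, y, n=50)
    print("impulse-response MSE:", np.mean((g_hat - g_true) ** 2))
```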

5.
A minimax estimation problem is considered for a multidimensional linear regression model containing uncertain parameters and random quantities. The joint distribution of the random quantities entering the observation model is not prescribed exactly; however, it has a fixed mean and a covariance matrix from a given set. To optimize the estimation algorithm, we apply a minimax approach with the risk measured as the probability that the estimation error exceeds a prescribed level. It is shown that the linear estimation problem is equivalent to a minimax problem with a mean-square criterion. In addition, the corresponding linear estimate is the best (in the minimax sense) under the probabilistic criterion over the class of all unbiased estimates. The least favorable distribution of the random model parameters is also constructed. Several special cases and a numerical example are considered.

6.
The ability to accurately and consistently estimate software development effort is required by project managers in planning and conducting software development activities. Since software effort drivers are vague and uncertain, software effort estimates, especially in the early stages of the development life cycle, are prone to a certain degree of estimation error. A software effort estimation model that adopts a fuzzy inference method provides a solution to fit the uncertain and vague properties of software effort drivers. The present paper proposes a fuzzy neural network (FNN) approach that embeds an artificial neural network into the fuzzy inference process in order to derive software effort estimates. The artificial neural network is utilized to determine the significant fuzzy rules in the fuzzy inference process. We demonstrate our approach using the 63 historical projects of the well-known COCOMO data set. Empirical results show that applying the FNN to software effort estimation resulted in slightly smaller mean magnitude of relative error (MMRE) and probability of a project having a relative error of less than or equal to 0.25 (Pred(0.25)) as compared with the results obtained by using an artificial neural network alone and by the original model. The proposed model can also provide objective fuzzy effort estimation rule sets by adopting the learning mechanism of the artificial neural network.
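
A small sketch of the two evaluation criteria cited above (not of the FNN model itself): MMRE, the mean magnitude of relative error, and Pred(0.25), the fraction of projects whose relative error is at most 0.25. The effort figures in the example are made up for illustration.

```python
import numpy as np

def mmre(actual, predicted):
    """Mean magnitude of relative error, MRE_i = |actual_i - predicted_i| / actual_i."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted) / actual))

def pred(actual, predicted, level=0.25):
    """Fraction of projects with relative error <= level (Pred(level))."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= level))

if __name__ == "__main__":
    actual_effort = [120.0, 36.0, 250.0, 61.0]       # hypothetical person-month figures
    estimated_effort = [100.0, 40.0, 210.0, 60.0]
    print("MMRE       :", round(mmre(actual_effort, estimated_effort), 3))
    print("Pred(0.25) :", round(pred(actual_effort, estimated_effort), 3))
```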

7.
In this article, fuzzy concepts are applied to the analysis of the system reliability problem. Fuzzy numbers are used to construct the fuzzy reliability of a non-repairable multi-state series–parallel system (NMSS). The fuzzy failure rate function is represented by an exponential fuzzy number. Using this approach, the fuzzy system reliability of the NMSS is created. To analyse this fuzzy system reliability, the fuzzy Bayesian point estimate of the fuzzy system reliability is obtained from the conventional Bayesian formula, and the posterior fuzzy system reliability of the NMSS is developed by Bayesian inference with fuzzy probabilities. Finally, the performance of the method is measured by the mean square error of the fuzzy Bayesian point estimate of the fuzzy system reliability of the NMSS.

8.
Vague sets were first proposed by Gau and Buehrer [11] as an extension of fuzzy sets that encompasses fuzzy sets and interval-valued fuzzy sets as special cases. Vague sets consist of two parts, a membership function and a nonmembership function; in accordance with practical demands, these sets are therefore more flexible than existing fuzzy sets and provide much more information about the situation. In this paper, a new approach for ranking trapezoidal vague sets is introduced. Shortcomings in some existing ranking approaches are pointed out, and the proposed ranking method is validated through the reasonable properties of fuzzy quantities. Further, the proposed ranking approach is applied to develop a new method for vague risk analysis that finds the failure probability of each component of a compressor system, which can be used for managerial decision making and future system maintenance strategy. The proposed method thus provides a useful way of handling vague risk analysis problems.

9.
The problem under consideration is how to estimate the frequency function of a system and the associated estimation error when a set of possible model structures is given and one of them is known to contain the true system. The “classical” solution to this problem is to, first, use a consistent model structure selection criterion to discard all but one single structure, second, estimate a model in this structure and, third, conditioned on the assumption that the chosen structure contains the true system, compute an estimate of the estimation error. For a finite data set, however, one cannot guarantee that the correct structure is chosen, and this “structural” uncertainty is lost in the previously mentioned approach. In this contribution a method is developed that combines the frequency function estimates and the estimation errors from all possible structures into a joint estimate and estimation error. Hence, this approach bypasses the structure selection problem. This is accomplished by employing a Bayesian setting. Special attention is given to the choice of priors. With this approach it is possible to benefit from a priori information about the frequency function even though the model structure is unknown.
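
A minimal sketch of combining frequency-function estimates across model structures instead of picking one: FIR models of several orders are fitted by least squares, each structure receives an approximate posterior weight from its BIC (a crude stand-in for the evidence and prior treatment discussed in the paper), and the frequency functions are averaged with those weights. The structures, data, and weighting scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fit_fir(u, y, n):
    """Least-squares FIR fit of order n; returns coefficients and BIC."""
    N = len(u)
    Phi = np.zeros((N, n))
    for k in range(n):
        Phi[k:, k] = u[:N - k]
    g, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    rss = np.sum((y - Phi @ g) ** 2)
    return g, N * np.log(rss / N) + n * np.log(N)

def averaged_frequency_function(u, y, orders, n_freq=100):
    w = np.linspace(0.01, np.pi, n_freq)
    fits = [fit_fir(u, y, n) for n in orders]
    bics = np.array([b for _, b in fits])
    weights = np.exp(-(bics - bics.min()) / 2)
    weights /= weights.sum()                         # approximate posterior structure probabilities
    G = np.zeros(n_freq, dtype=complex)
    for (g, _), p in zip(fits, weights):
        G += p * np.array([np.sum(g * np.exp(-1j * wi * np.arange(len(g)))) for wi in w])
    return w, G, dict(zip(orders, np.round(weights, 3)))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    u = rng.standard_normal(500)
    g_true = 0.7 ** np.arange(30)                    # illustrative true impulse response
    y = np.convolve(u, g_true)[:500] + 0.2 * rng.standard_normal(500)
    w, G, weights = averaged_frequency_function(u, y, orders=[5, 10, 20, 40])
    print("structure weights:", weights)
```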

10.
Cardinality estimation is one of the key techniques for optimizing multi-table join (JOIN) queries in databases. When estimating cardinalities over large tables, data sampling is commonly used to obtain small samples from which the cardinalities required by various query workloads can be estimated. Sampling-based cardinality estimation on a single table has been studied extensively, but when the total storage budget for the samples of multiple tables is limited, there is still no effective way to divide the sample sizes among the tables so that the overall cardinality estimation is near-optimal. To address this, a cardinality estimation method for multi-table JOIN query optimization is proposed: for a given query workload containing complex multi-JOIN operations, it assigns a sampling rate to each table in the database so that the cardinality estimation accuracy is maximized while the constraint on the total sample size is satisfied. This process is abstracted as a sampling-rate allocation search problem, and a Bayesian optimization search algorithm is introduced into the database sampling problem to quickly search for the allocation of sample sizes among tables, so that the sample allocation found within a limited time yields the highest cardinality estimation accuracy and thereby serves query optimization. Experimental results on the TPC-H dataset show that, when determining the sampling-ratio scheme with the highest cardinality estimation accuracy for a multi-JOIN query workload within the same amount of time, the scheme found by Bayesian optimization reduces the cardinality estimation error rate by 54.8% to 60.2% compared with random search.
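
A minimal sketch of the two ingredients described above, not the paper's system: (1) estimating a join cardinality from Bernoulli samples of two tables by scaling the sample-join size by 1/(p_R * p_S), and (2) a tiny Bayesian-optimization loop (Gaussian-process surrogate with expected improvement) that searches how to split a fixed sampling-rate budget between the two tables so that the estimation error (q-error) is smallest. The table contents, sizes, and budget are made up for illustration; pandas, scikit-learn, and SciPy are assumed available.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
R = pd.DataFrame({"key": rng.integers(0, 500, 50_000)})        # fact-like table
S = pd.DataFrame({"key": rng.zipf(1.5, 5_000) % 500})          # skewed dimension-like table
TRUE_CARD = R.merge(S, on="key").shape[0]
BUDGET = 0.10                                                   # total sampling-rate budget

def q_error(split, n_rep=3):
    """q-error of the sampled-join estimate when table R gets `split` of the budget."""
    p_r, p_s = max(split * BUDGET, 1e-3), max((1 - split) * BUDGET, 1e-3)
    errs = []
    for _ in range(n_rep):
        est = R.sample(frac=p_r).merge(S.sample(frac=p_s), on="key").shape[0] / (p_r * p_s)
        est = max(est, 1.0)
        errs.append(max(est / TRUE_CARD, TRUE_CARD / est))
    return float(np.mean(errs))

# Bayesian optimization of the budget split on [0.05, 0.95]
grid = np.linspace(0.05, 0.95, 200).reshape(-1, 1)
X = list(np.linspace(0.1, 0.9, 4))                              # initial design
Y = [q_error(x) for x in X]
for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X).reshape(-1, 1), Y)
    mu, sd = gp.predict(grid, return_std=True)
    imp = min(Y) - mu
    ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))   # expected improvement
    x_next = float(grid[np.argmax(ei), 0])
    X.append(x_next)
    Y.append(q_error(x_next))

best_split = X[int(np.argmin(Y))]
print(f"best split: {best_split:.2f} of the budget to R, q-error {min(Y):.2f}")
```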

11.
Bayesian support vector regression using a unified loss function
In this paper, we use a unified loss function, called the soft insensitive loss function, for Bayesian support vector regression. We follow standard Gaussian processes for regression to set up the Bayesian framework, in which the unified loss function is used in the likelihood evaluation. Under this framework, the maximum a posteriori estimate of the function values corresponds to the solution of an extended support vector regression problem. The overall approach has the merits of support vector regression such as convex quadratic programming and sparsity in solution representation. It also has the advantages of Bayesian methods for model adaptation and error bars of its predictions. Experimental results on simulated and real-world data sets indicate that the approach works well even on large data sets.

12.
In this paper, we consider the problem of performing quantitative Bayesian inference and model averaging based on a set of qualitative statements about relationships. Statements are transformed into parameter constraints which are imposed onto a set of Bayesian networks. Recurrent relationship structures are resolved by unfolding in time into dynamic Bayesian networks. The approach enables probabilistic inference by model averaging, i.e. it makes it possible to predict probabilistic quantities from a set of qualitative constraints without probability assignments on the model parameters. Model averaging is performed by Monte Carlo integration techniques. The method is applied to a problem in a molecular medical context: we show how the rate of breast cancer metastasis formation can be predicted based solely on a set of qualitative biological statements about the involvement of proteins in metastatic processes.

13.
Estimating reliable class-conditional probability is the prerequisite to implementing Bayesian classifiers, and how to estimate the probability density functions (PDFs) is also a fundamental problem for other probabilistic induction algorithms. The finite mixture model (FMM) is able to represent arbitrarily complex PDFs by using a mixture of multimodal distributions, but it assumes that the mixture components follow a given distribution, which may not be satisfied for real-world data. This paper presents a non-parametric kernel mixture model (KMM) based probability density estimation approach, in which the data sample of a class is assumed to be drawn from several unknown, independent hidden subclasses. Unlike traditional FMM schemes, we simply use the k-means clustering algorithm to partition the data sample into several independent components, and the regional density diversities of the components are combined using the Bayes theorem. On the basis of the proposed kernel mixture model, we present a three-step Bayesian classifier, which includes partitioning, structure learning, and PDF estimation. Experimental results show that KMM is able to improve the quality of estimated PDFs over the conventional kernel density estimation (KDE) method, and also show that KMM-based Bayesian classifiers outperform existing Gaussian, GMM, and KDE-based Bayesian classifiers.
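
A minimal sketch of the partition-then-estimate idea behind such a kernel mixture model: each class sample is split into k components with k-means, a kernel density estimate is fitted per component, and the class-conditional density is the weight-averaged mixture p(x|y) = sum_j w_j p_j(x|y). The class name, fixed bandwidth, and the omission of the paper's structure-learning step are simplifications of my own; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KernelDensity

class KernelMixtureBayes:
    def __init__(self, n_components=3, bandwidth=0.5):
        self.k, self.bandwidth = n_components, bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_, self.mixtures_ = {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_[c] = len(Xc) / len(X)
            labels = KMeans(n_clusters=min(self.k, len(Xc)), n_init=10).fit_predict(Xc)
            comps = []
            for j in np.unique(labels):
                Xj = Xc[labels == j]
                # component weight = fraction of the class sample falling in this partition
                comps.append((len(Xj) / len(Xc), KernelDensity(bandwidth=self.bandwidth).fit(Xj)))
            self.mixtures_[c] = comps
        return self

    def class_density(self, X, c):
        return sum(w * np.exp(kde.score_samples(X)) for w, kde in self.mixtures_[c])

    def predict(self, X):
        post = np.stack([self.priors_[c] * self.class_density(X, c) for c in self.classes_], axis=1)
        return self.classes_[np.argmax(post, axis=1)]
```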

14.
Many real applications of Bayesian networks (BN) concern problems in which several observations are collected over time on a certain number of similar plants. This situation is typical of medical monitoring, in which several measurements of the relevant physiological quantities are available over time for a population of patients under treatment, and the conditional probabilities that describe the model are usually obtained from the available data through a suitable learning algorithm. In situations with small data sets for each plant, it is useful to reinforce the parameter estimation process of the BN by taking into account the observations obtained from other similar plants. On the other hand, a desirable feature to be preserved is the ability to learn individualized conditional probability tables, rather than pooling together all the available data. In this work we apply a Bayesian hierarchical model able to preserve individual parameterization and, at the same time, to allow the conditionals of each plant to borrow strength from all the experience contained in the database. A testing example and an application in the context of diabetes monitoring are shown.
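
A minimal sketch of the "borrow strength" idea with a beta-binomial hierarchy: one conditional-probability entry is observed as k_i successes out of n_i trials per plant (patient), a Beta(alpha, beta) population prior is fitted by crude moment matching, and each plant keeps an individual posterior-mean estimate shrunk toward the population rate. This is a stand-in for the paper's hierarchical model, not its exact algorithm, and the counts are hypothetical.

```python
import numpy as np

def shrunken_rates(k, n):
    """Per-plant posterior means under an empirically fitted Beta(alpha, beta) prior."""
    k, n = np.asarray(k, float), np.asarray(n, float)
    p = k / n
    m, v = p.mean(), p.var()
    v = max(min(v, m * (1 - m) * 0.99), 1e-6)       # keep the moment-matching step valid
    strength = m * (1 - m) / v - 1                  # alpha + beta
    alpha, beta = m * strength, (1 - m) * strength
    return (alpha + k) / (alpha + beta + n), (alpha, beta)

if __name__ == "__main__":
    k = [1, 4, 0, 12, 3]           # hypothetical event counts per patient
    n = [5, 20, 3, 40, 10]         # observations per patient
    post, (a, b) = shrunken_rates(k, n)
    print(f"population prior Beta({a:.2f}, {b:.2f})")
    for ki, ni, pi in zip(k, n, post):
        print(f"  raw {ki}/{ni} = {ki/ni:.2f}  ->  shrunken estimate {pi:.2f}")
```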

15.
We consider the enhancement of speech corrupted by additive white Gaussian noise. In a Bayesian inference framework, maximum a posteriori (MAP) estimation of the signal is performed, along the lines developed by Lim & Oppenheim (1978). The speech enhancement problem is treated as a signal estimation problem, whose aim is to obtain a MAP estimate of the clean speech signal, given the noisy observations. The novelty of our approach, over previously reported work, is that we relate the variance of the additive noise and the gain of the autoregressive (AR) process to hyperparameters in a hierarchical Bayesian framework. These hyperparameters are computed from the noisy speech data to maximize the denominator in Bayes' formula, also known as the evidence. The resulting Bayesian scheme is capable of performing speech enhancement from the noisy data without the need for silence detection. Experimental results are presented for stationary and slowly varying additive white Gaussian noise. The Bayesian scheme is also compared to the Lim and Oppenheim system and to the spectral subtraction method.

16.
Asymptotic Bayesian surface estimation using an image sequence
A new approach is introduced to estimating object surfaces in three-dimensional space from a sequence of images. A 3D surface of interest here is modeled as a function known up to the values of a few parameters. Surface estimation is then treated as the general problem of maximum-likelihood parameter estimation based on two or more functionally related data sets. In our case, these data sets constitute a sequence of images taken at different locations and orientations. Experiments are run to illustrate the various advantages of using as many images as possible in the estimation and of distributing camera positions from first to last over as large a baseline as possible. In order to extract all the usable information from the sequence of images, all the images should be available simultaneously for the parameter estimation. We introduce the use of asymptotic Bayesian approximations in order to summarize the useful information in a sequence of images, thereby drastically reducing both the storage and the amount of processing required. This leads to a sequential Bayesian estimator for the surface parameters, where the information extracted from previous images is summarized in a quadratic form. The attractiveness of our approach is that now all the usual tools of statistical signal analysis, for example, statistical decision theory for object recognition, can be brought to bear; the information extraction appears to be robust and computationally reasonable; the concepts are geometric and simple; and essentially optimal accuracy should result. Experimental results are shown for extending this approach in two ways. One is to model a highly variable surface as a collection of small patches jointly constituting a stochastic process (e.g., a Markov random field) and to reconstruct this surface using maximum a posteriori probability (MAP) estimation. The other is to cluster together those patches constituting the same primitive object through the use of MAP segmentation. This provides a simultaneous estimation and segmentation of a surface into primitive constituent surfaces.

17.
Frequency domain identification of the complex Young's modulus of a viscoelastic solid from wave propagation experiment data is considered in this paper. The conventional techniques use a non-parametric approach, where the estimate at a particular frequency is independent of the estimate at another frequency. Here we show that the estimation accuracy can be improved significantly by taking the continuity of the complex modulus into account. In particular, we propose Bayesian estimation approaches to impose the continuity property. The estimation algorithms are computationally efficient. The analytical claims are also substantiated in experimental studies.
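
A minimal sketch of imposing continuity across frequency through a prior: a noisy point-wise estimate of a complex modulus is replaced by the MAP estimate under a Gaussian second-difference smoothness prior, x_hat = argmin ||y - x||^2 + lambda ||D2 x||^2, applied to the real and imaginary parts separately. The wave-propagation measurement model of the paper is not modelled, and the modulus curve and noise level are made up.

```python
import numpy as np

def smooth_map(y, lam=50.0):
    """MAP estimate under a Gaussian second-difference (smoothness) prior."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)             # second-difference operator
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, y)

if __name__ == "__main__":
    w = np.linspace(1, 100, 200)                                                  # illustrative frequency grid
    true_modulus = (2.0 + 0.3 * np.log(w)) + 1j * (0.2 + 0.05 * np.log(w))        # made-up complex modulus
    rng = np.random.default_rng(3)
    noisy = true_modulus + 0.1 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))
    smoothed = smooth_map(noisy.real) + 1j * smooth_map(noisy.imag)
    print("rms error, raw      :", np.sqrt(np.mean(np.abs(noisy - true_modulus) ** 2)))
    print("rms error, smoothed :", np.sqrt(np.mean(np.abs(smoothed - true_modulus) ** 2)))
```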

18.
Conclusion. The article shows that the well-known ellipsoidal estimation formalism can be constructively extended to the case of estimating sets of a more general form. Mathematical models of these sets are defined not only by positive definite quadratic forms (as in the case of ellipsoids), but also by Lyapunov, Bellman, and other functions. We have considered the parametric properties of estimates represented by fuzzy estimating sets. A formalism approximating the set-theoretical intersection operation has been developed for these fuzzy estimates. An important feature of the proposed approach is that the intersection procedure is optimized not in each step or cycle, but over the entire sequence of steps in accordance with an additive criterion which is fairly natural for the relevant class of problems. We have thus established a relationship between optimal fuzzy set-theoretical estimation procedures and standard optimal algorithms. In the concluding part of the article we have considered state estimation of a static object from observations distorted by additive noise. The state estimation problem has been solved in the class of fuzzy ellipsoidal estimates with a so-called tolerant part. The linear "size" or "radius" of the tolerant part provides an efficient measure of the "degree of fuzziness" of the estimate. On the whole, the estimation (filtering) algorithm proposed in this article, like other fuzzy estimation algorithms, is robust to prior errors. The study has been supported by US CRDF grant VE 2300. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 167–183, July–August, 1998.

19.
We apply the idea of averaging ensembles of estimators to probability density estimation. In particular, we use Gaussian mixture models which are important components in many neural-network applications. We investigate the performance of averaging using three data sets. For comparison, we employ two traditional regularization approaches, i.e., a maximum penalized likelihood approach and a Bayesian approach. In the maximum penalized likelihood approach we use penalty functions derived from conjugate Bayesian priors such that an expectation maximization (EM) algorithm can be used for training. In all experiments, the maximum penalized likelihood approach and averaging improved performance considerably if compared to a maximum likelihood approach. In two of the experiments, the maximum penalized likelihood approach outperformed averaging. In one experiment averaging was clearly superior. Our conclusion is that maximum penalized likelihood gives good results if the penalty term in the cost function is appropriate for the particular problem. If this is not the case, averaging is superior since it shows greater robustness by not relying on any particular prior assumption. The Bayesian approach worked very well on a low-dimensional toy problem but failed to give good performance in higher dimensional problems.
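
A minimal sketch of the ensemble-averaging idea: several Gaussian mixture models are fitted on bootstrap resamples and their densities are averaged, p_bar(x) = (1/M) sum_m p_m(x), then compared with a single maximum-likelihood fit on held-out log-likelihood. The toy data and number of components are made up; the penalized-likelihood and fully Bayesian alternatives from the abstract are not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# toy two-cluster data
data = np.concatenate([rng.normal(-2, 1.0, (300, 2)), rng.normal(2, 0.5, (300, 2))])
rng.shuffle(data)
train, test = data[:400], data[400:]

# single maximum-likelihood mixture (deliberately over-parameterized)
ml = GaussianMixture(n_components=5, random_state=0).fit(train)
ml_ll = ml.score(test)                                   # mean log-likelihood per test point

# averaged ensemble of mixtures fitted on bootstrap resamples
members = []
for m in range(10):
    boot = train[rng.integers(0, len(train), len(train))]
    members.append(GaussianMixture(n_components=5, random_state=m).fit(boot))
avg_density = np.mean([np.exp(g.score_samples(test)) for g in members], axis=0)
avg_ll = np.mean(np.log(avg_density))

print(f"single ML fit: {ml_ll:.3f}   averaged ensemble: {avg_ll:.3f}")
```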

20.
An outlier is a data point that contains no information about the system to be estimated. A procedure is developed, using a Bayesian cost criterion, to detect and eliminate outliers from a database and at the same time provide estimates of the state of a dynamical system. The approach is applied to a Gauss-Markov discrete-time system and to a parameter estimation problem. For the latter case, exact solutions of estimator bias and covariance are obtained and conditions for filter divergence are discussed. The approach in this paper differs from others in that a maximum a posteriori estimate is obtained over long block lengths of data so that clustering schemes can be employed.
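
A small sketch in the spirit of detect-and-eliminate estimation: each measurement of a constant parameter is treated as either an inlier (noise standard deviation sigma) or an outlier (inflated variance), the posterior probability of the outlier label is computed with Bayes' rule, and the parameter is re-estimated from the MAP-classified inliers. This only illustrates the flavour of the approach; it is not the Gauss-Markov filtering derivation of the paper, and the noise model and data are made up.

```python
import numpy as np
from scipy.stats import norm

def estimate_with_outlier_rejection(y, sigma=1.0, outlier_scale=10.0, p_out=0.1, n_iter=5):
    theta = np.median(y)                               # robust starting point
    for _ in range(n_iter):
        lik_in = (1 - p_out) * norm.pdf(y, theta, sigma)
        lik_out = p_out * norm.pdf(y, theta, outlier_scale * sigma)
        post_out = lik_out / (lik_in + lik_out)        # posterior outlier probability
        inliers = post_out < 0.5                       # MAP label for each point
        theta = y[inliers].mean()                      # re-estimate from detected inliers
    return theta, post_out

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    y = np.concatenate([rng.normal(5.0, 1.0, 30), [25.0, -14.0]])   # two gross outliers
    theta, post_out = estimate_with_outlier_rejection(y)
    print("estimate:", round(float(theta), 3), " flagged outliers:", int((post_out > 0.5).sum()))
```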
