Similar Documents
20 similar documents found (search time: 46 ms)
1.
We present a multimodal open-set speaker identification system that integrates information coming from the audio, face, and lip motion modalities. For fusion of multiple modalities, we propose a new adaptive cascade rule that favors reliable modality combinations through a cascade of classifiers. The order of the classifiers in the cascade is adaptively determined based on the reliability of each modality combination. A novel reliability measure that genuinely fits the open-set speaker identification problem is also proposed to assess the accept-or-reject decisions of a classifier. A formal framework based on the probability of correct decision is developed for analytical comparison of the proposed adaptive rule with other classifier combination rules. The proposed adaptive rule is more robust in the presence of unreliable modalities, and it outperforms the hard-level max rule and the soft-level weighted summation rule, provided that the employed reliability measure is effective in assessing classifier decisions. Experimental results that support this assertion are provided.
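To make the cascade idea concrete, here is a minimal Python sketch (not the authors' actual system) of a reliability-ordered accept/reject cascade for open-set identification; the scores, reliability values, and threshold are all hypothetical.

```python
import numpy as np

def cascade_identify(scores_per_classifier, reliabilities, accept_threshold):
    """Reliability-ordered cascade for open-set identification (illustrative sketch).

    scores_per_classifier: list of 1-D arrays, one per modality combination,
        giving match scores against the enrolled speakers.
    reliabilities: estimated reliability of each modality combination.
    Returns (speaker_index, classifier_used), or (None, None) for an open-set reject.
    """
    order = np.argsort(reliabilities)[::-1]        # most reliable combination first
    for k in order:
        scores = scores_per_classifier[k]
        best = int(np.argmax(scores))
        if scores[best] >= accept_threshold:       # confident accept: stop the cascade
            return best, int(k)
    return None, None                              # every classifier rejected

# toy usage: three modality combinations scoring four enrolled speakers
scores = [np.array([0.2, 0.4, 0.3, 0.1]),
          np.array([0.1, 0.8, 0.05, 0.05]),
          np.array([0.3, 0.3, 0.2, 0.2])]
print(cascade_identify(scores, reliabilities=[0.5, 0.9, 0.4], accept_threshold=0.7))
```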

2.
Model-based reliability analysis is affected by different types of epistemic uncertainty due to inadequate data and modeling errors. When the physics-based simulation model is computationally expensive, a surrogate is often used in reliability analysis, introducing additional uncertainty due to the surrogate. This paper proposes a framework to include statistical uncertainty and model uncertainty in surrogate-based reliability analysis. Two types of surrogates are considered: (1) general-purpose surrogate models that compute the system model output over the desired ranges of the random variables; and (2) limit-state surrogates. A unified approach is developed that connects model calibration analysis using the Kennedy and O’Hagan (KOH) framework to the construction of the limit-state surrogate and to the estimation of uncertainty in reliability analysis. The Gaussian process (GP) general-purpose surrogate of the physics-based simulation model obtained from the KOH calibration analysis is further refined at the limit state (local refinement) to construct the limit-state surrogate, which is used for reliability analysis. An efficient single-loop sampling approach using the probability integral transform is used for sampling the input variables with statistical uncertainty. The variability in the GP prediction (surrogate uncertainty) is included in reliability analysis through correlated sampling of the model predictions at different inputs. The Monte Carlo sampling (MCS) error, which represents the error due to limited Monte Carlo samples, is quantified by constructing a probability density function. All the different sources of epistemic uncertainty are quantified and aggregated to estimate the uncertainty in the reliability analysis. Two examples are used to demonstrate the proposed techniques.
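As a hedged illustration of how surrogate uncertainty can be propagated into a failure-probability estimate, the following Python sketch fits a scikit-learn Gaussian process to a cheap stand-in limit state and uses correlated posterior draws (`sample_y`) so that each GP realization yields its own failure-probability estimate; the limit state, input distribution, and sample sizes are assumptions, and the KOH calibration and local refinement steps are omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# hypothetical limit state: g(x) < 0 means failure (stands in for the expensive model)
def g(x):
    return 4.0 - x[:, 0] ** 2 - x[:, 1]

# fit a GP surrogate on a small design of experiments
X_train = 2.0 * rng.normal(size=(30, 2))
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True,
                              alpha=1e-6).fit(X_train, g(X_train))

# Monte Carlo inputs (assumed independent standard normal random variables)
X_mc = rng.normal(size=(2000, 2))

# correlated sampling of the surrogate: each column is one realization of the GP,
# hence one failure-probability estimate; the spread reflects surrogate uncertainty
draws = gp.sample_y(X_mc, n_samples=50, random_state=1)   # shape (2000, 50)
pf_per_draw = (draws < 0.0).mean(axis=0)
print(f"Pf estimate {pf_per_draw.mean():.4f} +/- {pf_per_draw.std():.4f} (surrogate uncertainty)")
print(f"Pf with the true g: {(g(X_mc) < 0.0).mean():.4f}")
```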

3.
Variable-structure discrete dynamic Bayesian networks (SVDDBNs) offer greater generality for handling uncertainty problems. To overcome the degradation of inference accuracy caused by missing data in an SVDDBN, a one-step-prediction imputation algorithm for SVDDBN missing data is proposed. Exploiting the fact that information propagates along the network's time axis into the next time slice, the belief is updated online using the "mixed" information to obtain the filtered value; a further one-step prediction then yields the posterior probability of the missing-data node in the next time slice, which is used as the imputed value. Simulation results show that the proposed algorithm effectively imputes missing data and improves the accuracy and reliability of SVDDBN inference.
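A minimal sketch of the one-step-prediction idea, using a two-state hidden Markov model as a stand-in for an SVDDBN slice (the transition, emission, and prior values are hypothetical): filtering runs forward, and a missing observation is imputed with the one-step-ahead predictive distribution.

```python
import numpy as np

# a minimal discrete dynamic model (an HMM slice stands in for the SVDDBN)
T = np.array([[0.9, 0.1],      # T[i, j] = P(x_{t+1} = j | x_t = i)
              [0.2, 0.8]])
E = np.array([[0.8, 0.2],      # E[i, k] = P(y_t = k | x_t = i)
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])

def filter_and_impute(observations):
    """Forward filtering; a missing observation (None) is imputed with the
    one-step-ahead predictive distribution over the observation values."""
    belief = prior.copy()
    imputed = []
    for y in observations:
        predicted = T.T @ belief              # one-step prediction of the hidden state
        if y is None:
            y_dist = E.T @ predicted          # predictive (posterior) distribution of y
            imputed.append(y_dist)
            belief = predicted                # no evidence to incorporate at this slice
        else:
            imputed.append(y)
            belief = predicted * E[:, y]      # belief update with the observed value
            belief /= belief.sum()
    return imputed

print(filter_and_impute([0, 0, None, 1]))
```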

4.
Diffusion magnetic resonance imaging (dMRI) tractography has the unique ability to reconstruct major white matter tracts non-invasively and is, therefore, widely used in neurosurgical planning and neuroscience. In this work, we reduce two sources of uncertainty within the tractography pipeline. The first one is the model uncertainty that arises in crossing fibre tractography, from having to estimate the number of relevant fibre compartments in each voxel. We propose a mathematical framework to estimate model uncertainty, and we reduce this type of uncertainty with a model averaging approach that combines the fibre direction estimates from all candidate models, weighted by the posterior probability of the respective model. The second source of uncertainty is measurement noise. We use bootstrapping to estimate this data uncertainty, and consolidate the fibre direction estimates from all bootstraps into a consensus model. We observe that, in most voxels, a traditional model selection strategy selects different models across bootstraps. In this sense, the bootstrap consensus also reduces model uncertainty. Either approach significantly increases the accuracy of crossing fibre tractography in multiple subjects, and combining them provides an additional benefit. However, model averaging is much more efficient computationally.
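The following sketch shows only the model-averaging step (not the authors' full tractography pipeline): candidate fibre-direction estimates are combined with weights proportional to posterior model probabilities, here derived from hypothetical log-evidence values, with sign alignment since fibre orientations are antipodally symmetric.

```python
import numpy as np

def average_directions(directions, log_evidence):
    """Posterior-weighted average of candidate fibre-direction estimates.

    directions: (n_models, 3) array of unit vectors proposed by the candidate models.
    log_evidence: log model evidence (or, e.g., -0.5 * BIC) for each candidate model.
    """
    w = np.exp(log_evidence - np.max(log_evidence))
    w /= w.sum()                                    # posterior model probabilities
    ref = directions[np.argmax(w)]
    signs = np.where(directions @ ref < 0.0, -1.0, 1.0)
    aligned = directions * signs[:, None]           # fibre orientations are sign-ambiguous
    mean = (w[:, None] * aligned).sum(axis=0)
    return mean / np.linalg.norm(mean)

dirs = np.array([[1.0, 0.0, 0.0],
                 [0.95, 0.31, 0.0],
                 [0.0, 1.0, 0.0]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
print(average_directions(dirs, log_evidence=np.array([-10.0, -11.0, -20.0])))
```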

5.
Medical decisions are rarely made under conditions of complete certainty. In the past decade there has been a rapid growth of interest in formal methods for optimizing medical decisions under uncertainty. Application of decision-analytic methods requires physicians to make probability estimates about clinical events for which extensive data are not available. This paper describes a computer program to train physicians to be better probability estimators: to make probability estimates that are numerically meaningful for use in formal decision analyses. It is designed to be a stand-alone application requiring about 2 hours of physician time. Use requires an IBM-PC or compatible microcomputer with a graphics adaptor and monitor, and an 8087 coprocessor.

6.
In this paper, algorithms are developed to update the reliability-maximizing resource allocation in an unreliable flow network when either the resource demand or the characteristics of the flow network change. An unreliable flow network consists of source nodes, which supply resources of various types, sink nodes, at which resource demand takes place, and intermediate nodes, as well as unreliable directed arcs, which join pairs of nodes and whose capacities have multiple operational states. The network reliability of such an unreliable flow network is the probability that resources can be transmitted successfully from the source nodes to the sink nodes. With earlier developments for evaluating network reliability and for solving reliability-maximizing resource allocation problems in an unreliable flow network, one could simply recompute a new resource allocation strategy whenever the resource demand or the characteristics of the flow network change, without reusing the effort already spent in obtaining the existing resource allocation. This study, in contrast, proposes updating, rather than recomputing, alternatives that take advantage of the existing minimal path vectors and their corresponding flow patterns. Procedural comparisons and numerical examples indicate that the updating schemes perform better than the recomputing scheme in large flow networks that transmit various resource types.
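To illustrate the network-reliability definition used here (not the updating algorithm itself), a crude Monte Carlo sketch with networkx: arc capacities are drawn from hypothetical multi-state distributions and the reliability is the fraction of samples in which the maximum s-t flow meets the demand.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# each arc has a set of possible capacity states with probabilities (hypothetical data)
arcs = {("s", "a"): ([0, 1, 2], [0.1, 0.3, 0.6]),
        ("s", "b"): ([0, 2],    [0.2, 0.8]),
        ("a", "t"): ([0, 1, 2], [0.1, 0.2, 0.7]),
        ("b", "t"): ([0, 1],    [0.3, 0.7]),
        ("a", "b"): ([0, 1],    [0.5, 0.5])}

def network_reliability(demand, n_samples=5000):
    """Monte Carlo estimate of P(maximum s-t flow >= demand) under random arc capacities."""
    success = 0
    for _ in range(n_samples):
        G = nx.DiGraph()
        for (u, v), (states, probs) in arcs.items():
            G.add_edge(u, v, capacity=int(rng.choice(states, p=probs)))
        flow_value, _ = nx.maximum_flow(G, "s", "t")
        success += flow_value >= demand
    return success / n_samples

print("estimated reliability for demand 2:", network_reliability(demand=2))
```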

7.
Boehm, B. Computer, 2008, 41(3): 32-38
In the 21st century, software engineers face the often formidable challenges of simultaneously dealing with rapid change, uncertainty and emergence, dependability, diversity, and interdependence, but they also have opportunities to make significant contributions that will make a difference for the better.

8.
Early in a program, engineers must determine requirements for system reliability and availability. We suggest that existing techniques gathered from diverse fields can be incorporated within the framework of systems engineering methodology to accomplish this. Specifically, adopting probabilistic (Monte Carlo) design techniques allows the designer to incorporate uncertainty explicitly into the design process and to improve the designer's understanding of the root causes of failures and how often these might realistically occur. In high-reliability systems in which failure occurs infrequently, rare-event simulation techniques can reduce the computational burden of achieving this understanding. This paper provides an introductory survey of the literature on systems engineering, requirements engineering, Monte Carlo simulation, probabilistic design, and rare-event simulation with the aim of assessing the degree to which these have been integrated in systems design for reliability. This leads naturally to a proposed framework for the fusion of these techniques.
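A small sketch of the rare-event idea mentioned above: crude Monte Carlo versus importance sampling with a proposal shifted onto the failure region, for a hypothetical standard-normal performance variable and threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# rare failure event for a standard-normal "performance" variable (hypothetical threshold)
threshold = 4.5
p_exact = stats.norm.sf(threshold)                # about 3.4e-6

n = 100_000
# crude Monte Carlo: at this sample size the event is almost never observed
crude = (rng.standard_normal(n) > threshold).mean()

# importance sampling: draw from a proposal centred on the rare region and reweight
proposal = stats.norm(loc=threshold, scale=1.0)
x = proposal.rvs(size=n, random_state=1)
weights = stats.norm.pdf(x) / proposal.pdf(x)
is_estimate = np.mean((x > threshold) * weights)

print(f"exact {p_exact:.3e}   crude MC {crude:.3e}   importance sampling {is_estimate:.3e}")
```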

9.
Obtaining good probability estimates is imperative for many applications. The increased uncertainty and typically asymmetric costs surrounding rare events increase this need. Experts (and classification systems) often rely on probabilities to inform decisions. However, we demonstrate that class probability estimates obtained via supervised learning in imbalanced scenarios systematically underestimate the probabilities for minority class instances, despite ostensibly good overall calibration. To our knowledge, this problem has not previously been explored. We propose a new metric, the stratified Brier score, to capture class-specific calibration, analogous to the per-class metrics widely used to assess the discriminative performance of classifiers in imbalanced scenarios. We propose a simple, effective method to mitigate the bias of probability estimates for imbalanced data that bags estimators independently calibrated over balanced bootstrap samples. This approach drastically improves performance on the minority instances without greatly affecting overall calibration. We extend our previous work in this direction by providing ample additional empirical evidence for the utility of this strategy, using both support vector machines and boosted decision trees as base learners. Finally, we show that additional uncertainty can be exploited via a Bayesian approach by considering posterior distributions over bagged probability estimates.
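A simplified sketch of the two ingredients named in the abstract, a per-class (stratified) Brier score and bagging of estimators fitted on balanced bootstrap samples; the per-bag calibration step is omitted, and the data set, base learner, and bag count are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def stratified_brier(y_true, p_pred):
    """Brier score computed separately per class (class-specific calibration)."""
    return {c: float(np.mean((p_pred[y_true == c] - (c == 1)) ** 2)) for c in (0, 1)}

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
majority = np.flatnonzero(y_tr == 0)

# bag estimators fitted on balanced bootstrap samples and average their probabilities
# (the per-bag calibration step used in the paper is omitted in this sketch)
probs = []
for _ in range(25):
    idx = np.concatenate([rng.choice(minority, size=minority.size, replace=True),
                          rng.choice(majority, size=minority.size, replace=True)])
    clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
    probs.append(clf.predict_proba(X_te)[:, 1])
p_bagged = np.mean(probs, axis=0)

p_plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("plain :", stratified_brier(y_te, p_plain))
print("bagged:", stratified_brier(y_te, p_bagged))
```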

10.
Nonlinear and non-Gaussian processes with constraints are commonly encountered in dynamic estimation problems. Methods for solving such problems either ignore the constraints or rely on crude approximations of the model or probability distributions. Such approximations may reduce the accuracy of the estimates since they often fail to capture the variety of probability distributions encountered in constrained linear and nonlinear dynamic systems. This article describes a practical approach that overcomes these shortcomings via a novel extension of sequential Monte Carlo (SMC) sampling, or particle filtering. Inequality constraints are imposed by accept/reject steps in the algorithm. The proposed approach provides samples representing the posterior distribution at each time point, and is shown to satisfy the same theoretical properties as unconstrained SMC. Illustrative examples show that the results of the proposed approach are at least as accurate as those of moving horizon estimation, but computationally more efficient; in addition, the approach indicates the uncertainty associated with these estimates.
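A toy sketch of the accept/reject construction for a scalar random-walk state known to satisfy x >= 0 (the model, noise levels, and particle count are hypothetical): constraint-violating proposals are redrawn, then particles are weighted by the measurement likelihood and resampled.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_particle_filter(observations, n_particles=2000, q=0.1, r=0.5):
    """SMC for a scalar random-walk state known to satisfy x >= 0, observed in
    Gaussian noise; constraint-violating proposals are rejected and redrawn."""
    particles = np.abs(rng.normal(1.0, 1.0, n_particles))      # feasible initial particles
    estimates = []
    for y in observations:
        proposed = particles + rng.normal(0.0, q, n_particles)
        while np.any(proposed < 0.0):                          # accept/reject step
            bad = proposed < 0.0
            proposed[bad] = particles[bad] + rng.normal(0.0, q, bad.sum())
        w = np.exp(-0.5 * ((y - proposed) / r) ** 2)           # measurement likelihood
        w /= w.sum()
        particles = proposed[rng.choice(n_particles, size=n_particles, p=w)]
        estimates.append(particles.mean())                     # posterior mean estimate
    return estimates

true_x = np.maximum(0.0, 1.0 + np.cumsum(rng.normal(0.0, 0.1, 20)))
obs = true_x + rng.normal(0.0, 0.5, 20)
print(np.round(constrained_particle_filter(obs), 2))
```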

11.
Querying imprecise data in moving object environments (cited 15 times: 0 self-citations, 15 by others)
In moving object environments, it is infeasible for the database tracking the movement of objects to store the exact locations of objects at all times. Typically, the location of an object is known with certainty only at the time of the update. The uncertainty in its location increases until the next update. In this environment, it is possible for queries to produce incorrect results based upon old data. However, if the degree of uncertainty is controlled, then the error of the answers to queries can be reduced. More generally, query answers can be augmented with probabilistic estimates of the validity of the answer. We study the execution of probabilistic range and nearest-neighbor queries. The imprecision in answers to queries is an inherent property of these applications due to uncertainty in data, unlike the techniques for approximate nearest-neighbor processing that trade accuracy for performance. Algorithms for computing these queries are presented for a generic object movement model and detailed solutions are discussed for two common models of uncertainty in moving object databases. We study the performance of these queries through extensive simulations.
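For the special case of an axis-aligned range query and independent Gaussian location uncertainty that grows with the time since the last update, the answer probability has a closed form; the sketch below uses hypothetical positions and growth rates.

```python
from scipy.stats import norm

def prob_in_range(mu, sigma, rect):
    """Probability that an object whose location uncertainty is an independent
    Gaussian around its last reported position lies inside an axis-aligned
    query rectangle given as [(x_lo, x_hi), (y_lo, y_hi)]."""
    p = 1.0
    for m, s, (lo, hi) in zip(mu, sigma, rect):
        p *= norm.cdf(hi, loc=m, scale=s) - norm.cdf(lo, loc=m, scale=s)
    return p

# last reported position (5, 5); uncertainty grows with the time since the update
for t in (1.0, 5.0, 20.0):
    sigma = (0.2 * t, 0.2 * t)
    print(t, round(prob_in_range((5.0, 5.0), sigma, [(4.0, 6.0), (4.0, 6.0)]), 3))
```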

12.
Duggan, J., Byrne, J., Lyons, G.J. IEEE Software, 2004, 21(3): 76-82
Task allocation during the construction stage of software engineering is complex and challenging. First, engineers must chart a path between the often conflicting objectives of time and quality. Second, a huge productivity variance exists across the spectrum of practicing software developers. Properly handling this variance amid those time and quality pressures is a tricky management problem. Multiobjective optimization might provide the answer. This emerging research area generates optimal solutions for projects with many objectives. An experienced decision-maker analyzes these solutions and selects the best one. Here, we describe such an approach and demonstrate it with a problem involving the allocation of software construction tasks among a team of software developers with varying degrees of skill.
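As a minimal illustration of the multiobjective view (not the authors' optimization method), the sketch below extracts the Pareto-optimal set from hypothetical candidate allocations scored on completion time and expected defects, both to be minimized.

```python
def pareto_front(allocations):
    """Return the non-dominated allocations for two objectives to be minimized
    (completion time, expected defects); inputs are hypothetical."""
    front = []
    for name, time, defects in allocations:
        dominated = any(t <= time and d <= defects and (t < time or d < defects)
                        for _, t, d in allocations)
        if not dominated:
            front.append((name, time, defects))
    return front

candidates = [("A", 30, 12), ("B", 25, 20), ("C", 40, 8), ("D", 32, 15), ("E", 28, 14)]
print(pareto_front(candidates))   # the decision-maker picks one solution from this set
```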

13.
We consider a network with unreliable communication channels and perfectly reliable nodes. The diameter constrained reliability for such a network is defined as the probability that between each pair of nodes, there exists a path consisting of operational edges whose number is upper bounded by a given integer. The problem of computing this characteristic is NP-hard, just like the problem of computing the probability of a network’s connectivity. We propose a formula that lets one use junction points to compute the reliability of a two-pole system with diameter constraints, which makes the computations faster.
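The junction-point formula itself is not reproduced here; the sketch below only illustrates the quantity being computed, estimating the diameter-constrained two-terminal reliability by Monte Carlo on a small hypothetical network with networkx.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# undirected test network: edge -> probability that the edge is operational (hypothetical)
edges = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "b"): 0.8,
         ("b", "t"): 0.8, ("a", "b"): 0.95, ("s", "t"): 0.5}

def diameter_constrained_reliability(source, target, d, n_samples=10_000):
    """Monte Carlo estimate of P(an operational source-target path with at most d edges exists)."""
    hits = 0
    for _ in range(n_samples):
        G = nx.Graph()
        G.add_nodes_from({u for e in edges for u in e})
        G.add_edges_from(e for e, p in edges.items() if rng.random() < p)
        try:
            hits += nx.shortest_path_length(G, source, target) <= d
        except nx.NetworkXNoPath:
            pass
    return hits / n_samples

print("R(d=1):", diameter_constrained_reliability("s", "t", 1))
print("R(d=2):", diameter_constrained_reliability("s", "t", 2))
```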

14.
There are two commonly used analytical reliability analysis methods based on approximation of the performance function: the first-order reliability method (FORM), which uses a linear approximation, and the second-order reliability method (SORM), which uses a quadratic approximation. Reliability analysis using FORM can be acceptably accurate for mildly nonlinear performance functions, whereas SORM may be necessary for accuracy with nonlinear and multi-dimensional performance functions. Even though reliability analysis using SORM may be accurate, it is less often used for probability of failure calculation because SORM requires the second-order sensitivities. Moreover, SORM-based inverse reliability analysis is rather difficult to develop. This paper proposes an inverse reliability analysis method that yields accurate probability of failure calculation without requiring the second-order sensitivities, for reliability-based design optimization (RBDO) of nonlinear and multi-dimensional systems. For the inverse reliability analysis, the most probable point (MPP)-based dimension reduction method (DRM) is developed. Since the FORM-based reliability index (β) is inaccurate for the MPP search of a nonlinear performance function, a three-step computational procedure is proposed to improve the accuracy of the inverse reliability analysis: probability of failure calculation using a constraint shift, reliability index update, and MPP update. Using these three steps, a new DRM-based MPP is obtained, which estimates the probability of failure of the performance function more accurately than FORM and more efficiently than SORM. The DRM-based MPP is then used in the next design iteration of RBDO to obtain an accurate optimum design even for nonlinear and/or multi-dimensional systems. Since DRM-based RBDO requires more function evaluations, the enriched performance measure approach (PMA+) with new tolerances for constraint activeness and a reduced rotation matrix is used to reduce the number of function evaluations.
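For reference, a compact FORM sketch using the standard HL-RF iteration in standard normal space with a hypothetical limit state; this is the baseline the paper improves on, not the proposed DRM-based inverse analysis, and the linearized estimate is only approximate for nonlinear performance functions.

```python
import numpy as np
from scipy.stats import norm

def approx_grad(g, u, h=1e-6):
    """Central-difference gradient of g at u."""
    return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h) for e in np.eye(len(u))])

def form_hlrf(g, n_dim, tol=1e-6, max_iter=100):
    """FORM via the HL-RF iteration in standard normal (u) space.
    Failure is g(u) < 0; returns the reliability index beta and Phi(-beta)."""
    u = np.zeros(n_dim)
    for _ in range(max_iter):
        grad = approx_grad(g, u)
        u_new = (grad @ u - g(u)) * grad / (grad @ grad)   # MPP of the linearized limit state
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)
    return beta, norm.cdf(-beta)

# hypothetical, mildly nonlinear limit state already expressed in u-space
g = lambda u: 5.0 - u[0] - u[1] - 0.1 * u[0] * u[1]
beta, pf = form_hlrf(g, n_dim=2)
print(f"beta = {beta:.3f}, FORM Pf = {pf:.3e}")
```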

15.
Since some assumptions of the model μ = a + bϕ(S) used in accelerated life testing analysis, such as the requirement that the function ϕ(·) be completely specified and that the relationship between μ and ϕ(S) be linear, generally do not hold, the estimation at the stress level of interest contains uncertainty. In this paper, we propose using a non-linear fuzzy regression model to perform the extrapolation process, and adapting fuzzy probability theory to classical reliability, incorporating uncertainty and process experience, in order to obtain the fuzzy reliability of a component. Results show that the proposed model is able to estimate reliability when the mentioned assumptions are violated and uncertainty is implicit, whereas the classical models are unreliable in that case.

16.
We first introduce Jeffrey’s rule of conditioning and explain how it allows us to determine the probability of an event related to one variable from information about a collection of conditional probabilities of that event conditioned on the state of another variable. We note that in the original Jeffrey paradigm the uncertainty about the state of the conditioning variable is expressed as a probability distribution. Here we extend this by allowing alternative formulations of the uncertainty about the conditioning variable. We first consider the case where our uncertainty is expressed in terms of a measure. This allows us to consider the case where our uncertainty is a possibility distribution. We next consider the case where our uncertainty about the conditioning variable is expressed in terms of a Dempster–Shafer belief structure. Finally we consider the case where we are ignorant about the underlying distribution and must use the decision maker’s subjective attitude about the nature of uncertainty to provide the necessary information to use in the Jeffrey rule.
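In the probabilistic case, the basic Jeffrey rule reduces to a weighted sum, as in this small sketch with hypothetical numbers; the measure-based, possibilistic, and Dempster–Shafer extensions discussed in the paper replace the weighting distribution q.

```python
import numpy as np

def jeffrey_rule(p_a_given_b, q_b):
    """Jeffrey's rule for the probabilistic case: P(A) = sum_i P(A | B_i) q(B_i)."""
    p_a_given_b = np.asarray(p_a_given_b, dtype=float)
    q_b = np.asarray(q_b, dtype=float)
    assert np.isclose(q_b.sum(), 1.0)
    return float(p_a_given_b @ q_b)

# hypothetical example: P(rain | season) with B ranging over the four seasons
p_rain_given_season = [0.6, 0.4, 0.2, 0.5]
print(jeffrey_rule(p_rain_given_season, q_b=[0.25, 0.25, 0.25, 0.25]))  # 0.425
print(jeffrey_rule(p_rain_given_season, q_b=[0.10, 0.20, 0.60, 0.10]))  # revised q -> 0.31
```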

17.
This paper presents Bayes estimators for the reliability measures of the individual components in a multi-component system in the presence of masked system life test data. The lifetime distributions of the system components are assumed to be geometric with different parameters. Two-sided Bayesian probability intervals of the parameters are also derived. A numerical simulation study is given in order to: (i) explain how one can apply the theoretical results obtained, and (ii) study the influence of the sample size and the masking level on the accuracy of the point estimates.

18.
In cognitive wireless cloud networks, to address the energy wasted when the data transmissions of sensing users collide, this paper presents a mobile-cloud-assisted terminal energy-saving scheme based on optimizing the sensing-data transmission time. The powerful computing capability of the cloud platform is used to statistically analyze the traffic of the sensing users and to derive the unreliable detection region. On this basis, the unreliable detection probability of the sensing users is taken as the objective function; while minimizing this probability, the optimal data transmission time of the sensing users is computed, and the transmission time is adaptively adjusted according to the traffic load, achieving macro-level coordination of the sensing users and thereby saving the energy consumed by the sensing terminals in the system. Simulation results show that the scheme reduces the collision probability of the sensing users, improves the reliability of their data transmission, and lowers the terminal energy consumption caused by data collisions.

19.
This paper studies the problem of maximizing the number of correct results of dependent tasks computed unreliably. We consider a distributed system composed of a reliable server that coordinates the computation of a massive number of unreliable workers. Any worker computes correctly with probability p < 1. Any incorrectly computed task corrupts all dependent tasks. The goal is to determine which tasks should be computed by the (reliable) server and which by the (unreliable) workers, and when, so as to maximize the expected number of correct results, under a constraint d on the computation time. This problem is motivated by distributed computing applications that persist partial results of computations for future use in other computations and that want to ensure that the persisted results are of high quality. We show that this optimization problem is NP-hard. Then we study optimal scheduling solutions for the mesh with the tightest deadline. We present combinatorial arguments that describe all optimal solutions for two ranges of values of worker reliability p, when p is close to zero and when p is close to one.
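A small sketch of the objective being maximized (not the scheduling algorithm): given a dependency DAG, a server/worker assignment, and worker reliability p, the expected number of correct results sums each task's probability of being uncorrupted; the 2x2 mesh and the assignments are hypothetical.

```python
import networkx as nx

def expected_correct(dag, assignment, p):
    """Expected number of correct results: a task's result is correct only if the task
    itself and every task it depends on were computed correctly (the server is always
    correct, a worker is correct with probability p)."""
    total = 0.0
    for task in dag:
        deps = nx.ancestors(dag, task) | {task}
        n_worker = sum(assignment[t] == "worker" for t in deps)
        total += p ** n_worker
    return total

# hypothetical 2x2 mesh of dependent tasks
dag = nx.DiGraph([((0, 0), (0, 1)), ((0, 0), (1, 0)),
                  ((0, 1), (1, 1)), ((1, 0), (1, 1))])
all_workers = {t: "worker" for t in dag}
server_root = {**all_workers, (0, 0): "server"}
print(expected_correct(dag, all_workers, p=0.9))   # every task on unreliable workers
print(expected_correct(dag, server_root, p=0.9))   # the root task computed by the server
```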

20.
In reliability analysis, both input variable uncertainty and metamodel uncertainty are often encountered in practice. The input uncertainty includes the statistical uncertainty of the distribution parameters due to lack of knowledge or insufficient data. Metamodel uncertainty arises when the response function is approximated by a surrogate function using a finite number of responses in order to reduce costly computations. In this study, a reliability analysis procedure is proposed based on a Bayesian framework that incorporates these uncertainties in an integrated manner in the form of a posterior PDF. The PDF, often expressed by an arbitrary function, is evaluated via the Markov chain Monte Carlo (MCMC) method, an efficient simulation method for drawing random samples that follow the distribution. In order to avoid the nested computation of the full Bayesian approach, a posterior predictive approach is employed, which requires only a single loop of reliability analysis. A Gaussian process model is employed for the metamodel. Mathematical and engineering examples are used to demonstrate the proposed method. In the results, compared with the full Bayesian approach, the predictive approach provides much less information, i.e., only a point estimate of the probability. Nevertheless, the predictive approach adequately accounts for the uncertainties with much less computation, which is more advantageous in design practice. The less data are provided, the higher the statistical uncertainty, leading to a higher failure probability (i.e., a lower reliability).
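A minimal single-loop posterior predictive sketch under strong simplifying assumptions (known-variance normal load, flat prior, analytic limit state, no metamodel): distribution-parameter draws from the posterior are nested inside the same Monte Carlo loop, and the comparison with the plug-in estimate shows how statistical uncertainty raises the failure probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# limit state: failure when g(x) = capacity - x < 0 (hypothetical)
capacity = 10.0

# small data set for the load variable X ~ Normal(mu, sigma=1) with mu unknown
data = rng.normal(7.0, 1.0, size=8)
sigma = 1.0

# conjugate posterior of mu under a flat prior: Normal(mean(data), sigma^2 / n)
post_mean, post_std = data.mean(), sigma / np.sqrt(len(data))

# posterior predictive (single-loop) estimate: draw mu from its posterior, then X | mu,
# and count failures; the statistical uncertainty about mu is integrated out
n = 200_000
mu_draws = rng.normal(post_mean, post_std, n)
x_draws = rng.normal(mu_draws, sigma)
pf_predictive = np.mean(capacity - x_draws < 0.0)

# plugging in the point estimate of mu ignores the statistical uncertainty
pf_plugin = np.mean(capacity - rng.normal(post_mean, sigma, n) < 0.0)
print(f"predictive Pf = {pf_predictive:.4f}, plug-in Pf = {pf_plugin:.4f}")
```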
