Similar Documents
20 similar documents found (search time: 14 ms)
1.
Based on a black-box model of a complex system, together with intervals and probabilities describing what is known about its inputs, we want to estimate the system's reliability. This problem arises in several areas, most notably engineering reliability analysis under conditions of poor measurement and highly complex system models. Using the results of tests performed on the system's computer model, we can estimate lower and upper bounds on the probability that the system is in a desirable state. This is equivalent to using Monte Carlo sampling to estimate cumulative belief and plausibility values of functionally propagated finite random intervals. In this paper, we prove that these estimates are correct in the sense that, under reasonable assumptions, they converge to the actual probability bounds.
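As a rough sketch of the sampling idea described above (not the paper's actual algorithm or proof), belief and plausibility bounds can be estimated for a toy two-input model; the black-box `response`, the focal intervals, and all parameters are illustrative assumptions:

```python
import random

def response(x, y):
    # Hypothetical black-box model (assumed for illustration); the system
    # is in a desirable state when the response is non-negative.
    return 0.8 - x * y

# Known information about each input: focal intervals with probabilities.
X = [((0.0, 0.5), 0.7), ((0.5, 1.0), 0.3)]
Y = [((0.0, 0.8), 0.6), ((0.8, 1.0), 0.4)]

def sample_focal(elems, rng):
    # Pick one focal interval according to its probability mass.
    u, acc = rng.random(), 0.0
    for interval, p in elems:
        acc += p
        if u <= acc:
            return interval
    return elems[-1][0]

def probability_bounds(n_runs=5000, n_inner=50, seed=1):
    rng = random.Random(seed)
    belief = plausibility = 0
    for _ in range(n_runs):
        (xl, xu), (yl, yu) = sample_focal(X, rng), sample_focal(Y, rng)
        # Approximate the propagated output range by inner sampling of the box.
        vals = [response(rng.uniform(xl, xu), rng.uniform(yl, yu))
                for _ in range(n_inner)]
        belief += min(vals) >= 0.0        # desirable even in the worst case
        plausibility += max(vals) >= 0.0  # desirable in at least one case
    return belief / n_runs, plausibility / n_runs
```

The two frequencies estimate the lower (belief) and upper (plausibility) bounds on the probability of the desirable state.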

2.
Robustness testing is extremely important for software, especially for security products. This paper models software products with finite state machines: by augmenting the states and defining each state's response to various abnormal events, a complete enhanced finite state machine is constructed that can be used for robustness testing. We applied the method to SCP02, the secure channel protocol for secure communication in the GlobalPlatform card specification. The results show that the test suite generated by our method achieves higher coverage and fault-detection capability than the test suite endorsed by GlobalPlatform, and that the generated test cases can be used not only for robustness testing but also for functional and conformance testing.

3.
Management of interval probabilistic data
In this paper we present a data model for uncertain data in which uncertainty is represented using interval probabilities. The theory introduced in the paper can be applied to different specific data models, because the entire approach has been developed independently of the kind of objects manipulated, such as XML documents, relational tuples, or other data types. As a consequence, our theory can be used to extend existing data models with the management of uncertainty. In particular, the data model we obtain as an application to XML data is the first proposal that combines XML, interval probabilities, and a powerful query algebra with selection, projection, and cross product. The cross product operator is not based on assumptions of independence between XML trees from different collections. Being defined with a possible-worlds semantics, our operators are proper extensions of their traditional counterparts and reduce to them when there is no uncertainty. The main practical result of the paper is a set of equivalences that can be used to compare or rewrite algebraic queries on interval probabilistic data, in particular XML and relational.

4.
5.
In recent years particle filters have been applied to a variety of state estimation problems. A particle filter is a sequential Monte Carlo Bayesian estimator of the posterior density of the state using weighted particles. The efficiency and accuracy of the filter depend mostly on the number of particles used in the estimation and on the propagation function used to re-allocate weights to these particles at each iteration. If the imprecision (bias and noise) in the available information is high, the number of particles needs to be very large in order to obtain good performance, which may make a real-time implementation too complex. This kind of imprecision can easily be represented by interval data if the maximum error is known. Handling interval data is a new approach that has been successfully applied to several real applications. In this paper, we propose an extension of the particle filter algorithm that handles interval data using interval analysis and constraint satisfaction techniques. In standard particle filtering, particles are point states associated with weights whose likelihoods are defined by a statistical model of the observation error. In the box particle filter, particles are boxes associated with weights whose likelihoods are defined by a bounded model of the observation error. Experiments using real data for the global localization of a vehicle show the usefulness and efficiency of the proposed approach.
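A minimal one-dimensional sketch of the box-weighting step under a bounded-error observation model (the function name and the interval-overlap likelihood are illustrative assumptions; the full filter in the paper also uses interval analysis and constraint propagation):

```python
def box_pf_update(boxes, weights, y_lo, y_hi):
    """One measurement update of a 1-D box particle filter (sketch).

    Each particle is an interval box; the measurement is the interval
    [y_lo, y_hi] from a bounded-error observation model. A box's likelihood
    is taken as the fraction of it consistent with the measurement, and
    consistent boxes are contracted to the intersection.
    """
    out_boxes, out_w = [], []
    for (lo, hi), w in zip(boxes, weights):
        new_lo, new_hi = max(lo, y_lo), min(hi, y_hi)
        if new_hi > new_lo:
            out_boxes.append((new_lo, new_hi))
            out_w.append(w * (new_hi - new_lo) / (hi - lo))
        else:
            out_boxes.append((lo, hi))  # inconsistent: keep box, zero weight
            out_w.append(0.0)
    s = sum(out_w)
    return out_boxes, [wi / s for wi in out_w] if s else out_w
```

Boxes overlapping the measurement interval keep (renormalized) weight and shrink to the overlap; disjoint boxes get weight zero.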

6.
A minimax estimation problem is considered for a multidimensional linear regression model containing uncertain parameters and random quantities. The joint distribution of the random quantities entering the observation model is not prescribed exactly, but it has a fixed mean and a covariance matrix from a given set. To optimize the estimation algorithm, we apply a minimax approach with the risk measure taken as the probability that the estimation error exceeds a prescribed level. It is shown that the linear estimation problem is equivalent to a minimax problem with a mean-square criterion. Moreover, the corresponding linear estimate is the best, in the minimax sense, under the probabilistic criterion within the class of all unbiased estimates. The least favorable distribution of the random model parameters is also constructed. Several special cases and a numerical example are considered.

7.
Modeling and Testing of Web Applications Based on ORD and FSM
Web testing is an effective technique for assuring high-quality Web applications. However, because of their special characteristics and complexity, traditional testing theory and methodology are difficult to apply directly to Web applications. This paper analyzes and models Web applications, tests them, and proposes a practical Web testing model. A page flow diagram (PFD) is obtained first and then used to produce an object relation diagram (ORD); the proposed algorithm then converts the ORD into a formal finite state machine (FSM) model. Based on the FSM model, an effective method for automatically generating test paths is presented; these paths can be converted into test specifications in XML syntax. A test engine takes the specifications as input and finally produces a test report. A small Web application developed by the authors, SWLS (Simple Web Login System), is used as a running example throughout the paper.
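The test-path generation step can be sketched as a transition-coverage search over an FSM; the tiny login FSM below is an illustrative stand-in, not the actual SWLS model:

```python
from collections import deque

# Hypothetical FSM: (state, event) -> next state (names are illustrative).
transitions = {
    ("start", "open_login"): "login_page",
    ("login_page", "submit_ok"): "home",
    ("login_page", "submit_bad"): "login_page",
    ("home", "logout"): "start",
}

def shortest_path(fsm, src, dst):
    # BFS over states; returns the event sequence leading from src to dst.
    q, seen = deque([(src, [])]), {src}
    while q:
        state, events = q.popleft()
        if state == dst:
            return events
        for (s, e), t in fsm.items():
            if s == state and t not in seen:
                seen.add(t)
                q.append((t, events + [e]))
    return None

def transition_cover(fsm, initial):
    # One test path per transition: reach its source state, then fire it.
    suites = []
    for (s, e) in fsm:
        prefix = shortest_path(fsm, initial, s)
        if prefix is not None:
            suites.append(prefix + [e])
    return suites
```

Each generated event sequence could then be serialized into an XML test specification for a test engine.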

8.
We propose an information filtering system based on a probabilistic model. We assume that a document consists of words occurring according to a probability distribution, and regard a document as a sample drawn from that distribution. In this article, we adopt a multinomial distribution and represent each document as a probability distribution over the words it contains. When an information filtering system selects information, it uses the similarity between the user's interests (a user profile) and a document. Since our proposed system is built on a probabilistic model, this similarity is defined using the Kullback-Leibler divergence. To create the user profile, we must optimize the Kullback-Leibler divergence; since it is a nonlinear function, we use a genetic algorithm for the optimization. We carry out experiments and confirm the effectiveness of the proposed method. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4-6, 2005.
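A minimal sketch of the KL-divergence similarity between a user profile and a document, both modeled as smoothed multinomial word distributions (the vocabulary and texts are toy assumptions; the paper's GA-based profile optimization is not shown):

```python
import math
from collections import Counter

VOCAB = ["robot", "filter", "market", "genome"]   # toy vocabulary (assumed)

def word_dist(text, vocab=VOCAB, eps=1e-6):
    # Multinomial estimate with a little additive smoothing so KL stays finite.
    counts = Counter(text.split())
    total = sum(counts[w] for w in vocab) + eps * len(vocab)
    return {w: (counts[w] + eps) / total for w in vocab}

def kl_divergence(p, q):
    # D(p || q) = sum_w p(w) * log(p(w) / q(w)); lower means more similar.
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

profile = word_dist("robot filter robot filter filter")   # user's interests
```

A filtering system would then rank candidate documents by `kl_divergence(profile, doc)` and deliver the lowest-divergence ones.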

9.
This paper presents a generalized variable approach for confidence interval estimation of a common correlation coefficient from several independent samples drawn from bivariate normal populations. The approach provides one-sided bounds and two-sided confidence intervals with satisfactory coverage probabilities regardless of the number of samples, the sample sizes, and the magnitude of the common correlation coefficient, whereas the large-sample approach can be very liberal for one-sided bounds. The large-sample approach generally performs well for two-sided confidence interval estimation.

10.
In this paper, the problem of observer-based control is investigated for networked control systems (NCSs) with probabilistic interval distribution communication delays. In the presence of network-induced delays and packet dropouts in the sensor-to-controller and controller-to-actuator connections, a new model for an observer-based NCS is established, which is then converted to an equivalent linear system with two random time-varying delays satisfying Bernoulli distributions. By utilizing information concerning the probability distribution of the delays and by adopting a tighter bounding technique, less conservative asymptotic mean-square stable sufficient conditions are derived in the form of linear matrix inequalities (LMIs). Numerical examples are provided to demonstrate the effectiveness of the proposed method.

11.
The problem of multisensor distributed interval-estimation fusion for an unknown parameter θ is considered. An optimal interval-estimation fusion model, the convex linear combination fusion, is established, and a Gauss-Seidel iterative algorithm for searching for the optimal weight coefficients is given. In addition, an approximate interval-estimation fusion is presented that greatly reduces the computational load and in some cases attains the optimal estimation performance. Finally, computer simulations show that the fused interval estimates obtained by these methods outperform the interval estimate of each individual sensor.
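For independent sensors, the optimal convex linear combination has a closed form (inverse-variance weighting); a minimal sketch of that special case, with the Gauss-Seidel weight search needed in the general (correlated) case left out:

```python
def fuse(estimates, variances):
    # Inverse-variance weights: the optimal convex linear combination when
    # the sensor errors are independent. (A Gauss-Seidel iteration, as in
    # the paper, searches for the weights in the general case.)
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    theta = sum((w / s) * x for w, x in zip(inv, estimates))
    return theta, 1.0 / s   # fused estimate and fused variance

def conf_interval(theta, var, z=1.96):
    # Interval estimate from the fused point estimate and variance.
    half = z * var ** 0.5
    return theta - half, theta + half
```

Because the fused variance `1 / sum(1/v_i)` is smaller than every individual variance, the fused interval is narrower than each sensor's own interval.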

12.
Xinjia Chen, Kemin Zhou. Automatica, 2004, 40(10): 1787-1789.
The Clopper-Pearson confidence interval has been documented as an exact approach in some of the statistics literature. More recently, this approach to interval estimation has been introduced into probabilistic control theory and referred to as non-conservative in the control community. In this note, we clarify that the so-called exact approach is actually conservative. In particular, we derive analytic results demonstrating the extent of the conservatism in the context of probabilistic robustness analysis. This investigation encourages the search for better methods of confidence interval construction for robust control purposes.
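The conservatism is easy to reproduce numerically: the exact coverage of the Clopper-Pearson interval, computed below by bisection on binomial tails, sits strictly above the nominal 95% level (the choices n = 20 and p = 0.3 are arbitrary illustrations, not from the note):

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def tail_ge(k, n, p):
    # P(X >= k) for X ~ Binomial(n, p); monotone increasing in p.
    return sum(binom_pmf(i, n, p) for i in range(k, n + 1))

def solve_p(f, target, iters=60):
    # Bisection for f(p) == target, with f monotone increasing on [0, 1].
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(k, n, alpha=0.05):
    # Lower limit: P(X >= k) = alpha/2; upper limit: P(X <= k) = alpha/2,
    # rewritten as P(X >= k+1) = 1 - alpha/2 to keep f increasing.
    lower = 0.0 if k == 0 else solve_p(lambda p: tail_ge(k, n, p), alpha / 2)
    upper = 1.0 if k == n else solve_p(lambda p: tail_ge(k + 1, n, p),
                                       1 - alpha / 2)
    return lower, upper

def coverage(n, p, alpha=0.05):
    # Actual probability that the interval covers the true p.
    total = 0.0
    for k in range(n + 1):
        lo, hi = clopper_pearson(k, n, alpha)
        if lo <= p <= hi:
            total += binom_pmf(k, n, p)
    return total
```

Because the binomial is discrete, this coverage exceeds 1 - alpha for almost every p, which is exactly the conservatism the note discusses.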

13.
14.
The probabilistic signed digraph (PSDG) model introduces probability information for nodes and branches into the structure of the traditional qualitative signed digraph (SDG) model, improving the performance of traditional qualitative SDG fault diagnosis and raising its diagnostic resolution. In a PSDG model, however, the probability distribution of a node is usually the conditional...

15.
In comparative studies of rare events, fixing group sizes may result in groups with zero events. To overcome this difficulty, one may adopt an inverse sampling design which fixes the number of events, resulting in random variables following the negative binomial distribution. This article presents a new approach to setting confidence intervals for effect measures under inverse sampling, using the variance estimates recovered from exact confidence limits for single negative binomial proportions. Exact numerical evaluation results demonstrate that the proposed procedure performs well.

16.
In this paper we consider the beta regression model recently proposed by Ferrari and Cribari-Neto [2004. Beta regression for modeling rates and proportions. J. Appl. Statist. 31, 799-815], which is tailored to situations where the response is restricted to the standard unit interval and the regression structure involves regressors and unknown parameters. We derive the second order biases of the maximum likelihood estimators and use them to define bias-adjusted estimators. As an alternative to the two analytically bias-corrected estimators discussed, we consider a bias correction mechanism based on the parametric bootstrap. The numerical evidence favors the bootstrap-based estimator and also one of the analytically corrected estimators. Several different strategies for interval estimation are also proposed. We present an empirical application.

17.
Vessel structures such as the retinal vasculature are important features for computer-aided diagnosis. In this paper, a probabilistic tracking method is proposed to detect blood vessels in retinal images. During the tracking process, vessel edge points are detected iteratively using local grey-level statistics and the vessel's continuity properties. At each step, a statistical sampling scheme is adopted to select a number of candidate vessel edge points in a local search area. Local cross-sectional intensity profiles of the vessel are modeled by a Gaussian-shaped curve. A Bayesian method with the maximum a posteriori (MAP) probability criterion is then used to identify the local vessel structure and select the edge points from among these candidates. Evaluation is performed on both simulated vascular images, with different geometric shapes and noise levels, and real retinal images from the REVIEW database. Performance is measured using the segmentation matching factor (SMF) as a quality parameter. Our approach performed better than Sun's and Chaudhuri's methods. ROC curves are also plotted, showing more effective detection of retinal blood vessels (true positive rate) with fewer false detections (false positive rate) than Sun's method.

18.
19.
A Bayesian approach to estimating the selection probabilities of probabilistic Boolean networks is developed in this study. The concepts of inverse Boolean function and updatable set are introduced to specify the states that can be used to update a Bayesian posterior distribution. The convergence of the posteriors is analyzed by combining the semi-tensor product technique with a state decomposition algorithm for Markov chains. Finally, numerical examples demonstrate the proposed estimation algorithm.
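The conjugate core of such a Bayesian update can be sketched with a Dirichlet prior over the selection probabilities (the prior, the counts, and the reduction to simple counting are illustrative assumptions; in the paper, the updatable-set machinery determines which observed transitions may be counted):

```python
def dirichlet_update(alpha, counts):
    # Conjugate update: posterior concentration = prior + observed counts.
    return [a + c for a, c in zip(alpha, counts)]

def posterior_mean(alpha):
    # Posterior mean of each selection probability under Dirichlet(alpha).
    s = sum(alpha)
    return [a / s for a in alpha]

# Two candidate constituent networks with a uniform Dirichlet(1, 1) prior;
# the counts of transitions attributed to each network are assumed data.
post = dirichlet_update([1.0, 1.0], [30, 10])
mean = posterior_mean(post)
```

As more updatable transitions are observed, the posterior mean concentrates on the empirical selection frequencies.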

20.
In D'Ariano (Philosophy of Quantum Information and Entanglement, Cambridge University Press, Cambridge, UK, 2010), one of the authors proposed a set of operational postulates to be considered for axiomatizing Quantum Theory. The underlying idea is to derive Quantum Theory as the mathematical representation of a fair operational framework, i.e. a set of rules that allows the experimenter to make predictions about future events on the basis of suitable tests, e.g. without interference from uncontrollable sources and with local control and low experimental complexity. In addition to causality, two main postulates have been considered: PFAITH (existence of a pure preparationally faithful state) and FAITHE (existence of a faithful effect). These postulates have exhibited an unexpected theoretical power, excluding all known nonquantum probabilistic theories. The same paper also introduced postulate PURIFY-1 (purifiability of all states), later reconsidered in the stronger version PURIFY-2 (purifiability of all states, unique up to reversible channels on the purifying system) in Chiribella et al. (Reversible realization of physical processes in probabilistic theories, arXiv:0908.1583). There it was shown that postulate PURIFY-2, together with causality and local discriminability, narrows the probabilistic theory to something very close to the quantum one. In the present paper we test the above postulates on some nonquantum probabilistic models. The first model, the two-box world, is an extension of the Popescu-Rohrlich model (Found Phys, 24:379, 1994), which achieves the greatest violation of the CHSH inequality compatible with the no-signaling principle. The second model, the two-clock world, is actually a full class of models, all having a disk as the convex set of states for the local system. One of them corresponds to the two-rebit world, namely qubits with a real Hilbert space. The third model, the spin factor, is a sort of n-dimensional generalization of the clock. Finally, the last model is the classical probabilistic theory. We see how each model violates some of the proposed postulates, when and how teleportation can be achieved, and we analyze other interesting connections between these postulate violations, along with deep relations between the local and nonlocal structures of the probabilistic theory.
