Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper investigates Walley's concepts of epistemic irrelevance and epistemic independence for imprecise probability models. We study the mathematical properties of irrelevance and independence, and their relation to the graphoid axioms. Examples are given to show that epistemic irrelevance can violate the symmetry, contraction and intersection axioms, that epistemic independence can violate contraction and intersection, and that this accords with informal notions of irrelevance and independence.

2.
This paper investigates the concept of strong conditional independence for sets of probability measures. Couso, Moral and Walley [7] have studied different possible definitions for unconditional independence in imprecise probabilities. Two of them were considered as more relevant: epistemic independence and strong independence. In this paper, we show that strong independence can have several extensions to the case in which a conditioning to the value of additional variables is considered. We will introduce simple examples in order to make clear their differences. We also give a characterization of strong independence and study the verification of semigraphoid axioms.

3.
We present a computational model which predicts people’s switching behaviour in repeated gambling scenarios such as the Iowa Gambling Task. This Utility-Caution model suggests that people’s tendency to switch away from an option is due to a utility factor which reflects the probability and the amount of losses experienced compared to gains, and a caution factor which describes the number of choices made consecutively in that option. Using a novel next-choice-prediction method, the Utility-Caution model was tested using two sets of data on the performance of participants in the Iowa Gambling Task. The model produced significantly more accurate predictions of people’s choices than the previous Bayesian expected-utility model and expectancy-valence model.

4.
This paper analyzes the firewall-independent factors of intrusion attacks, providing directional guidance for formulating security protection strategies. By converting unpredictable risk factors into partially foreseeable ones, it offers a strategic reference scheme for security deployment.

5.
Description logic is the logical foundation of the Semantic Web and a tool for formally expressing domain knowledge. As a decidable subset of first-order logic, it is well suited to modeling the concepts and terminology of a domain. Because of the needs of certain applications and the difficulty of fully describing domain knowledge, a large amount of incomplete knowledge exists on the Web. Description logic is based on the open-world assumption and supports only monotonic reasoning, so it cannot handle incomplete knowledge. Adding the epistemic operator K to description logic yields epistemic description logic, whose nonmonotonic character and favorable time complexity give it clear advantages in handling incomplete knowledge. Building on the epistemic description logic ALCK, this paper adds transitive roles to propose a new epistemic description logic, ALCKR+, which retains the advantages of description logic, enhances its expressive power, and gains nonmonotonic reasoning ability through epistemic queries. The syntax, semantics, and tableau algorithm of ALCKR+ are designed, the correctness and decidability of the tableau algorithm are proved, and its time complexity is shown to be PSPACE-complete.

6.
This paper is concerned with the reliable inference of optimal tree-approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper reliability is achieved by Walley's imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectation for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
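The "traditional approach" this abstract contrasts with is essentially the Chow–Liu procedure: weight each pair of variables by empirical mutual information and keep a maximum-weight spanning tree. A minimal sketch under that reading (function names are illustrative, not from the paper):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) of two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_tree(columns):
    """Maximum-weight spanning tree over variables, edges weighted by
    pairwise mutual information (Kruskal with union-find)."""
    m = len(columns)
    edges = sorted(((mutual_information(columns[i], columns[j]), i, j)
                    for i, j in combinations(range(m), 2)), reverse=True)
    parent = list(range(m))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

The paper's reliable variant would score edges by interval-valued mutual information and keep only edges common to all plausible trees, which this sketch does not attempt.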

7.
When reasoning about complex domains, where information available is usually only partial, nonmonotonic reasoning can be an important tool. One of the formalisms introduced in this area is Reiter's Default Logic (1980). A characteristic of this formalism is that the applicability of default (inference) rules can only be verified in the future of the reasoning process. We describe an interpretation of default logic in temporal epistemic logic which makes this characteristic explicit. It is shown that this interpretation yields a semantics for default logic based on temporal epistemic models. A comparison between the various semantics for default logic will show the differences and similarities of these approaches and ours.

8.
A general way of representing incomplete information is to use closed and convex sets of probability distributions, which are also called credal sets. Each credal set is associated with uncertainty, whose amount is quantified by an appropriate uncertainty measure. One of the requisite properties of uncertainty measures is the property of additivity, which is associated with the concept of independence. For credal sets, the concept of independence is not unique. This means that different definitions of independence lead to different definitions of additivity for uncertainty measures. In this paper, we compare the various definitions of independence, but our principal aim is to analyze those definitions that are employed in the most significant uncertainty measures established in the literature for credal sets.

9.
Random relations are random sets defined on a two-dimensional space (or higher). After defining the correlation for two variables constrained by a random relation as an interval, the effect of imprecision was studied by using a multi-valued mapping, whose domain is a space of joint random variables. This perspective led to the notions of consistent and non-consistent marginals, which parallel those of epistemic independence, and unknown interaction and epistemic independence for random sets, respectively. The calculation of the correlation bounds entails solving two optimisation problems that are NP-hard. When the entire random relation is available, it is shown that the hypothesis of non-consistent marginals leads to correlation bounds that are much larger (four orders of magnitude in some cases) than those obtained under the hypothesis of consistent marginals; this hierarchy parallels the hierarchy between probability bounds for unknown interaction and strong independence, respectively. Solutions of the optimisation problems were found at the extremes of their feasible intervals in 80–100% of the cases when non-consistent marginals were assumed, but this range became 75–84% when consistent marginals were assumed. When only the marginals are available, there is a complete loss of knowledge in the correlation, and the correlation interval is nearly vacuous or vacuous (i.e. [-1, 1]) even if the measurements are accurate enough that their narrowed intervals do not overlap. Solutions to the optimisation problems were found at the extremes of their feasible intervals 50% of the time or less.

10.
Belief Revision by Sets of Sentences
The aim of this paper is to extend the system of belief revision developed by Alchourrón, Gärdenfors and Makinson (AGM) to a more general framework. This extension enables a treatment of revision not only by single sentences but also by arbitrary sets of sentences, especially by infinite sets. The extended revision and contraction operators will be called general ones, respectively. A group of postulates for each operator is provided in such a way that it coincides with AGM's in the limit case. A notion of the nice-ordering partition is introduced to characterize the general contraction operation. A computation-oriented approach is provided for belief revision operations.

11.
12.
The naive Bayes classifier is known to obtain good results with a simple procedure. The method is based on the independence of the attribute variables given the variable to be classified. In real databases, where this hypothesis is not verified, this classifier continues to give good results. In order to improve the accuracy of the method, various works have been carried out in an attempt to reconstruct the set of the attributes and to join them so that there is independence between the new sets although the elements within each set are dependent. These methods are included in the ones known as semi-naive Bayes classifiers. In this article, we present an application of uncertainty measures on closed and convex sets of probability distributions, also called credal sets, in classification. We represent the information obtained from a database by a set of probability intervals (a credal set) via the imprecise Dirichlet model and we use uncertainty measures on credal sets in order to reconstruct the set of attributes, such as those mentioned, which shall enable us to improve the result of the naive Bayes classifier in a satisfactory way.
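The probability intervals the imprecise Dirichlet model assigns to a column of counts have a simple closed form (Walley's IDM with hyperparameter s, commonly s = 1 or s = 2); a minimal sketch, independent of the paper's actual implementation:

```python
def idm_intervals(counts, s=1.0):
    """Imprecise Dirichlet model: given observed counts n_i out of N = sum
    trials, each category's probability lies in the interval
    [n_i / (N + s), (n_i + s) / (N + s)]."""
    total = sum(counts)
    return [(n / (total + s), (n + s) / (total + s)) for n in counts]
```

The resulting family of intervals is the credal set on which the paper's uncertainty measures would then be evaluated.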

13.
Data analysis techniques can be applied to discover important relations among features. This is the main objective of the Information Root Node Variation (IRNV) technique, a new method to extract knowledge from data via decision trees. The decision trees used by the original method were built using classic split criteria. The performance of new split criteria based on imprecise probabilities and uncertainty measures, called credal split criteria, differs significantly from the performance obtained using the classic criteria. This paper extends the IRNV method using two credal split criteria: one based on a mathematical parametric model, and the other on a non-parametric model. The performance of the method is analyzed using a case study of traffic accident data to identify patterns related to the severity of an accident. We found that a larger number of rules is generated, significantly supplementing the information obtained using the classic split criteria.

14.
In real applications, the data processed by computers are often imprecise. Imprecise input data are generally modeled by segments, circles, squares, and similar regions. The model in which imprecise data are represented by parallel segments is particularly important, because it forms the basis for solving other, more complex models [1]. Löffler et al. [1] gave an algorithm that computes the maximum-area convex hull of imprecise data represented by vertical parallel segments in O(n^3) time. However, that algorithm performs the same amount of work on every input, whereas real-world imprecise data are rarely completely unstructured; for example, data sampled from the same device share the same error range. This paper first presents a new algorithm that computes the maximum-area convex hull of imprecise data with identical ranges in O(n log n) time. It then studies the maximum-area convex hull problem for inputs consisting of n imprecise data and m imprecise data that have degenerated to exact points. If the degenerated data are still treated as imprecise, applying the algorithm of [1] costs O((n+m)^3) time; for this case an algorithm with time complexity O(n^3 + nm) is given.

15.
To evaluate the security of a class of unbalanced Feistel ciphers, this paper studies in depth, by enumeration, the cipher's resistance to differential and linear cryptanalysis. Under the assumption that the round function is bijective, it is proved that 3, 4, 6, 8, 10, and 2r (r ≥ 3) rounds of the cipher have at least 1, 1, 3, 4, 5, and r round functions with nonzero input difference, respectively. Consequently, if the maximum probabilities of the round function's differentials and linear approximations are p and q, then the probabilities of differential and linear characteristics over 2r (r ≥ 3) rounds are bounded above by p^r and q^r, respectively.
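Given the abstract's result, the bounds themselves are immediate to evaluate for concrete round-function parameters; a small illustrative helper (the function name is ours, not the paper's):

```python
def characteristic_bounds(p, q, r):
    """Upper bounds p**r and q**r on the probability of any differential
    (resp. linear) characteristic over 2r rounds, r >= 3, where p and q
    bound the round function's maximum differential and
    linear-approximation probabilities."""
    if r < 3:
        raise ValueError("the bound is proved for 2r rounds with r >= 3")
    return p ** r, q ** r
```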

16.
To address the fact that current decision-tree algorithms rarely account for the noisiness of the training set, and that traditional in-memory algorithms struggle with massive data, this paper proposes IP-C4.5, an imprecise-probability C4.5 algorithm on the Hadoop platform. When training the model, IP-C4.5 treats the training set as unreliable and uses an information gain ratio based on imprecise probabilities as the split-attribute selection criterion, reducing the influence of training-set noise on the model. On Hadoop, IP-C4.5 is parallelized with MapReduce by file splitting, strengthening its ability to process massive data. Comparative experiments with C4.5 and the complete credal decision tree (CCDT) algorithm show that IP-C4.5 is more accurate on noisy training data, especially when the noise level exceeds 10%, and that the Hadoop-parallelized IP-C4.5 can handle massive datasets.
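For reference, the classic C4.5 split criterion that IP-C4.5 modifies is the information gain ratio; a self-contained sketch (the imprecise-probability variant would replace these empirical entropy estimates, which is not shown here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a sequence of discrete labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(attr_values, labels):
    """C4.5 split criterion: information gain divided by split information."""
    n = len(labels)
    by_value = {}
    for v, y in zip(attr_values, labels):
        by_value.setdefault(v, []).append(y)
    cond = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    split_info = entropy(attr_values)
    return (entropy(labels) - cond) / split_info if split_info else 0.0
```

C4.5 picks the attribute maximizing this ratio at each node; IP-C4.5 keeps the same skeleton while distrusting the counts behind the entropies.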

17.
This paper proposes an independence-extended model of Bayesian networks. Bayesian networks can represent probabilistic influence relations and conditional independence among variables, but not causal independence. Although the Noisy-OR model represents causal independence among variables well, it is severely limited precisely because causal independence is all it can represent. The proposed independence-extended model remedies the Bayesian network's inability to represent causal independence and broadens the representational scope of both Bayesian networks and the Noisy-OR model, while simplifying the conditional probability tables of Bayesian networks; the new model also better reflects the probabilistic influence relations among variables. Experimental results demonstrate the model's practicality.

18.
To reveal the connection between Vague sets and classical sets, two new decomposition theorems for Vague sets are proposed on the basis of the binary cut sets of Vague sets. The decomposition theorems show that a Vague set can be represented by the family of cut sets decomposed from it, all of which are ordinary sets. Finally, an example illustrates their effectiveness.
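A Vague set assigns each element a truth-membership t and a false-membership f with t(x) ≤ 1 − f(x); under one plausible reading of the abstract's binary cut set, thresholding both components yields an ordinary set. A hedged sketch — the (α, β) form below is our assumption, not taken from the paper:

```python
def vague_cut(vague, alpha, beta):
    """(alpha, beta)-cut of a Vague set given as {element: (t, f)}:
    the ordinary set of elements whose membership interval [t, 1 - f]
    clears both thresholds."""
    return {x for x, (t, f) in vague.items() if t >= alpha and 1 - f >= beta}
```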

19.
A Fuzzy Rough Set Model Based on Cut Sets of Fuzzy Sets
Based on the concept of cut sets of L.A. Zadeh's fuzzy sets, this paper characterizes the upper and lower approximations of an arbitrary fuzzy subset of a universe U, obtaining a rough set model based on cut sets of fuzzy sets, i.e., a fuzzy rough set. This realizes the approximation of an arbitrary fuzzy set over U by fuzzy sets in U, further generalizing Z. Pawlak's rough set model and extending the range of applications of rough sets. Finally, its basic properties and its relations to other rough set models are studied.

20.
In uncertain environments, the complexity of generating cooperation policies for a multi-agent system (MAS) determines whether cooperative tasks can be accomplished. To reduce the complexity of generating MAS cooperation policies with Markov decision models and to reduce the communication required for cooperation, this paper improves the method of generating policy trees from factored MDP models. Exploiting the conditional independence and context-specific independence among agent states in a Bayesian network, the policy tree generated by the SPI algorithm is decomposed and optimized, so that agents in independent states can run distributedly and independently, communicating only when they need to negotiate with other agents. Communication is peer-to-peer: an agent knows not only the content and timing of negotiation but also the goal of the cooperation. Experiments show that with this cooperation policy the MAS can complete cooperative tasks and obtain goal rewards while effectively reducing communication.


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号