Similar Documents
20 similar documents found (search time: 31 ms)
1.
A Survey of Conditional Event Algebra (cited 9 times: 0 self-citations, 9 by others)
Deng Yong, Liu Qi, Shi Wenkang. Chinese Journal of Computers, 2003, 26(6): 650-661
This paper surveys the principles, main properties, and applications of conditional event algebra theory. Conditional event algebra is an emerging discipline for reasoning under uncertainty, probability, and fuzziness. It is the algebraic system obtained by extending the logical operations of Boolean algebra to sets of conditional events (rules), under the constraint that rule probabilities remain compatible with conditional probabilities; its goal is to provide a mathematical foundation for conditional reasoning in intelligent systems. The paper also introduces relational event algebra, a logical system more general than conditional event algebra.

2.
3.
Cross impact analysis (CIA) consists of a set of related methodologies that predict the occurrence probability of a specific event and the conditional probability of a first event given a second event. The conditional probability can be interpreted as the impact of the second event on the first. Most CIA methodologies are qualitative, meaning that the occurrence and conditional probabilities are estimated by human experts. In recent years, a growing number of quantitative methodologies have appeared that use large amounts of data from databases and the internet. Nearly 80% of all data available on the internet is textual, and thus knowledge-structure-based approaches that calculate the conditional probabilities from textual information have been proposed in the literature. In contrast to related methodologies, this work proposes a new quantitative CIA methodology that predicts the conditional probability based on the semantic structure of given textual information. Latent semantic indexing is used to identify the hidden semantic patterns behind an event and to calculate the impact of those patterns on other semantic textual patterns representing a different event. This enables the conditional probabilities to be calculated semantically. A case study shows that this semantic approach can be used to predict the conditional probability of one technology given a different technology.
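The latent-semantic-indexing step can be illustrated with a minimal sketch: build a term-document matrix, project the documents into a truncated-SVD latent space, and compare "event" documents by cosine similarity as a rough proxy for semantic impact. The toy matrix, the terms, and the reading of cosine similarity as an ingredient of a conditional probability are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def lsi_project(td_matrix, k=2):
    """Project documents into a k-dimensional latent semantic space via truncated SVD."""
    U, s, Vt = np.linalg.svd(td_matrix, full_matrices=False)
    # Document coordinates in the latent space: rows of (S_k @ Vt_k).T
    return (np.diag(s[:k]) @ Vt[:k]).T

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy term-document matrix (rows = terms, cols = documents).
# Docs 0 and 1 share vocabulary (same "event" topic); doc 2 mostly does not.
td = np.array([
    [2.0, 1.0, 0.0],   # term "battery"
    [1.0, 2.0, 0.0],   # term "electrode"
    [0.0, 0.0, 3.0],   # term "router"
    [0.0, 1.0, 2.0],   # term "network"
])
docs = lsi_project(td, k=2)
sim_related = cosine(docs[0], docs[1])
sim_unrelated = cosine(docs[0], docs[2])
```

In a full CIA pipeline the latent-space similarities would still have to be normalized into probabilities; this sketch only shows the semantic-pattern comparison step.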

4.
The two-parameter linear failure rate distribution has been used quite successfully to analyze lifetime data. Recently, a new three-parameter distribution, known as the generalized linear failure rate distribution, has been introduced by exponentiating the linear failure rate distribution. The generalized linear failure rate distribution is a very flexible lifetime distribution, and its probability density function can take different shapes. Its hazard function can also be increasing, decreasing, or bathtub shaped. The main aim of this paper is to introduce a bivariate generalized linear failure rate distribution, whose marginals are generalized linear failure rate distributions. It is obtained using the same approach as was adopted to obtain the Marshall-Olkin bivariate exponential distribution. Different properties of this new distribution are established. The bivariate generalized linear failure rate distribution has five parameters, and the maximum likelihood estimators are obtained using the EM algorithm. A data set is analyzed for illustrative purposes. Finally, some generalizations to the multivariate case are proposed.
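The Marshall-Olkin construction the abstract refers to can be sketched for the exponential baseline it names: a common "shock" variable is shared by both coordinates, which couples the marginals. A minimal sampler, with illustrative rate parameters:

```python
import numpy as np

def marshall_olkin_exponential(n, lam1, lam2, lam3, rng=None):
    """Sample (X, Y) from the Marshall-Olkin bivariate exponential:
    X = min(U1, U3), Y = min(U2, U3), with Ui ~ Exp(lam_i) independent.
    The shared shock U3 induces positive dependence (and P(X == Y) > 0)."""
    rng = np.random.default_rng(rng)
    u1 = rng.exponential(1 / lam1, n)
    u2 = rng.exponential(1 / lam2, n)
    u3 = rng.exponential(1 / lam3, n)
    return np.minimum(u1, u3), np.minimum(u2, u3)

x, y = marshall_olkin_exponential(100_000, lam1=1.0, lam2=2.0, lam3=0.5, rng=0)
# Marginals: X ~ Exp(lam1 + lam3), Y ~ Exp(lam2 + lam3)
```

The bivariate generalized linear failure rate distribution of the paper replaces the exponential shocks with generalized linear failure rate variables; the min-coupling idea is the same.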

5.
The internet is a valuable source of information where many ideas dealing with different topics can be found. A small number of these ideas might be able to solve an existing problem. However, it is time-consuming to identify them within the large amount of textual information on the internet. This paper introduces a new web mining approach that enables the automated identification, from internet sources, of new technological ideas that can solve a given problem. It adapts and combines several existing approaches from the literature: approaches that extract new technological ideas from a user-given text, approaches that investigate the different characteristics of ideas in different technical domains, and multi-language web mining approaches. In contrast to previous work, the proposed approach enables the identification of problem-solving ideas on the internet while considering domain dependencies and language aspects. In a case study, new ideas are identified to solve existing technological problems that arose in research and development (R&D) projects. This supports the process of research planning and technology development.

6.
Probability distributions have been used to model random phenomena in various areas of life. The generalization of probability distributions has been an area of interest for several authors in recent years. Many situations arise where two random phenomena must be modeled jointly; in such cases bivariate distributions are needed. Developing bivariate distributions requires certain conditions to be met, and comparatively little work has been done in this field. This paper deals with a bivariate beta-inverse Weibull distribution. The marginal and conditional distributions of the proposed distribution are obtained, along with expansions for the joint and conditional density functions. Its properties, including product, marginal and conditional moments, the joint moment generating function and the joint hazard rate function, are studied. A numerical study of the dependence function is carried out to examine the effect of the various parameters on the dependence between the variables. The parameters of the proposed bivariate distribution are estimated by the maximum likelihood method. A simulation study and a real data application of the distribution are presented.

7.
Semantic Image Annotation Based on Mutual-Information-Constrained Clustering (cited 2 times: 0 self-citations, 2 by others)
This paper proposes an image annotation algorithm based on mutual-information-constrained clustering. The information bottleneck algorithm is improved with semantic constraints and used to cluster segmented image regions, establishing the relationship between semantic concepts and region clusters. For unannotated images, a method is proposed for computing the conditional probability of each semantic concept, taking into account both the prior knowledge from the training images and the low-level features of the regions; each region is then automatically annotated with the semantic keyword of maximum conditional probability. Experiments on a library of 500 images show that the method is more effective than alternative approaches.

8.
Using probabilistic analysis, this paper studies the path length of point-to-point fault-tolerant routing algorithms on hypercube networks under probabilistically distributed node failures, and derives the expected path length of the algorithm. The analysis shows that when the node failure probability is p ≤ 1/10, the expected length of the path from the source U to the k-dimensional subcube containing the destination V is at most 1.11h, much smaller than the earlier bound of 2h. An improved algorithm is proposed, and it is proved that the expected length of the path it constructs is at most 1.11h - 0.11k + 2, a substantial improvement over the earlier bound of 2h + k + 2, where h is the Hamming distance between U and V.
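The h in the bounds above is the Hamming distance between node labels. A minimal sketch of greedy point-to-point routing in a hypercube, where the route length equals h when no detour is needed; the fault handling here (give up when blocked) is a placeholder, not the paper's improved algorithm:

```python
def hamming(u, v):
    """Hamming distance between two hypercube node labels (bit strings as ints)."""
    return bin(u ^ v).count("1")

def greedy_route(u, v, dim, faulty=frozenset()):
    """Greedy point-to-point route in a dim-cube: at each step flip one
    differing bit, skipping faulty neighbors. Returns the node sequence,
    or None if blocked (a real fault-tolerant router would detour)."""
    path = [u]
    while u != v:
        for i in range(dim):
            w = u ^ (1 << i)
            # Move only along dimensions where u and v still differ.
            if (u >> i) & 1 != (v >> i) & 1 and w not in faulty:
                u = w
                path.append(u)
                break
        else:
            return None  # all useful neighbors are faulty
    return path

p = greedy_route(0b0000, 0b1011, dim=4)
# In the fault-free cube the route length equals the Hamming distance h.
```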

9.
Inferring latent causal relations from observable variables is one of the hot research topics in artificial intelligence. Traditional independence-test-based methods determine only a Markov equivalence class, by detecting v-structures, rather than the final causal relations, while additive-noise-model algorithms scale only to low-dimensional causal network structures. This paper therefore proposes a divide-and-conquer method for causal direction inference that hybridizes additive noise models with conditional independence tests. First, an n-dimensional causal network is decomposed into n induced subnetworks, each classified into one of three basic structures (degree-one structures, triangle-free structures, and structures containing triangles), whose validity is proved theoretically. Second, direction inference combining the additive-noise-model algorithm with conditional independence tests is performed on each induced subnetwork. Finally, all subnetworks are merged to build the complete causal network. Experiments show that the method is more effective than traditional causal inference methods.
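The conditional-independence-test half of such a hybrid method can be illustrated with the standard partial-correlation test for the Gaussian linear case (an illustrative stand-in; the paper's tests and additive-noise-model machinery are more general):

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given z: correlate the residuals of
    x ~ z and y ~ z (a standard Gaussian conditional-independence test)."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
z = rng.normal(size=5000)
x = 2 * z + rng.normal(size=5000)   # z -> x
y = -z + rng.normal(size=5000)      # z -> y
# x and y are marginally correlated but independent given their common cause z.
marginal = np.corrcoef(x, y)[0, 1]
conditional = partial_corr(x, y, z)
```

Detecting that the x-y dependence vanishes given z is exactly the signal used to rule out a direct x-y edge (a v-structure check) before the additive-noise step orients the remaining edges.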

10.
Deng Song, Wan Changxuan. Journal of Software, 2017, 28(12): 3241-3256
In deep web data integration, users want to obtain high-quality results while querying only a few data sources, which makes data source selection a core technique. To support integrated retrieval based on both relevance and diversity, this paper proposes a deep web data source selection method suited to summaries built from small-scale sampled documents. The method first measures the relevance of each data source to the user query and then considers the diversity of the data the candidate sources provide. To improve the accuracy of relevance judgments, a hierarchical-topic data source summary is constructed, into which a topic-content relevance bias probability model is introduced; a method for building the bias probability model from human feedback and a probability-based relevance measure for data sources are also given. To improve the diversity of the selection results, directed diversity links are added to the hierarchical-topic summary, and an evaluation method for data source diversity is given. Finally, relevance-and-diversity-based data source selection is cast as a combinatorial optimization problem, and a selection strategy based on an optimization function is proposed. Experimental results show that the method achieves high selection accuracy when data sources are selected from a small number of sampled documents.

11.
A conventional neural network approach to regression problems approximates the conditional mean of the output vector. For mappings which are multi-valued this approach breaks down, since the average of two solutions is not necessarily a valid solution. In this article mixture density networks, a principled method for modelling conditional probability density functions, are applied to retrieving Cartesian wind vector components from satellite scatterometer data. A hybrid mixture density network is implemented to incorporate prior knowledge of the predominantly bimodal function branches. An advantage of a fully probabilistic model is that more sophisticated and principled methods can be used to resolve ambiguities.
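The failure mode that motivates mixture density networks (the conditional mean of a multi-valued mapping is not a valid solution) can be shown numerically with a toy two-branch target; modelling the branches separately, as a conditional mixture does implicitly, recovers both valid answers:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two-valued inverse problem: for each x the target lies on one of two
# branches, +1 or -1, plus a little noise.
x = rng.uniform(-1, 1, 10_000)
branch = rng.integers(0, 2, 10_000)
y = np.where(branch == 1, 1.0, -1.0) + 0.05 * rng.normal(size=10_000)

# A least-squares fit approximates E[y|x] ~ 0: close to NEITHER branch.
cond_mean = y.mean()

# A two-component conditional mixture (here: fitting each branch separately)
# recovers both valid solutions -- the idea behind mixture density networks.
mu_pos = y[y > 0].mean()
mu_neg = y[y <= 0].mean()
```

In the scatterometer application the two branches correspond to the predominantly bimodal wind-direction ambiguity the abstract mentions.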

12.
A Coupled Active Contour Model and Its Application to Image Segmentation (cited 1 time: 0 self-citations, 1 by others)
This paper improves the external energy term of the active contour model, proposes a new adaptive segmentation model for gray-level images, and then generalizes it to a model for vector-valued images. The new model couples the fast edge integration method with a simplified statistical method, fully exploiting prior information about image regions and edges; different segmentation models can be constructed from different conditional probability density functions. A concrete segmentation model based on a Gaussian probability density function is built, and the efficient, unconditionally stable AOS scheme is applied in segmentation experiments on gray-level and vector-valued (RGB) images. Compared with the classical fast edge integration method, the proposed method is more accurate, robust to noise, and effective in practice.

13.
Information Sciences, 1987, 41(2): 139-169
The generalized theory of marginal certainty and information measures, as introduced by Van der Lubbe et al. [31], is extended to the conditional and joint cases. Bivariate information measures are introduced with the help of the conditional and joint certainty measures, analogously to the manner in which marginal information measures are derived from marginal certainty measures. This approach leads to new conditional and joint information measures for the well-known marginal measures. Furthermore, this approach unifies the already known bivariate measures with the marginal ones in one generalized probabilistic theory of information measures.
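For the familiar Shannon special case of such a theory, the marginal, conditional, and joint measures are tied together by the chain rule H(X, Y) = H(X) + H(Y|X). A small numerical check (Shannon entropy is used here as a concrete instance, not the paper's generalized family of measures):

```python
import math

def H(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution P(X, Y) as a 2x2 table.
joint = [[0.4, 0.1],
         [0.2, 0.3]]
px = [sum(row) for row in joint]                 # marginal P(X)
h_joint = H([p for row in joint for p in row])   # H(X, Y)
# Conditional entropy H(Y|X) = sum_x P(x) * H(Y | X = x)
h_cond = sum(px[i] * H([p / px[i] for p in joint[i]]) for i in range(2))
# Chain rule: H(X, Y) = H(X) + H(Y|X)
```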

14.
Blind source separation (BSS) has attracted much attention in the signal processing community due to its 'blind' property and wide applications. However, some open problems remain, such as underdetermined BSS and noisy BSS. In this paper, we propose a Bayesian approach to improve the separation performance of instantaneous mixtures with non-stationary sources by taking into account the internal organization of the non-stationary sources. A Gaussian mixture model (GMM) is used to model the distribution of the source signals, and a continuous density hidden Markov model (CDHMM) is derived to track the non-stationarity inside the source signals. Source signals can switch between several states, so that the separation performance can be significantly improved. An expectation-maximization (EM) algorithm is derived to estimate the mixing coefficients, the CDHMM parameters and the noise covariance. The source signals are recovered via a maximum a posteriori (MAP) approach. To ensure the convergence of the proposed algorithm, proper conjugate prior densities are assigned to the estimated coefficients to incorporate prior information. The initialization scheme for the estimates is also discussed. Systematic simulations illustrate the performance of the proposed algorithm. Simulation results show that the proposed algorithm has more robust separation performance, in terms of similarity score, in noisy environments in comparison with classical BSS algorithms in the determined mixture case. Additionally, since the mixing matrix and the sources are estimated jointly, the proposed EM algorithm also works well in the underdetermined case. Furthermore, the proposed algorithm converges quickly with proper initialization.

15.
Deep web queries in many web applications, such as multimedia search and group-buying information aggregation, must query a large number of data sources to obtain enough data, and the success of such applications depends on the efficiency and effectiveness of querying multiple sources. Existing research focuses on the relevance between the query and the data sources while ignoring the overlap among sources, so identical results are retrieved repeatedly from different sources, increasing both query cost and source workload. To improve the efficiency of deep web queries, this paper proposes a tuple-level stratified sampling method to estimate and exploit query statistics over the data sources and to select highly relevant, low-overlap sources. The method has two phases: offline, the data sources are stratified and sampled at the tuple level to obtain sample data; online, the coverage and overlap of the query on each source are estimated iteratively from the samples, and a heuristic strategy is used to efficiently identify low-overlap sources. Experimental results show that the method significantly improves the precision and efficiency of overlap-aware data source selection.
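The online estimation step can be sketched in miniature: given tuple-level samples from one source, the fraction also found in another source is a plug-in estimate of their overlap. The sources, sample size, and plug-in estimator below are illustrative assumptions; the paper's iterative coverage/overlap estimators are more refined:

```python
import random

def sample_overlap(a, b, n_sample, rng):
    """Estimate the fraction of source a's tuples that source b also holds,
    by checking a uniform tuple-level sample of a against b."""
    sample = rng.sample(sorted(a), n_sample)
    return sum(t in b for t in sample) / n_sample

rng = random.Random(0)
source_a = set(range(0, 6000))
source_b = set(range(3000, 9000))      # true overlap with a: 3000/6000 = 0.5
source_c = set(range(20000, 26000))    # disjoint from a

est_ab = sample_overlap(source_a, source_b, 500, rng)
est_ac = sample_overlap(source_a, source_c, 500, rng)
```

A selection heuristic would then prefer source_c over source_b as a companion to source_a, since it contributes mostly new tuples.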

16.
In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
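The sampling idea can be illustrated on the simplest case, a one-dimensional location parameter under Gaussian noise, where the Fisher information is n/sigma^2 and the Rao arc length of the parameter interval fixes the number of samples for a given resolution delta (an illustrative reduction, not the paper's line-detection formulas):

```python
import math

def fisher_info_gaussian(n, sigma):
    """Fisher information for the mean of n i.i.d. N(theta, sigma^2) samples."""
    return n / sigma ** 2

def num_samples(theta_lo, theta_hi, n, sigma, delta):
    """Number of grid points needed so that neighboring structures are at most
    delta apart in the Rao metric ds = sqrt(I(theta)) d(theta); for a location
    parameter I is constant, so the arc length is sqrt(I) * range."""
    arc_length = math.sqrt(fisher_info_gaussian(n, sigma)) * (theta_hi - theta_lo)
    return math.ceil(arc_length / delta)

# Noisier data (larger sigma) shrinks the Rao arc length: fewer statistically
# distinguishable structures, hence fewer candidate points to test.
coarse = num_samples(0.0, 1.0, n=100, sigma=2.0, delta=0.5)
fine = num_samples(0.0, 1.0, n=100, sigma=0.5, delta=0.5)
```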

17.
Bivariate distributions are useful for the simultaneous modeling of two random variables, providing a way to model their joint behavior. Bivariate families of distributions have not been widely explored, and in this article a new family of bivariate distributions is proposed. The new family extends the univariate transmuted family of distributions and is helpful in modeling complex joint phenomena. Statistical properties of the new family are explored, including marginal and conditional distributions, conditional moments, product and ratio moments, and the bivariate reliability and bivariate hazard rate functions. Maximum likelihood estimation (MLE) of the family's parameters is also carried out. The proposed bivariate family is studied with Weibull baseline distributions, giving rise to the bivariate transmuted Weibull (BTW) distribution, which is explored in detail. Its statistical properties are studied, including the marginal and conditional distributions and the product, ratio and conditional moments. The hazard rate function of the BTW distribution is obtained, and its parameter estimation is also done. Finally, a real data application of the BTW distribution is given; the proposed BTW distribution is observed to be a suitable fit for the data used.
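The univariate transmuted construction underlying such a family applies the quadratic rank transmutation map G(x) = (1 + t)F(x) - t F(x)^2, |t| <= 1, to a baseline CDF F. A sketch with a Weibull baseline (parameter values are illustrative):

```python
import math

def weibull_cdf(x, k, lam):
    """Baseline Weibull CDF with shape k and scale lam."""
    return 1.0 - math.exp(-((x / lam) ** k))

def transmuted_cdf(x, k, lam, t):
    """Quadratic rank transmutation of a baseline CDF F:
    G(x) = (1 + t) * F(x) - t * F(x)^2, with |t| <= 1."""
    f = weibull_cdf(x, k, lam)
    return (1 + t) * f - t * f * f

# G is a valid CDF: 0 at the origin, 1 in the limit, and nondecreasing.
xs = [i / 10 for i in range(0, 51)]
vals = [transmuted_cdf(x, k=2.0, lam=1.5, t=-0.6) for x in xs]
```

The bivariate family of the paper couples two such transmuted marginals; the sketch only shows the univariate building block.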

18.
This paper presents an algorithm for estimating the parameters of multicomponent chirp signals. The estimator is based on the cubic phase function (CPF), which efficiently estimates the parameters of monocomponent first-, second-, and third-order polynomial-phase signals. When the CPF is applied to multicomponent chirp signals, spurious peaks arise and hence an identifiability problem exists. A new approach based on the product cubic phase function (PCPF) is proposed to remove the identifiability problem. The occurrence probability of spurious peaks, and the effect of noise on the estimator, are statistically studied. Simulation examples are provided to validate the theoretical analysis.
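For a monocomponent chirp, the cubic phase function evaluated at the time origin peaks at twice the chirp rate, which is the property the PCPF builds on. A minimal numerical sketch (the signal parameters and search grid are illustrative):

```python
import numpy as np

# Monocomponent chirp s(n) = exp(j*(a1*n + a2*n^2)). The cubic phase function
# at the time origin, CP(0, W) = sum_m s(m) s(-m) exp(-j*W*m^2), peaks at
# W = 2*a2 (twice the chirp rate), since s(m)*s(-m) = exp(j*2*a2*m^2).
a1, a2 = 0.3, 0.01
n = np.arange(-200, 201)
s = np.exp(1j * (a1 * n + a2 * n ** 2))

m = np.arange(1, 201)                       # lag index
prod = s[200 + m] * s[200 - m]              # s(m) * s(-m)
omegas = np.linspace(0, 0.1, 2001)
cpf = np.abs(prod @ np.exp(-1j * np.outer(m ** 2, omegas)))
est = omegas[int(np.argmax(cpf))]           # estimate of 2*a2
```

With several components, cross terms produce the spurious peaks the abstract describes; multiplying CPFs computed at different time instants (the PCPF) suppresses them.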

19.
To handle the conflicting evidence that arises in traditional criminal cases, this paper proposes a conflicting-evidence fusion method based on evidence credibility and grey relational analysis. The grey relational degree, viewed from a similarity perspective, is used as the link between pieces of evidence, and a combination rule that modifies the evidence sources is adopted to measure the closeness between the pieces of evidence within the sources. Taking evidence credibility into account, a new closeness-based weighting method is proposed: the basic probability assignment functions of the evidence to be fused are weighted-averaged with the credibility of each piece of evidence as its weight, which makes the fusion converge faster and improves the fusion result. Finally, burglary cases from a city in Anhui Province are taken as an example, and the information-fusion-based associated evidence reasoning method is applied to the conflicting evidence in the cases, verifying the rationality and effectiveness of the proposed method.
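The credibility-weighted averaging of basic probability assignments (BPAs) before combination can be sketched as follows. The hypotheses, masses, and credibility weights are invented for illustration, and Dempster's rule is shown only for singleton hypotheses:

```python
def weighted_average_bpa(bpas, weights):
    """Credibility-weighted average of basic probability assignments (dicts
    mapping hypothesis -> mass); weights are normalized to sum to 1."""
    total = sum(weights)
    avg = {}
    for bpa, w in zip(bpas, weights):
        for h, mass in bpa.items():
            avg[h] = avg.get(h, 0.0) + (w / total) * mass
    return avg

def dempster_combine(m1, m2):
    """Dempster's rule for masses over singleton hypotheses: multiply agreeing
    masses and renormalize by 1 - K, where K is the conflict mass."""
    combined = {}
    conflict = 0.0
    for h1, p1 in m1.items():
        for h2, p2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + p1 * p2
            else:
                conflict += p1 * p2
    return {h: p / (1.0 - conflict) for h, p in combined.items()}

# Two credible pieces of evidence vs. one conflicting, low-credibility piece.
e1 = {"suspect_A": 0.8, "suspect_B": 0.2}
e2 = {"suspect_A": 0.7, "suspect_B": 0.3}
e3 = {"suspect_A": 0.1, "suspect_B": 0.9}    # conflicting evidence
avg = weighted_average_bpa([e1, e2, e3], weights=[0.45, 0.45, 0.1])
fused = dempster_combine(avg, avg)
```

Averaging first, with credibility as weight, keeps the low-credibility conflicting evidence from dominating the combination, which is the intent of the method described above.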

20.
Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible, notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman’s construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information added order and mislocates the natural zero of the scale so some transformation of this scale is needed but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor’s, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.
