Similar Documents
A total of 19 similar documents were retrieved (search time: 187 ms).
1.
概率图模型将图论和概率论相结合,为多个变量之间复杂依赖关系的表示提供了统一的框架,在计算机视觉、自然语言处理和计算生物学等领域有着广泛的应用.概率推理(包括计算边缘概率和计算最大概率状态等问题)是概率图模型研究及应用的核心问题.本文主要介绍概率图模型近似推理方法中变分推理的最新研究成果.在变分近似推理的框架下,系统地归纳了概率图模型推理问题的基本研究思路,综述了目前主要的近似推理方法,并分析了近似算法的单调性、收敛性和全局性等性质.最后,对概率图模型近似推理方法的研究方向和应用前景作了展望.  相似文献   

2.
Research Progress on Inference Methods for Probabilistic Graphical Models
In recent years, probabilistic graphical models have become a research focus in uncertain reasoning, with broad application prospects in artificial intelligence, machine learning, and computer vision. This paper systematically surveys inference algorithms for probabilistic graphical models according to network structure and query type. It first discusses exact and approximate inference algorithms for probability queries in Bayesian networks and Markov networks, covering the variable elimination (VE) algorithm, recursive conditioning, and the clique tree (junction tree) algorithm among exact methods, and variational and sampling-based algorithms among approximate methods, and it presents the common inference algorithms for MAP queries. It then describes inference algorithms for hybrid networks in both the purely continuous and the mixed discrete-continuous case, and analyzes exact, approximate, and hybrid-case inference for temporal networks. Finally, it points out future research directions for inference in probabilistic graphical models.
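As background for the exact-inference algorithms listed above, here is a minimal variable-elimination sketch over a hypothetical three-node chain A -> B -> C; the factor representation and all CPT values are made up for illustration and are not taken from the surveyed papers.

```python
from itertools import product

# A factor maps an assignment (tuple of 0/1 values for its variables) to a number.

def multiply(f1, vars1, f2, vars2):
    """Pointwise product of two factors; returns (factor, variable list)."""
    out_vars = list(dict.fromkeys(vars1 + vars2))
    out = {}
    for assign in product([0, 1], repeat=len(out_vars)):
        a = dict(zip(out_vars, assign))
        out[assign] = f1[tuple(a[v] for v in vars1)] * f2[tuple(a[v] for v in vars2)]
    return out, out_vars

def sum_out(f, variables, var):
    """Marginalize `var` out of factor f."""
    keep = [v for v in variables if v != var]
    out = {}
    for assign, val in f.items():
        a = dict(zip(variables, assign))
        key = tuple(a[v] for v in keep)
        out[key] = out.get(key, 0.0) + val
    return out, keep

# Toy chain A -> B -> C with hypothetical CPTs: P(A), P(B|A), P(C|B).
pA  = ({(0,): 0.6, (1,): 0.4}, ['A'])
pBA = ({(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}, ['A', 'B'])
pCB = ({(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}, ['B', 'C'])

# Marginal P(C): eliminate A, then B.
f, vs = multiply(*pA, *pBA)
f, vs = sum_out(f, vs, 'A')
f, vs = multiply(f, vs, *pCB)
f, vs = sum_out(f, vs, 'B')
print({c[0]: round(p, 4) for c, p in f.items()})   # P(C=0), P(C=1)
```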

3.
A Gaussian Markov random field (GMRF) is a probabilistic model that has the Markov property and follows a multivariate Gaussian distribution. The mean-field variational method is the most basic variational approximate inference method for graphical models. Working in the exponential-family variational inference framework, this paper analyzes the convergence and accuracy of mean-field variational inference for GMRFs and proves that mean-field variational inference converges with respect to the first-order mean parameters. It further derives, for the case where the model variables are not completely independent, analytic expressions for the optimal lower bound on the log partition function and for the iteration error. Finally, numerical simulation experiments confirm the theoretical analysis.
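A minimal sketch of fully factorized mean-field updates for a GMRF in information form, assuming a hypothetical precision matrix Q and potential vector b; it is not the analysis from the paper above, but it illustrates the well-known behavior that the mean-field means agree with the exact means while the factorized variances 1/Q_ii underestimate the true marginal variances.

```python
import numpy as np

# GMRF in information form: p(x) ∝ exp(-0.5 x^T Q x + b^T x),
# so the exact mean is Q^{-1} b and the exact covariance is Q^{-1}.
Q = np.array([[2.0, 0.6, 0.0],
              [0.6, 2.5, 0.8],
              [0.0, 0.8, 1.8]])      # hypothetical positive-definite precision
b = np.array([1.0, -0.5, 0.3])

def mean_field_gmrf(Q, b, iters=100):
    """Fully factorized mean field: q(x) = prod_i N(x_i; m_i, 1/Q_ii)."""
    n = len(b)
    m = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # Coordinate update: condition on the current means of the others.
            m[i] = (b[i] - Q[i] @ m + Q[i, i] * m[i]) / Q[i, i]
    return m, 1.0 / np.diag(Q)

m, v = mean_field_gmrf(Q, b)
print("mean-field means:", m.round(4))
print("exact means     :", np.linalg.solve(Q, b).round(4))            # identical
print("mean-field vars :", v.round(4))
print("exact variances :", np.diag(np.linalg.inv(Q)).round(4))        # MF underestimates
```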

4.
Bayesian inference is one of the central problems of statistics; its aim is to update the prior knowledge encoded in a probabilistic model using observed data. For posterior distributions that cannot be observed or are hard to compute directly, as is common in practice, Bayesian inference provides approximations; it is an important methodology built on Bayes' theorem. Many machine learning problems, such as classification, topic modeling, and data mining, involve simulating and approximating the true distribution of various kinds of feature data, so Bayesian inference has an important and distinctive research value in today's machine learning. With the arrival of the big-data era, researchers collect massive experimental data from real-world sources, and the target distributions to be simulated and computed become very complex; performing approximate inference over such complex data with accurate results and acceptable running time has become the central difficulty of modern Bayesian inference. Addressing inference under complex distribution models, this paper systematically introduces and surveys the two main families of methods for Bayesian inference developed in recent years: variational inference and sampling methods. It first gives the problem definition and theoretical background of variational inference, describes the coordinate-ascent-based variational inference algorithm in detail, and presents existing applications and future prospects of this approach. It then reviews existing research on sampling methods at home and abroad, gives the concrete algorithmic procedures of the main sampling methods, and summarizes and compares their characteristics, strengths, and weaknesses. Finally, it introduces parallel tempering, outlines its basic theory and methods, and discusses combining parallel tempering with sampling...
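The coordinate-ascent variational inference mentioned above can be illustrated on the textbook conjugate example of a univariate Gaussian with unknown mean and precision and a factorized posterior q(mu)q(tau); the synthetic data, prior hyperparameters, and names below are invented for the sketch, which follows the standard CAVI update equations rather than any specific method from the surveyed work.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=200)      # synthetic data (assumed)
N, xbar = len(x), x.mean()

# Normal-Gamma prior: mu | tau ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0

# Factorized posterior q(mu, tau) = N(mu; mu_N, 1/lam_N) * Gamma(tau; a_N, b_N)
mu_N, lam_N, a_N, b_N = xbar, 1.0, a0, b0
for _ in range(50):                                # coordinate ascent
    E_tau = a_N / b_N
    # Update q(mu) given E[tau]
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # Update q(tau) given the first two moments of mu under q(mu)
    a_N = a0 + (N + 1) / 2
    sq = np.sum((x - mu_N) ** 2) + N / lam_N       # E_q[ sum_i (x_i - mu)^2 ]
    b_N = b0 + 0.5 * (sq + lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N))

print("posterior mean of mu :", round(mu_N, 3))
print("posterior mean of tau:", round(a_N / b_N, 3),
      "(data-generating precision 1/1.5^2 =", round(1 / 1.5 ** 2, 3), ")")
```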

5.
A Gaussian Markov random field is a probabilistic model that has the Markov property and follows a multivariate Gaussian distribution. The Gaussian mean field is a basic variational inference method for Gaussian Markov random field models: it introduces a free distribution based on a decomposition into variable clusters and, through this variational transformation, computes a lower bound on the objective function. Choosing the structure of the free distribution is an important step in variational inference and is the key to trading off variational accuracy against computational complexity. This paper proposes a new structure-selection criterion and designs a structure-selection algorithm. First, the notions of coupling degree and quasi-coupling degree are defined on Gaussian Markov random fields to measure the dependence between variable clusters, a coupling-accuracy theorem for the Gaussian mean field is proved, and a quasi-coupling-degree criterion for structure selection is derived. Then, combining the quasi-coupling-degree criterion with a variable-cluster normalization technique, a structure-selection algorithm for the Gaussian mean field is designed. Comparative experiments verify the effectiveness of the algorithm.

6.
Multi-agent dynamic influence diagrams (MADIDs) are well suited to modeling multi-agent problems in dynamic environments; the structural relations among agents are represented in a locally factored probabilistic form. A major difficulty of inference in probabilistic graphical models is balancing the accuracy and the complexity of approximate inference: approximate inference methods improve computational efficiency, but at the cost of some loss of accuracy. BK (Boyen-Koller) and particle filtering (PF) are two important approximate inference algorithms for dynamic probabilistic models; BK is computationally efficient but introduces considerable error, while PF can approximate arbitrary distributions but suffers from the high dimensionality of the computation. Combining the advantages of BK and PF, this paper proposes a hybrid approximate inference algorithm for MADIDs. Exploiting the decomposability of probabilistic graphical models, a MADID is decomposed to generate a prototype junction tree for inference; the hybrid algorithm runs PF on cliques of small complexity to obtain locally optimal estimates and runs BK inference on the remaining cliques, and split cliques are introduced to reduce the inference error. Simulation experiments show that the hybrid algorithm is an effective inference method for MADIDs; compared with BK and PF, it significantly improves inference accuracy and achieves a balance between accuracy and time complexity.
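The PF half of the hybrid scheme can be illustrated with a generic bootstrap particle filter on a hypothetical one-dimensional state-space model; the model, noise levels, and particle count below are invented for the sketch, and this is not the paper's hybrid BK-PF algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D state-space model (all numbers invented):
#   x_t = 0.9 x_{t-1} + process noise,   y_t = x_t + 0.2 x_t^3 + observation noise
T, N = 60, 1000
q_std, r_std = 0.5, 0.8
obs = lambda x: x + 0.2 * x ** 3

x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + q_std * rng.normal()
    y[t] = obs(x_true[t]) + r_std * rng.normal()

# Bootstrap particle filter: propagate particles through the transition model,
# weight them by the observation likelihood, estimate, then resample.
particles = rng.normal(0.0, 1.0, size=N)
est = np.zeros(T)
for t in range(1, T):
    particles = 0.9 * particles + q_std * rng.normal(size=N)        # propagate
    logw = -0.5 * ((y[t] - obs(particles)) / r_std) ** 2             # log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                                   # posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]                # resample

print("filter RMSE:", round(float(np.sqrt(np.mean((est - x_true) ** 2))), 3))
```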

7.
Uncertain reasoning is a major research topic in artificial intelligence, and If-then rules are the most common form of knowledge representation in the field. Since practical problems usually involve uncertainty, this paper proposes a certainty-factor rule-base reasoning method based on evidential reasoning. It first extends If-then rules with a certainty-factor structure and a certainty-factor rule-base knowledge representation, which can make effective use of various types of uncertain information and fully accounts for the uncertainty of premises, conclusions, and the rules themselves. It then develops the evidential-reasoning-based inference method for certainty-factor rule bases: known facts are matched against rule premises to infer conclusions and to obtain the premise certainty factors given the facts, and the certainty factor of the conclusion is then computed with the evidential reasoning algorithm. Finally, an application of the method to classification problems on UCI data sets illustrates its feasibility and efficiency.
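For background on the certainty-factor formalism that the method builds on, here is a minimal sketch of the classical MYCIN/EMYCIN-style certainty-factor propagation and parallel combination; the facts, rules, and numbers are invented, and this is the classical scheme rather than the paper's evidential-reasoning method.

```python
def apply_rule(cf_premise, cf_rule):
    """CF of a conclusion from one rule: attenuate the rule's CF by the premise CF."""
    return cf_rule * max(0.0, cf_premise)

def combine(cf1, cf2):
    """Classical parallel combination of two CFs for the same conclusion."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Hypothetical facts and rules; the premise CF of a conjunction is the minimum.
facts = {"fever": 0.8, "cough": 0.6}
rule1 = (["fever", "cough"], "flu", 0.7)   # if fever and cough then flu (CF 0.7)
rule2 = (["fever"], "flu", 0.4)            # if fever then flu (CF 0.4)

cf = 0.0
for premises, _, cf_rule in (rule1, rule2):
    cf_premise = min(facts[p] for p in premises)
    cf = combine(cf, apply_rule(cf_premise, cf_rule))
print("CF(flu) =", round(cf, 3))           # 0.42 and 0.32 combine to about 0.606
```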

8.
恽鹏  吴盘龙  李星秀  何山 《自动化学报》2022,48(10):2486-2495
For target tracking in clutter, a variational Bayesian probabilistic data association algorithm (VB-PDA) is proposed. The algorithm first treats the association event as a random variable and models it with a multinomial distribution; it then derives the posterior density of the association event from the joint probability density of the data set, the target state, and the association event; finally, the posterior of the association event is brought into the variational Bayesian framework to obtain an approximate posterior density of the state. Compared with the probabilistic data association algorithm, VB-PDA improves real-time performance while obtaining a more accurate approximate state posterior under the weighted Kullback-Leibler (KL) average criterion. Simulation experiments verify the effectiveness of the proposed algorithm.

9.
陈亚瑞 《计算机科学》2013,40(2):253-256,288
The main task of probabilistic inference in graphical models is to compute the partition function, marginal distributions of variables, and conditional distributions by summing the joint distribution over variables. The computational complexity of inference and of approximate inference in graphical models is an important theoretical question and is the basis for designing inference and approximate inference algorithms. This paper studies the computational complexity of probabilistic inference in Ising graphical models, covering both intractability and inapproximability. Specifically, by constructing a polynomial-time counting reduction from the #2-SAT problem to probabilistic inference in Ising graphical models, it proves that computing the partition function, variable marginals, and conditional distributions in general Ising graphical models is #P-hard, and that approximate probabilistic inference in Ising graphical models is NP-hard; that is, probabilistic inference in general Ising graphical models is both intractable and hard to approximate.
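To make the scaling behind these hardness results concrete, the following sketch computes the partition function and a single-site marginal of a small Ising model by enumerating all 2^n spin configurations; the couplings and fields are random and hypothetical, and the point is only that exact computation doubles with every added spin.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 12                                    # already 4096 configurations
J = np.triu(rng.normal(scale=0.3, size=(n, n)), 1)   # each pair counted once
h = rng.normal(scale=0.2, size=n)

def log_score(s):
    """Unnormalized log-probability of spin configuration s in {-1,+1}^n."""
    return s @ J @ s + h @ s

Z = 0.0
p1_unnorm = 0.0                            # unnormalized P(s_0 = +1)
for bits in product([-1, 1], repeat=n):
    s = np.array(bits)
    w = np.exp(log_score(s))
    Z += w
    if s[0] == 1:
        p1_unnorm += w

print("log partition function:", round(float(np.log(Z)), 4))
print("P(s_0 = +1)           :", round(float(p1_unnorm / Z), 4))
# Cost is O(2^n): one more spin doubles the work, which is why the #P-hardness
# result matters for general Ising models.
```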

10.
Deriving high-level context information that is more useful to people from various kinds of low-level context information, that is, context reasoning, is a current research focus. To address this problem, this paper uses description logic to study ontology-based context reasoning methods. It first briefly introduces an ontology-based context model that adds modeling of context properties, and then studies three kinds of reasoning: ontology-based reasoning, rule-based reasoning, and inconsistency checking, implemented through the inference interfaces of the Jena framework. The resulting reasoning functionality is comprehensive and general, largely meeting the context-reasoning needs of pervasive computing systems. Finally, the usability of the reasoning approach is discussed.

11.
An Introduction to Variational Methods for Graphical Models
This paper presents a tutorial introduction to the use of variational methods for inference and learning in graphical models (Bayesian networks and Markov random fields). We present a number of examples of graphical models, including the QMR-DT database, the sigmoid belief network, the Boltzmann machine, and several variants of hidden Markov models, in which it is infeasible to run exact inference algorithms. We then introduce variational methods, which exploit laws of large numbers to transform the original graphical model into a simplified graphical model in which inference is efficient. Inference in the simplified model provides bounds on probabilities of interest in the original model. We describe a general framework for generating variational transformations based on convex duality. Finally we return to the examples and demonstrate how variational algorithms can be formulated in each case.
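A small numerical illustration of the idea described above: a naive mean-field approximation of a Boltzmann machine (an Ising model with ±1 units) yields a tractable factorized model whose free energy lower-bounds the exact log partition function, here checked against brute-force enumeration. The couplings, fields, and sizes are made up, and the sketch is not taken from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 10
J = np.triu(rng.normal(scale=0.2, size=(n, n)), 1)   # pairwise couplings (i < j)
h = rng.normal(scale=0.3, size=n)                    # local fields
W = J + J.T

def log_score(s):
    return s @ J @ s + h @ s            # unnormalized log-probability

# Exact log partition function by enumeration (feasible only for tiny n).
logZ = np.log(sum(np.exp(log_score(np.array(b))) for b in product([-1, 1], repeat=n)))

# Naive mean field: q(s) = prod_i q_i(s_i), parameterized by m_i = E_q[s_i].
m = np.zeros(n)
for _ in range(200):
    for i in range(n):
        m[i] = np.tanh(h[i] + W[i] @ m)              # coordinate update

p = (1 + m) / 2                                       # q_i(s_i = +1)
entropy = -np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
elbo = m @ J @ m + h @ m + entropy                    # mean-field free-energy bound

print("exact log Z           :", round(float(logZ), 4))
print("mean-field lower bound:", round(float(elbo), 4))   # never exceeds log Z
```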

12.
李绍园  韦梦龙  黄圣君 《软件学报》2022,33(4):1274-1286
Traditional supervised learning requires ground-truth labels for the training samples, which in many cases are not easy to collect. In contrast, crowdsourcing learning collects annotations from multiple possibly erring non-experts and estimates the true labels of the samples through some aggregation scheme. Noting that existing deep crowdsourcing learning work does not sufficiently model annotator correlations, while work on non-deep crowdsourcing learning has shown that modeling annotator correlations helps improve learning, this paper proposes a deep generative crowdsourcing learning method that...

13.
柴变芳  贾彩燕  于剑 《软件学报》2014,25(12):2753-2766
With the development of the World Wide Web and online social networking sites, large-scale networks that are large, structurally complex, and highly dynamic have emerged. Discovering the latent structure of these networks is a basic route to analyzing and understanding network data. Probabilistic models, with their flexible modeling and explanatory power and their solid theoretical framework, have become effective tools for network structure discovery in many fields, but such methods face a computational bottleneck. In recent years, a number of probabilistic-model-based methods for large-scale network structure discovery have appeared, which address the computational problem mainly in three respects: network representation, structural assumptions, and parameter estimation. According to the parameter-estimation strategy, existing methods fall into two classes, stochastic variational inference methods and online EM (online expectation maximization) methods, and the design motivation, principles, and advantages and disadvantages of each are analyzed in detail. Typical methods are compared and analyzed qualitatively and quantitatively, and design principles for large-scale network structure discovery models are proposed. Finally, the core problems of research in this area are summarized and future trends are discussed.
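The stochastic variational inference strategy mentioned above alternates a local update on a sampled minibatch with a natural-gradient step on the global variational parameters under a decaying step size. Below is a minimal sketch on a toy conjugate model, a one-dimensional two-component Gaussian mixture with known variance and uniform weights; the data, hyperparameters, and schedule are invented for illustration and the code is not any of the surveyed network-structure models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D mixture of two Gaussians with known variance (all settings invented).
N, K, sigma2, sigma02 = 5000, 2, 1.0, 10.0
true_means = np.array([-2.0, 3.0])
x = np.concatenate([rng.normal(m, np.sqrt(sigma2), N // K) for m in true_means])
rng.shuffle(x)

# Global variational parameters: natural parameters (eta1, eta2) of q(mu_k) = N(m_k, v_k),
# with m = -eta1 / (2 eta2) and v = -1 / (2 eta2). Prior: mu_k ~ N(0, sigma02).
init_m = np.quantile(x, np.linspace(0.1, 0.9, K))
eta2 = np.full(K, -1.0 / (2 * sigma02))
eta1 = init_m / sigma02
prior1, prior2 = 0.0, -1.0 / (2 * sigma02)

batch, tau, kappa = 50, 1.0, 0.7
for t in range(1, 2001):
    xb = x[rng.choice(N, size=batch, replace=False)]
    m, v = -eta1 / (2 * eta2), -1.0 / (2 * eta2)

    # Local step: mean-field responsibilities q(z_i) for the minibatch only.
    logphi = -(xb[:, None] ** 2 - 2 * xb[:, None] * m + m ** 2 + v) / (2 * sigma2)
    phi = np.exp(logphi - logphi.max(axis=1, keepdims=True))
    phi /= phi.sum(axis=1, keepdims=True)

    # Global step: noisy natural gradient from minibatch statistics scaled by N/|B|.
    scale = N / batch
    hat1 = prior1 + scale * (phi * xb[:, None]).sum(axis=0) / sigma2
    hat2 = prior2 - scale * phi.sum(axis=0) / (2 * sigma2)
    rho = (t + tau) ** (-kappa)                      # Robbins-Monro step size
    eta1 = (1 - rho) * eta1 + rho * hat1
    eta2 = (1 - rho) * eta2 + rho * hat2

print("estimated component means:", np.sort(-eta1 / (2 * eta2)).round(3))
print("true component means     :", np.sort(true_means))
```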

14.
Variational Bayesian Expectation-Maximization (VBEM), an approximate inference method for probabilistic models based on factorizing over latent variables and model parameters, has been a standard technique for practical Bayesian inference. In this paper, we introduce a more general approximate inference framework for conjugate-exponential family models, which we call Latent-Space Variational Bayes (LSVB). In this approach, we integrate out model parameters in an exact way, leaving only the latent variables. It can be shown that the LSVB approach gives better estimates of the model evidence as well as the distribution over latent variables than the VBEM approach, but in practice, the distribution over latent variables has to be approximated. As a practical implementation, we present a First-order LSVB (FoLSVB) algorithm to approximate this distribution over latent variables. From this approximate distribution, one can estimate the model evidence and the posterior over model parameters. The FoLSVB algorithm is directly comparable to the VBEM algorithm and has the same computational complexity. We discuss how LSVB generalizes the recently proposed collapsed variational methods [20] to general conjugate-exponential families. Examples based on mixtures of Gaussians and mixtures of Bernoullis with synthetic and real-world data sets are used to illustrate some advantages of our method over VBEM.

15.
Traditional supervised learning requires the groundtruth labels for the training data, which can be difficult to collect in many cases. In contrast, crowdsourcing learning collects noisy annotations from multiple non-expert workers and infers the latent true labels through some aggregation approach. In this paper, we notice that existing deep crowdsourcing work does not sufficiently model worker correlations, which is, however, shown to be helpful for learning by previous non-deep learning approaches. We propose a deep generative crowdsourcing learning approach to incorporate the strengths of Deep Neural Networks (DNNs) and exploit worker correlations. The model comprises a DNN classifier as a prior and an annotation generation process. A mixture model of workers' capabilities within each class is introduced into the annotation generation process for worker correlation modeling. For adaptive trade-off between model complexity and data fitting, we implement fully Bayesian inference. Based on the natural-gradient stochastic variational inference techniques developed for the Structured Variational AutoEncoder (SVAE), we combine variational message passing for conjugate parameters and stochastic gradient descent for DNN parameters into a unified framework for efficient end-to-end optimization. Experimental results on 22 real crowdsourcing datasets demonstrate the effectiveness of the proposed approach.

16.
We present a probabilistic framework, namely multiscale generative models known as dynamic trees (DTs), for unsupervised image segmentation and subsequent matching of segmented regions in a given set of images. Beyond these novel applications of DTs, we propose important additions for this modeling paradigm. First, we introduce a novel DT architecture, where multilayered observable data are incorporated at all scales of the model. Second, we derive a novel probabilistic inference algorithm for DTs, structured variational approximation (SVA), which explicitly accounts for the statistical dependence of node positions and model structure in the approximate posterior distribution, thereby relaxing poorly justified independence assumptions in previous work. Finally, we propose a similarity measure for matching dynamic-tree models, representing segmented image regions, across images. Our results for several data sets show that DTs are capable of capturing important component-subcomponent relationships among objects and their parts, and that DTs perform well in segmenting images into plausible pixel clusters. We demonstrate the significantly improved properties of the SVA algorithm, both in terms of substantially faster convergence rates and larger approximate posteriors for the inferred models, when compared with competing inference algorithms. Furthermore, results on unsupervised object recognition demonstrate the viability of the proposed similarity measure for matching dynamic-structure statistical models.

17.
Prediction intervals (PIs) for industrial time series can provide useful guidance for workers. Because failures of industrial sensors can leave missing points in the inputs, existing kernel dynamic Bayesian networks (KDBN), an effective method for PI construction, suffer from a high computational load when a stochastic algorithm is used for inference. This study proposes a variational inference method for the KDBN that enables fast inference and avoids time-consuming stochastic sampling. The proposed algorithm has two stages. The first stage infers the missing inputs using a variational inference based on local linearization; based on the computed posterior distributions over the missing inputs, the second stage applies a Gaussian approximation to the probabilities over the nodes in future time slices. To verify the effectiveness of the proposed method, a synthetic dataset and a practical dataset of the generation flow of blast furnace gas (BFG) are used with different ratios of missing inputs. The experimental results indicate that the proposed method provides reliable PIs for the generation flow of BFG and exhibits shorter computing time than the stochastic approach.

18.
A Survey of Deep Generative Models
Generative models, which learn the probability density of observable data and then generate samples at random, have attracted wide attention in recent years. Deep generative models, whose network structure contains multiple hidden layers, have become a research focus because of their superior generative ability. Deep generative models have been successfully applied in computer vision, density estimation, natural language and speech recognition, semi-supervised learning, and other areas, and they provide a good paradigm for unsupervised learning. This survey classifies deep generative models according to how they handle the likelihood function into...

19.
Processing lineages (also called provenances) over uncertain data consists in tracing the origin of uncertainty based on the process of data production and evolution. In this paper, we focus on the representation and processing of lineages over uncertain data, where we adopt Bayesian network (BN), one of the popular and important probabilistic graphical models (PGMs), as the framework of uncertainty representation and inferences. Starting from the lineage expressed as Boolean formulae for SPJ (Selection–Projection–Join) queries over uncertain data, we propose a method to transform the lineage expression into directed acyclic graphs (DAGs) equivalently. Specifically, we discuss the corresponding probabilistic semantics and properties to guarantee that the graphical model can support effective probabilistic inferences in lineage processing theoretically. Then, we propose the function-based method to compute the conditional probability table (CPT) for each node in the DAG. The BN for representing lineage expressions over uncertain data, called lineage BN and abbreviated as LBN, can be constructed while generally suitable for both safe and unsafe query plans. Therefore, we give the variable-elimination-based algorithm for LBN's exact inferences to obtain the probabilities of query results, called LBN-based query processing. Then, we focus on obtaining the probabilities of inputs or intermediate tuples conditioned on query results, called LBN-based inference query processing, and give the Gibbs-sampling-based algorithm for LBN's approximate inferences. Experimental results show the efficiency and effectiveness of our methods.
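As a generic illustration of the Gibbs-sampling style of approximate inference mentioned above, here is a minimal sketch that estimates a conditional probability in a tiny hand-built Boolean Bayesian network by resampling each hidden variable from its conditional given the rest. The network and CPT values are the classic textbook sprinkler example rather than an actual lineage BN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classic toy network Cloudy -> {Sprinkler, Rain} -> WetGrass with the usual textbook CPTs.
P_C = 0.5
P_S_given_C = {0: 0.5, 1: 0.1}                    # P(Sprinkler=1 | Cloudy)
P_R_given_C = {0: 0.2, 1: 0.8}                    # P(Rain=1 | Cloudy)
P_W_given_SR = {(0, 0): 0.0, (0, 1): 0.9,
                (1, 0): 0.9, (1, 1): 0.99}        # P(WetGrass=1 | S, R)

def bern(p, v):                                    # P(X = v) for X ~ Bernoulli(p)
    return p if v == 1 else 1.0 - p

def joint(c, s, r, w):
    return (bern(P_C, c) * bern(P_S_given_C[c], s) *
            bern(P_R_given_C[c], r) * bern(P_W_given_SR[(s, r)], w))

# Gibbs sampling for P(Rain=1 | WetGrass=1): resample each hidden variable from
# its conditional given the others, obtained by renormalizing the joint.
state = {"c": 1, "s": 1, "r": 1, "w": 1}           # evidence w = 1 stays clamped
rain_samples = []
for it in range(10000):
    for var in ("c", "s", "r"):
        p = []
        for val in (0, 1):
            state[var] = val
            p.append(joint(state["c"], state["s"], state["r"], state["w"]))
        p = np.array(p) / sum(p)
        state[var] = int(rng.choice(2, p=p))
    if it >= 1000:                                  # discard burn-in
        rain_samples.append(state["r"])

num = sum(joint(c, s, 1, 1) for c in (0, 1) for s in (0, 1))
den = sum(joint(c, s, r, 1) for c in (0, 1) for s in (0, 1) for r in (0, 1))
print("Gibbs estimate:", round(float(np.mean(rain_samples)), 3))
print("exact         :", round(num / den, 3))
```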
