20 similar documents found.
1.
Manfred Jaeger 《Annals of Mathematics and Artificial Intelligence》2001,32(1-4):179-220
A number of representation systems have been proposed that extend the purely propositional Bayesian network paradigm with representation tools for some types of first-order probabilistic dependencies. Examples of such systems are dynamic Bayesian networks and systems for knowledge based model construction. We can identify the representation of probabilistic relational models as a common well-defined semantic core of such systems. Recursive relational Bayesian networks (RRBNs) are a framework for the representation of probabilistic relational models. A main design goal for RRBNs is to achieve the greatest possible expressiveness with as few elementary syntactic constructs as possible. The advantage of such an approach is that a system based on a small number of elementary constructs will be much more amenable to a thorough mathematical investigation of its semantic and algorithmic properties than a system based on a larger number of high-level constructs. In this paper we show that with RRBNs we have achieved our goal, by showing, first, how to solve within that framework a number of non-trivial representation problems. In the second part of the paper we show how to construct, from an RRBN and a specific query, a standard Bayesian network in which the answer to the query can be computed with standard inference algorithms. Here the simplicity of the underlying representation framework greatly facilitates the development of simple algorithms and correctness proofs. As a result we obtain a construction algorithm that, even for RRBNs that represent models for complex first-order and statistical dependencies, generates standard Bayesian networks of size polynomial in the size of the domain given in a specific application instance.
2.
There is great interest in understanding the intrinsic knowledge neural networks have acquired during training. Most work in this direction is focussed on the multi-layer perceptron architecture. The topic of this paper is networks of Gaussian basis functions which are used extensively as learning systems in neural computation. We show that networks of Gaussian basis functions can be generated from simple probabilistic rules. Also, if appropriate learning rules are used, probabilistic rules can be extracted from trained networks. We present methods for the reduction of network complexity with the goal of obtaining concise and meaningful rules. We show how prior knowledge can be refined or supplemented using data by employing either a Bayesian approach, by a weighted combination of knowledge bases, or by generating artificial training data representing the prior knowledge. We validate our approach using a standard statistical data set.
3.
Research on Hidden Variable Learning in Hybrid Bayesian Networks (cited by 6: 0 self-citations, 6 by others)
At present, hidden variable learning with a known structure mainly targets Bayesian networks with discrete variables and Gaussian networks with continuous variables. This paper presents a hidden variable learning method for hybrid Bayesian networks containing both continuous and discrete variables. The method does not require discretizing the continuous variables. It locates hidden variables using domain knowledge or the dimension of the cliques in the Bayesian network's moral graph, determines the hidden variables' values based on a dependency structure (a star-shaped or prior structure) and Gibbs sampling, and finds their optimal dimension by combining an extended MDL criterion with statistical methods. Experimental results show that the method can effectively learn hidden variables in hybrid Bayesian networks with known structure.
4.
5.
At present, learning the structure of Bayesian networks with hidden variables mainly relies on score-and-search methods combined with the EM algorithm, which are inefficient and unreliable. This paper proposes a new structure learning method for Bayesian networks with hidden variables. The method first learns a network structure without considering hidden variables, based on the basic dependency relationships among variables, basic structures, and dependency analysis; it then locates hidden variables using the cliques of the network's moral graph; finally, it determines the values, dimension, and local structure of the hidden variables based on the dependency structure, Gibbs sampling, and the MDL criterion. The method avoids the exponential complexity of standard Gibbs sampling as well as the main problems of existing learning methods. Experimental results show that it can effectively learn the structure of Bayesian networks with hidden variables.
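The value-determination step via Gibbs sampling can be illustrated on the simplest possible case: a hidden binary variable H sitting between an observed cause A and an observed effect C. This is a hypothetical toy model, not one of the paper's networks; with a single hidden variable per row, each Gibbs update samples from the exact local posterior.

```python
import random

def gibbs_impute(rows, p_h_given_a, p_c_given_h, sweeps=50, seed=0):
    """Impute a hidden binary variable H in the chain A -> H -> C.

    rows:         list of observed (a, c) pairs, with a and c in {0, 1}
    p_h_given_a:  dict a -> P(H=1 | A=a)
    p_c_given_h:  dict h -> {c: P(C=c | H=h)} for h in {0, 1}
    Each sweep resamples every row's H from its local posterior
    P(H | a, c), which is proportional to P(H | a) * P(c | H).
    """
    rng = random.Random(seed)
    h = [1 if rng.random() < 0.5 else 0 for _ in rows]
    for _ in range(sweeps):
        for i, (a, c) in enumerate(rows):
            w1 = p_h_given_a[a] * p_c_given_h[1][c]
            w0 = (1.0 - p_h_given_a[a]) * p_c_given_h[0][c]
            h[i] = 1 if rng.random() < w1 / (w0 + w1) else 0
    return h
```

With informative conditionals, rows observed as (A=1, C=1) are imputed H=1 almost always, since P(H=1 | 1, 1) dominates.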
6.
A Bayesian Method for the Induction of Probabilistic Networks from Data (cited by 108: 3 self-citations, 108 by others)
This paper presents a Bayesian method for constructing probabilistic networks from databases. In particular, we focus on constructing Bayesian belief networks. Potential applications include computer-assisted hypothesis testing, automated scientific discovery, and automated construction of probabilistic expert systems. We extend the basic method to handle missing data and hidden (latent) variables. We show how to perform probabilistic inference by averaging over the inferences of multiple belief networks. Results are presented of a preliminary evaluation of an algorithm for constructing a belief network from a database of cases. Finally, we relate the methods in this paper to previous work, and we discuss open problems.
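The scoring component of such a method can be sketched with a closed-form Bayesian score for one discrete node given a candidate parent set, in the style of the Cooper–Herskovits marginal likelihood with uniform priors (the variable names and data layout below are illustrative, not the paper's notation):

```python
import math
from collections import Counter

def ch_score(data, child, parents, arity):
    """Log marginal likelihood of one node's local structure,
    Cooper-Herskovits style, with uniform Dirichlet priors.

    data:    list of dicts mapping variable name -> state in 0..arity-1
    child:   the variable being scored
    parents: tuple of candidate parent names
    arity:   dict mapping variable name -> number of states
    """
    r = arity[child]
    counts = Counter()         # N_ijk: child state k under parent config j
    parent_counts = Counter()  # N_ij:  total count of parent config j
    for row in data:
        j = tuple(row[p] for p in parents)
        counts[(j, row[child])] += 1
        parent_counts[j] += 1
    score = 0.0
    for j, n_ij in parent_counts.items():
        # log[ (r-1)! / (N_ij + r - 1)! * prod_k N_ijk! ]
        score += math.lgamma(r) - math.lgamma(n_ij + r)
        for k in range(r):
            score += math.lgamma(counts[(j, k)] + 1)
    return score
```

On data where B copies A deterministically, the score with parent set ('A',) exceeds the score with no parents, which is exactly what drives the greedy structure search.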
7.
Recursive conditioning, an exact inference algorithm for Bayesian networks, is any-space: the number of cached results is arbitrary, so the memory the cache occupies is adjustable. When memory is limited but computation time should still be minimized, deciding which results to cache becomes the key problem. To address it, this paper applies depth-first branch and bound to enumerate all possible caching configurations, computes the corresponding inference time for each via a formula for inference time under arbitrary space, and thereby constructs a time-space curve for Bayesian network inference. From this curve an optimal discrete caching strategy is identified: the best trade-off point between time and space, exchanging the smallest time cost for the largest memory savings.
8.
Incorporating Prior Knowledge in the Form of Production Rules into Neural Networks Using Boolean-Like Neurons (cited by 1: 0 self-citations, 1 by others)
At present, nearly all neural networks are formulated by learning only from examples or patterns. For a real-world problem, some form of prior knowledge in a non-example form always exists, and incorporating it benefits the formulation of neural networks. Prior knowledge can take several forms; production rules are one form in which it is frequently represented. This paper proposes an approach to incorporate production rules into neural networks. A newly defined neuron architecture, the Boolean-like neuron, is proposed. With this Boolean-like neuron, production rules can be encoded into the neural network during the network initialization period. Experiments are described in this paper. The results show that incorporating this prior knowledge not only increases the training speed but also improves the explainability of the neural networks.
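One standard way to encode a conjunctive production rule into a differentiable neuron, in the spirit of (though not necessarily identical to) the paper's Boolean-like neuron, is to choose weights and a bias so a sigmoid unit fires only when every antecedent holds. The construction below is a generic sketch; the weight magnitude `w` and the input encoding are assumptions.

```python
import math

def rule_neuron(antecedents, w=5.0):
    """Encode 'IF all antecedents THEN fire' as a sigmoid neuron.

    Each antecedent gets weight w; the bias is set so the net input is
    positive only when all n antecedents are 1 (threshold at n - 0.5).
    Returns a function mapping an input dict {name: 0 or 1} to an
    activation in (0, 1).
    """
    n = len(antecedents)
    bias = -w * (n - 0.5)
    def fire(inputs):
        s = sum(w * inputs[a] for a in antecedents) + bias
        return 1.0 / (1.0 + math.exp(-s))
    return fire
```

Because the neuron stays differentiable, rules encoded this way can still be refined by ordinary gradient training afterwards, which is the point of initializing a network from prior knowledge rather than hard-wiring it.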
9.
When data contain missing values, the EM algorithm is typically used to learn Bayesian networks. However, EM takes the joint likelihood as its objective function, which diverges from the objective of discriminative prediction. Unlike EM, the CEM (Conditional Expectation Maximization) algorithm takes the conditional likelihood directly as its objective. This paper studies the CEM algorithm for discriminative Bayesian network learning and proposes a Q-function that makes CEM monotonic and convergent. To simplify computation, a simplified form of the Q-function is used in the E-step, and in the M-step a single line search of gradient descent serves as an approximation to the optimum. Experimental results on UCI data sets demonstrate the effectiveness of CEM for discriminative Bayesian network learning.
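The gap between the two objectives is easy to state concretely for a naive Bayes model with discrete features (a hypothetical parameterization for illustration; the paper's models are general Bayesian networks): the generative objective sums log P(c, x), while the discriminative one sums log P(c | x), the joint normalized over all classes.

```python
import math

def joint_ll(data, labels, priors, cpts):
    """Sum of log P(c_i, x_i) under a naive Bayes model.
    priors[c] = P(C=c); cpts[c][f][v] = P(X_f = v | C = c)."""
    total = 0.0
    for x, c in zip(data, labels):
        p = priors[c]
        for f, v in enumerate(x):
            p *= cpts[c][f][v]
        total += math.log(p)
    return total

def conditional_ll(data, labels, priors, cpts):
    """Sum of log P(c_i | x_i): the joint of the observed class,
    normalized by the sum of joints over all classes."""
    total = 0.0
    for x, c in zip(data, labels):
        joints = []
        for cls, prior in enumerate(priors):
            p = prior
            for f, v in enumerate(x):
                p *= cpts[cls][f][v]
            joints.append(p)
        total += math.log(joints[c] / sum(joints))
    return total
```

EM ascends the first quantity; CEM, as described in the abstract, ascends the second, which is the quantity that actually measures prediction quality.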
10.
Network congestion has become a widespread concern. Existing research on congestion control has produced various concrete methods and recommendations, but none of them solves the problem satisfactorily. Qualitative dynamic probabilistic networks (QDPNs) are currently among the most effective models for dynamic reasoning about uncertain knowledge. This paper proposes a QDPN-based approach to network congestion control that can effectively predict congestion and, on the basis of those predictions, derives corresponding congestion control strategies.
11.
Karim A. Tahboub 《Journal of Intelligent and Robotic Systems》2006,45(1):31-52
In this article, a novel human–machine interaction based on the machine's recognition of human intention is presented. This work is motivated by the desire that intelligent machines such as robots imitate human–human interaction, that is, to minimize the need for classical direct human–machine interfaces and communication. A philosophical and technical background for intention recognition is discussed. Here, the intention–action–state scenario is modified and modeled by Dynamic Bayesian Networks to facilitate probabilistic intention inference. The recognized intention then drives the interactive behavior of the machine such that it complies with the human intention in light of the real state of the world. An illustrative example of a human commanding a mobile robot remotely is given and discussed in detail.
12.
Research on Genetic-Algorithm-Based Discretization of Continuous Variables in Bayesian Networks (cited by 5: 1 self-citation, 5 by others)
This paper studies how to learn Bayesian networks from mixed data containing both discrete and continuous variables, and proposes a genetic-algorithm-based discretization method for continuous variables. The algorithm uses a fitness function that balances the accuracy and the complexity of the discretized model. Based on an analysis of the essence of discretization, the concept of equivalent discretization policies is defined, from which an encoding scheme for discretization policies is derived; a genetic algorithm that transforms discretization policies is then designed. The algorithm does not suffer from local optima and does not require a variable ordering to be given in advance. Simulation results show that it discretizes continuous variables effectively, so that Bayesian networks learned from mixed data perform well.
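A fitness function balancing accuracy and complexity, in the spirit of the one described, can be sketched as interval class purity minus a per-cut-point penalty. The exact form below (majority-class accuracy, penalty weight `lam`) is an illustrative stand-in, not the paper's function:

```python
from collections import Counter

def discretize(values, cuts):
    """Map each continuous value to the index of its interval
    under the given cut points."""
    return [sum(v > c for c in cuts) for v in values]

def fitness(values, labels, cuts, lam=0.05):
    """Accuracy term: fraction of samples matching their interval's
    majority class. Complexity term: lam per cut point, so adding cuts
    must buy enough accuracy to pay for the extra model complexity."""
    groups = {}
    for b, y in zip(discretize(values, cuts), labels):
        groups.setdefault(b, []).append(y)
    correct = sum(Counter(ys).most_common(1)[0][1] for ys in groups.values())
    return correct / len(values) - lam * len(cuts)
```

A genetic algorithm over cut-point sets would then use this as its evaluation: a single well-placed cut beats both no cuts (low accuracy) and redundant cuts (same accuracy, higher penalty).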
13.
Anna Maria Bianucci Alessio Micheli Alessandro Sperduti Antonina Starita 《Applied Intelligence》2000,12(1-2):117-147
We present the application of Cascade Correlation for structures to QSPR (quantitative structure-property relationships) and QSAR (quantitative structure-activity relationships) analysis. Cascade Correlation for structures is a neural network model recently proposed for the processing of structured data. This allows the direct treatment of chemical compounds as labeled trees, which constitutes a novel approach to QSPR/QSAR. We report the results obtained for QSPR on alkanes (predicting the boiling point) and QSAR of a class of benzodiazepines. Our approach compares favorably with the traditional equation-based QSAR treatment and is competitive with ad hoc MLPs for the QSPR problem.
14.
An Object-Oriented Knowledge Representation Method for Situation Assessment Bayesian Networks (cited by 1: 0 self-citations, 1 by others)
Knowledge representation is a difficult problem for large-scale Bayesian networks. Building on the object-oriented Bayesian networks proposed by Koller, this paper discusses an object-oriented knowledge representation method for situation assessment Bayesian networks and implements network storage and access with a relational database.
15.
The classical hidden Markov model used for speech recognition has two main deficiencies: the "discrete state assumption" and the "independence assumption." The former ignores the non-stationarity of the speech signal; the latter ignores its correlations. This paper applies mixture factor analysis to speech modeling, proposes a hidden Markov model framework based on mixture factor analysis, and represents it graphically as a dynamic Bayesian network. The framework not only resolves the above problems in theory but also offers many choices for speech modeling; the statistical acoustic models in wide use today can all be viewed as special cases of this model.
16.
Cristina Conati, Abigail Gertner, Kurt VanLehn 《User Modeling and User-Adapted Interaction》2002,12(4):371-417
When a tutoring system aims to provide students with interactive help, it needs to know what knowledge the student has and what goals the student is currently trying to achieve. That is, it must do both assessment and plan recognition. These modeling tasks involve a high level of uncertainty when students are allowed to follow various lines of reasoning and are not required to show all their reasoning explicitly. We use Bayesian networks as a comprehensive, sound formalism to handle this uncertainty. Using Bayesian networks, we have devised the probabilistic student models for Andes, a tutoring system for Newtonian physics whose philosophy is to maximize student initiative and freedom during the pedagogical interaction. Andes' models provide long-term knowledge assessment, plan recognition, and prediction of students' actions during problem solving, as well as assessment of students' knowledge and understanding as students read and explain worked-out examples. In this paper, we describe the basic mechanisms that allow Andes' student models to soundly perform assessment and plan recognition, as well as the Bayesian network solutions to issues that arose in scaling up the model to a full-scale, field-evaluated application. We also summarize the results of several evaluations of Andes which provide evidence on the accuracy of its student models. This revised version was published online in July 2005 with corrections to the author name VanLehn.
17.
Petri Myllymäki 《Applied Intelligence》1999,11(1):31-44
We present a method for mapping a given Bayesian network to a Boltzmann machine architecture, in the sense that the updating process of the resulting Boltzmann machine model provably converges to a state which can be mapped back to a maximum a posteriori (MAP) probability state in the probability distribution represented by the Bayesian network. The Boltzmann machine model can be implemented efficiently on massively parallel hardware, since the resulting structure can be divided into two separate clusters where all the nodes in one cluster can be updated simultaneously. This means that the proposed mapping can be used for providing Bayesian network models with a massively parallel probabilistic reasoning module, capable of finding the MAP states in a computationally efficient manner. From the neural network point of view, the mapping from a Bayesian network to a Boltzmann machine can be seen as a method for automatically determining the structure and the connection weights of a Boltzmann machine by incorporating high-level, probabilistic information directly into the neural network architecture, without recourse to a time-consuming and unreliable learning process.
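The stochastic search such a Boltzmann machine performs can be mimicked in plain software by simulated annealing on the energy −log P. The sketch below is a generic single-flip annealer with a greedy finishing pass, not the paper's two-cluster construction:

```python
import math
import random

def map_by_annealing(log_p, n_vars, steps=2000, seed=0):
    """Search for a MAP assignment of n_vars binary variables.

    Propose single-bit flips; accept a worse state with the Boltzmann
    probability exp(-dE / T) under a temperature T that decreases over
    the run, then greedily polish to a guaranteed local optimum.
    log_p maps a list of bools to the log probability of that state.
    """
    rng = random.Random(seed)
    state = [rng.random() < 0.5 for _ in range(n_vars)]
    energy = -log_p(state)
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)
        i = rng.randrange(n_vars)
        cand = list(state)
        cand[i] = not cand[i]
        e2 = -log_p(cand)
        if e2 < energy or rng.random() < math.exp((energy - e2) / temp):
            state, energy = cand, e2
    improved = True
    while improved:  # greedy polish: flip any bit that lowers the energy
        improved = False
        for i in range(n_vars):
            cand = list(state)
            cand[i] = not cand[i]
            if -log_p(cand) < energy:
                state, energy = cand, -log_p(cand)
                improved = True
    return state
```

For a two-node network A → B with P(A=1)=0.9, P(B=1|A=1)=0.8, and P(B=1|A=0)=0.3, the MAP state is (A=1, B=1) with probability 0.72, and the annealer recovers it.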
18.
A. Hunter 《Neural computing & applications》2000,9(2):124-132
Selection of input variables (features) is a key stage in building predictive models. As exhaustive evaluation of potential feature sets using full non-linear models is impractical, it is common practice to use simple fast-evaluating models and heuristic selection strategies. This paper discusses a fast, efficient, and powerful non-linear input selection procedure using a combination of probabilistic neural networks and repeated bitwise gradient descent with resampling. The algorithm is compared with forward selection, backward selection and genetic algorithms using a selection of real-world data sets. The algorithm has comparative performance and greatly reduced execution time with respect to these alternative approaches.
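Stripped of the probabilistic-neural-network scorer and the resampling, the bitwise descent at the core of such a procedure reduces to greedy hill climbing over feature masks. A minimal sketch with a pluggable score function (the restart scheme and interface are assumptions, not the paper's algorithm):

```python
import random

def bitwise_select(n_features, score, restarts=1, seed=0):
    """Greedy bitwise search over boolean feature masks.

    Repeatedly try flipping each bit and keep any flip that improves
    score(mask); restart from random masks to reduce sensitivity to
    the starting point. Returns the best (mask, score) found.
    """
    rng = random.Random(seed)
    best_mask, best_score = None, float('-inf')
    for _ in range(restarts):
        mask = [rng.random() < 0.5 for _ in range(n_features)]
        s = score(mask)
        improved = True
        while improved:
            improved = False
            for i in range(n_features):
                mask[i] = not mask[i]
                s2 = score(mask)
                if s2 > s:
                    s, improved = s2, True
                else:
                    mask[i] = not mask[i]  # revert the unhelpful flip
        if s > best_score:
            best_mask, best_score = list(mask), s
    return best_mask, best_score
```

Each pass costs one model evaluation per feature, which is why the paper pairs this search with a cheap-to-evaluate model rather than a full non-linear one.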
19.
Feedforward neural networks (FNN) have been proposed to solve complex problems in pattern recognition, classification and function approximation. Despite the general success of learning methods for FNN, such as the backpropagation (BP) algorithm and second-order algorithms, long learning times before convergence remain a problem to be overcome. In this paper, we propose a new hybrid algorithm for a FNN that combines unsupervised training for the hidden neurons (Kohonen algorithm) and supervised training for the output neurons (gradient descent method). Simulation results show the effectiveness of the proposed algorithm compared with other well-known learning methods.
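A minimal version of such a hybrid scheme can be sketched with Gaussian hidden units placed by competitive (winner-take-all) learning and a single linear output unit trained by stochastic gradient descent. All architecture choices here (Gaussian activations, learning rates, a single output) are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def train_hybrid(X, y, n_hidden=2, epochs=300, lr=0.1, seed=0):
    """Phase 1 (unsupervised, Kohonen-style): for each sample, move the
    nearest prototype a step toward it. Phase 2 (supervised): fit a
    linear output on Gaussian activations of the prototypes by SGD on
    squared error. Returns a prediction function."""
    rng = random.Random(seed)
    protos = [list(rng.choice(X)) for _ in range(n_hidden)]
    for _ in range(epochs):
        for x in X:
            w = min(range(n_hidden),
                    key=lambda j: sum((a - p) ** 2 for a, p in zip(x, protos[j])))
            protos[w] = [p + 0.1 * (a - p) for p, a in zip(protos[w], x)]
    def hidden(x):
        return [math.exp(-sum((a - p) ** 2 for a, p in zip(x, pj)))
                for pj in protos]
    wts, bias = [0.0] * n_hidden, 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            h = hidden(x)
            err = sum(w * hj for w, hj in zip(wts, h)) + bias - t
            wts = [w - lr * err * hj for w, hj in zip(wts, h)]
            bias -= lr * err
    return lambda x: sum(w * hj for w, hj in zip(wts, hidden(x))) + bias
```

The split is the point: the slow credit-assignment through hidden layers is replaced by cheap unsupervised placement, and only the output layer needs gradient training.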
20.
We describe a probabilistic approach for the interpretation of user arguments, and investigate the incorporation of different models of a user's beliefs and inferences into this mechanism. Our approach is based on the tenet that the interpretation intended by the user is that with the highest posterior probability. This approach is implemented in a computer-based detective game, where the user explores a virtual scenario, and constructs an argument for a suspect's guilt or innocence. Our system receives as input an argument entered through a web interface, and produces an interpretation in terms of its underlying knowledge representation – a Bayesian network. This interpretation may differ from the user's argument in its structure and in its beliefs in the argument propositions. We conducted a synthetic evaluation of the basic interpretation mechanism, and a user-based evaluation which assesses the impact of the different user models. The results of both evaluations were encouraging, with the system generally producing argument interpretations our users found acceptable. The revised version of this article was published in July 2005 with corrections to Table II.