Gradient-based boosting for statistical relational learning: The relational dependency network case

Authors: Sriraam Natarajan, Tushar Khot, Kristian Kersting, Bernd Gutmann, Jude Shavlik

Affiliations: (1) Department of Computer Science and Engineering, University of Washington, Seattle, WA 98195-2350, USA; (2) Department of Computer and Information Science, University of Oregon, Eugene, OR 97403-1202, USA; (3) Microsoft Research, Redmond, WA 98052, USA; (4) Department of Computer Science, The University of Texas at Austin, 1616 Guadalupe, Suite 2408, Austin, TX 78701-0233, USA

Abstract: Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn quickly estimate a very expressive model. Our experiments on several data sets show that this boosting method yields efficient learning of RDNs compared to state-of-the-art statistical relational learning approaches.
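The abstract's central move, replacing a single learned model per conditional distribution with a stagewise sum of regression models fit to functional gradients, can be sketched in a simplified propositional (non-relational) setting. This is an illustrative sketch only, not the authors' RDN learner: a one-feature regression stump stands in for the relational regression trees the paper uses, and all function names here are hypothetical.

```python
import numpy as np

def fit_stump(x, r):
    """Fit a depth-1 regression stump to residuals r: pick the threshold
    on x minimizing squared error, predicting the mean residual per side."""
    best = None
    for t in np.unique(x):
        left, right = r[x <= t], r[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, vl, vr = best
    return lambda z: np.where(z <= t, vl, vr)

def boost(x, y, iters=20):
    """Functional gradient boosting for P(y=1 | x): each round fits a weak
    regressor to the pointwise gradient (y - p) of the log-likelihood and
    adds it to the additive potential psi(x) = sum_m Delta_m(x)."""
    models = []
    psi = np.zeros_like(x, dtype=float)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-psi))  # sigmoid of current potential
        grad = y - p                    # functional gradient of the log-likelihood
        stump = fit_stump(x, grad)      # weak learner approximates the gradient
        models.append(stump)
        psi = psi + stump(x)            # stagewise additive update
    return models

def predict_proba(models, x):
    """Probability of the positive class under the boosted potential."""
    psi = sum(m(x) for m in models)
    return 1.0 / (1.0 + np.exp(-psi))

# Toy usage: labels flip from 0 to 1 as x grows.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
models = boost(x, y)
probs = predict_proba(models, x)
```

In the relational setting of the paper, each `stump` would instead be a relational regression tree over first-order features, so each boosting iteration can introduce new, increasingly complex relational features rather than committing to a single tree up front.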

Indexed by SpringerLink and other databases.