Relational Tri-Training: Learning First-Order Rules Exploiting Unlabeled Data
Cite this article: LI Yanjuan, GUO Maozu. Relational-Tri-Training: Learning First-Order Rules Exploiting Unlabeled Data[J]. Journal of Frontiers of Computer Science and Technology, 2012(5): 430-442.
Authors: LI Yanjuan, GUO Maozu
Affiliations: 1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China; School of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
2. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
Funding: National Natural Science Foundation of China (Nos. 61171185, 60932008, 60832010); Specialized Research Fund for the Doctoral Program of Higher Education (No. 20112302110040); China Postdoctoral Science Foundation (Nos. 201003446, 20110491059); Fundamental Research Funds for the Central Universities (No. DL12AB02)
Abstract: Current inductive logic programming (ILP) systems require sufficient training data and cannot make use of unlabeled data. To address this shortcoming, this paper proposes an algorithm for learning first-order rules from unlabeled data, called relational tri-training (R-tri-training). The algorithm carries the idea of tri-training, a semi-supervised learning algorithm based on propositional logic representation, into ILP systems based on first-order logic representation, and studies how unlabeled examples can assist classifier training within the ILP framework. R-tri-training first initializes three different ILP systems from the labeled data and the background knowledge, and then iteratively refines the three classifiers with unlabeled examples: if two classifiers agree on the label of an unlabeled example, then under certain conditions the example is labeled and given to the third classifier as a new training example. Experimental results on standard benchmark datasets show that R-tri-training can effectively exploit unlabeled data to improve learning performance, and that it outperforms GILP (genetic inductive logic programming), NFOIL, KFOIL and ALEPH.

Keywords: machine learning  inductive logic programming (ILP)  relational tri-training  probably approximately correct (PAC) learnability

Relational-Tri-Training: Learning First-Order Rules Exploiting Unlabeled Data
LI Yanjuan, GUO Maozu. Relational-Tri-Training: Learning First-Order Rules Exploiting Unlabeled Data[J]. Journal of Frontiers of Computer Science and Technology, 2012(5): 430-442.
Authors: LI Yanjuan, GUO Maozu
Affiliation:1. School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China 2. School of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
Abstract: Existing inductive logic programming (ILP) systems require sufficient labeled training data and cannot exploit unlabeled data. To overcome this limitation, this paper proposes a first-order rule-learning algorithm that exploits unlabeled data, named relational tri-training (R-tri-training). It introduces the idea of tri-training, a semi-supervised learning algorithm based on propositional logic representation, into ILP systems based on first-order logic representation, and investigates how unlabeled examples can be used to assist classifier training under the ILP framework. Three different ILP systems are first initialized from the labeled data and the background knowledge, and the three classifiers are then refined iteratively with the unlabeled data: when the other two classifiers assign the same label to an unlabeled example, that example is, under certain conditions, labeled and added to the training set of the third classifier. Experimental results on well-known benchmarks show that R-tri-training effectively enhances learning performance by exploiting unlabeled data and outperforms genetic inductive logic programming (GILP), NFOIL, KFOIL and ALEPH.
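For readers who want to map the description above onto a concrete procedure, the following is a minimal Python sketch of the refinement loop described in the abstract. The ILP classifier interface (fit/predict), the make_classifiers initializer, and the accept_newly_labeled criterion are hypothetical names introduced only for illustration; the paper's actual ILP systems, refinement procedure, and acceptance condition (the "certain conditions" mentioned above) may differ.

def r_tri_training(labeled, unlabeled, background, make_classifiers,
                   accept_newly_labeled, max_iters=20):
    """labeled: list of (example, label) pairs; unlabeled: list of examples.

    make_classifiers(labeled, background) must return three different ILP
    classifiers, each exposing fit(examples, background) and predict(example).
    accept_newly_labeled(classifier, candidates) decides whether the newly
    labeled examples may be added (a stand-in for the paper's condition).
    """
    # Step 1: initialize three different ILP systems from the labeled data
    # and the background knowledge.
    classifiers = make_classifiers(labeled, background)

    for _ in range(max_iters):
        changed = False
        for i, ci in enumerate(classifiers):
            cj, ck = [c for k, c in enumerate(classifiers) if k != i]
            # Step 2: unlabeled examples on which the other two classifiers
            # agree become candidate training examples for classifier i.
            candidates = [(x, cj.predict(x)) for x in unlabeled
                          if cj.predict(x) == ck.predict(x)]
            # Step 3: add them only when the acceptance condition holds.
            if candidates and accept_newly_labeled(ci, candidates):
                ci.fit(labeled + candidates, background)
                changed = True
        if not changed:
            break
    return classifiers

In the propositional tri-training algorithm that this approach builds on, the acceptance condition compares estimated error rates before and after adding the newly labeled examples; the abstract does not state which condition R-tri-training uses, so the callback above is left abstract.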
Keywords: machine learning  inductive logic programming (ILP)  relational-tri-training  probably approximately correct (PAC) learning
This article has been indexed in databases including CNKI and Wanfang Data.