Adapt Bagging to Nearest Neighbor Classifiers
Authors: Zhi-Hua Zhou (zhouzh@nju.edu.cn), Yang Yu
Abstract: It is well known that to build a strong ensemble, the component learners should have high diversity as well as high accuracy. If perturbing the training set can cause significant changes in the component learners constructed, then Bagging can effectively improve accuracy. However, for stable learners such as nearest neighbor classifiers, perturbing the training set can hardly produce diverse component learners, so Bagging does not work well. This paper adapts Bagging to nearest neighbor classifiers by injecting randomness into the distance metrics. In constructing the component learners, both the training set and the distance metric employed for identifying the neighbors are perturbed. A large-scale empirical study reported in this paper shows that the proposed BagInRand algorithm can effectively improve the accuracy of nearest neighbor classifiers.
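The core idea in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: it assumes 1-NN components, Minkowski distances with randomly drawn integer orders, and an illustrative ensemble size; the paper's actual parameter settings (and its use of the value difference metric for symbolic attributes) are not reproduced here.

```python
# Sketch of the BagInRand idea: each component nearest neighbor classifier
# is built on a bootstrap sample AND uses a randomly chosen Minkowski
# distance order, so even a stable learner like 1-NN becomes diverse
# across the ensemble. Ensemble size and candidate orders below are
# illustrative assumptions, not the paper's settings.
import numpy as np

def minkowski(a, b, p):
    """Minkowski distance of order p between vectors a and b."""
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

def predict_1nn(train_X, train_y, x, p):
    """Label of the nearest training point under order-p Minkowski distance."""
    dists = [minkowski(t, x, p) for t in train_X]
    return train_y[int(np.argmin(dists))]

def bag_in_rand_predict(X, y, query, n_components=11, orders=(1, 2, 3), rng=None):
    """Majority vote over 1-NN components, each trained on a bootstrap
    sample of (X, y) and using a randomly drawn Minkowski order."""
    rng = np.random.default_rng(rng)
    votes = []
    for _ in range(n_components):
        idx = rng.integers(0, len(X), size=len(X))   # perturb the training set
        p = orders[rng.integers(0, len(orders))]     # perturb the distance metric
        votes.append(predict_1nn(X[idx], y[idx], query, p))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[int(np.argmax(counts))]
```

Note that perturbing the metric is what creates diversity here: two components trained on similar bootstrap samples can still disagree near class boundaries because they measure "nearest" differently.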
Keywords: bagging, data mining, ensemble learning, machine learning, Minkowsky distance, nearest neighbor, value difference metric
This article is indexed by CNKI, VIP, Wanfang Data, SpringerLink, and other databases.
The original abstract and the free full-text PDF are available from the Journal of Computer Science and Technology (《计算机科学技术学报》).