Maximizing Agreements with One-Sided Error with Applications to Heuristic Learning

Authors: Nader H. Bshouty (1), Lynn Burroughs (2)

Affiliations: (1) Department of Computer Science, Technion, Haifa, 32000, Israel; (2) Department of Computer Science, University of Calgary, Calgary, Alberta, T2N 1N4, Canada

Abstract: We study heuristic learnability of classes of Boolean formulas, a model proposed by Pitt and Valiant. In this type of example-based learning of a concept class C by a hypothesis class H, the learner seeks a hypothesis h ∈ H that agrees with all of the negative (resp. positive) examples, and a maximum number of positive (resp. negative) examples. This learning is equivalent to the problem of maximizing agreement with a training sample, with the constraint that the misclassifications be limited to examples with positive (resp. negative) labels. Several recent papers have studied the more general problem of maximizing agreements without this one-sided error constraint. We show that for many classes (though not all), the maximum agreement problem with one-sided error is more difficult than the general maximum agreement problem. We then provide lower bounds on the approximability of these one-sided error problems, for many concept classes, including Halfspaces, Decision Lists, XOR, k-term DNF, and neural nets.

Editor: Philip M. Long

Acknowledgment: This research was supported by the fund for the promotion of research at the Technion. Research no. 120-025.

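The objective described in the abstract can be sketched in code. The following is a minimal illustration (not taken from the paper, and all function names are hypothetical): a hypothesis is feasible only if it classifies every negative example correctly, and among feasible hypotheses one maximizes the number of positive examples classified correctly. A brute-force search over a finite hypothesis set makes the objective concrete; the paper's hardness results concern approximating this maximum efficiently for structured classes.

```python
def one_sided_agreement(hypothesis, sample):
    """Score a hypothesis under the one-sided constraint.

    Returns the number of positive examples classified correctly,
    or None if the hypothesis misclassifies any negative example
    (i.e., it is infeasible under the one-sided error constraint).
    """
    positives_correct = 0
    for x, label in sample:
        prediction = hypothesis(x)
        if label == 0 and prediction != 0:
            return None  # one-sided constraint violated
        if label == 1 and prediction == 1:
            positives_correct += 1
    return positives_correct


def best_one_sided(hypotheses, sample):
    """Brute-force search for a feasible hypothesis of maximum agreement."""
    best, best_score = None, -1
    for h in hypotheses:
        score = one_sided_agreement(h, sample)
        if score is not None and score > best_score:
            best, best_score = h, score
    return best, best_score


# Toy usage: sample labeled by Boolean OR; candidate hypotheses AND, OR,
# and the constant-1 function (which is infeasible, erring on a negative).
sample = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
hypotheses = [
    lambda x: x[0] & x[1],  # AND: feasible, agrees with 1 positive
    lambda x: x[0] | x[1],  # OR: feasible, agrees with all 3 positives
    lambda x: 1,            # constant 1: misclassifies the negative
]
h, score = best_one_sided(hypotheses, sample)
```

Here OR wins with agreement 3, while the constant-1 hypothesis is rejected outright despite agreeing with more examples overall, which is exactly how the one-sided variant differs from the general maximum agreement problem.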
Keywords: maximizing agreements; heuristic learning; example-based learning; agnostic learning; Boolean formulas; approximation
Indexed by: SpringerLink and other databases.