Similar Articles
10 similar articles found.
1.
Automatic Entity Relation Extraction   Total citations: 36, self-citations: 7, citations by others: 36
Entity relation extraction is an important research topic in information extraction. This paper applies two feature-vector-based machine learning algorithms, Winnow and the Support Vector Machine (SVM), to entity relation extraction experiments on the training data of the 2004 ACE (Automatic Content Extraction) evaluation. Appropriate feature selection is performed for both algorithms; the best extraction results are obtained when the two words to the left and right of each entity are selected as features, giving weighted average F-scores of 73.08% for Winnow and 73.27% for SVM. This shows that, with the same feature set, different learning algorithms differ little in final performance on entity relation recognition. Therefore, when extracting entity relations automatically, effort should be concentrated on finding good features.
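Since this abstract and several of the ones below hinge on the Winnow algorithm, a minimal sketch of its multiplicative update may help. This is illustrative code of my own, not the paper's system; the promotion factor, threshold, and feature encoding are assumptions. Features here would be binary indicators such as the two words to the left and right of each entity.

```python
def train_winnow(examples, n_features, alpha=2.0, epochs=10):
    """Classic Winnow with promotion/demotion over binary features.

    examples: list of (active_feature_indices, label) with label in {0, 1}.
    """
    weights = [1.0] * n_features          # Winnow initialises every weight to 1
    threshold = n_features / 2.0          # a common threshold choice
    for _ in range(epochs):
        for active, label in examples:
            score = sum(weights[i] for i in active)
            predicted = 1 if score >= threshold else 0
            if predicted == label:
                continue                  # no update on correct predictions
            if label == 1:                # false negative: promote active weights
                for i in active:
                    weights[i] *= alpha
            else:                         # false positive: demote active weights
                for i in active:
                    weights[i] /= alpha
    return weights, threshold
```

Because updates only touch the features active in the current example, the cost per example is proportional to the number of active features, which is what makes the algorithm attractive for sparse text data.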

2.
The performance of two online linear classifiers, the Perceptron and Littlestone's Winnow, is explored for two anti-spam filtering benchmark corpora: PU1 and Ling-Spam. We study the performance for varying numbers of features, along with three different feature selection methods: information gain (IG), document frequency (DF) and odds ratio. The size of the training set and the number of training iterations are also investigated for both classifiers. The experimental results show that both the Perceptron and Winnow perform much better when using IG or DF than when using odds ratio. It is further demonstrated that when using IG or DF, the classifiers are insensitive to the number of features and the number of training iterations, and not greatly sensitive to the size of the training set. Winnow is shown to slightly outperform the Perceptron. It is also demonstrated that both of these online classifiers perform much better than a standard Naïve Bayes method. The computational complexity of these two classifiers, in both theory and implementation, is very low, and they are easily updated adaptively. They outperform most of the published results, while being significantly easier to train and adapt. The analysis and promising experimental results indicate that the Perceptron and Winnow are two very competitive classifiers for anti-spam filtering.
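A short sketch of the two feature-selection scores the abstract reports as working well, information gain (IG) and document frequency (DF), under my own simplifying assumptions about the data layout (binary term presence, two classes labelled "spam" and "ham"):

```python
import math

# docs: list of (set_of_terms, label) with label in {"spam", "ham"}.

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def information_gain(docs, term):
    """IG(term) = H(class) - H(class | term present/absent)."""
    n = len(docs)
    with_t = [lab for terms, lab in docs if term in terms]
    without_t = [lab for terms, lab in docs if term not in terms]
    def class_counts(labels):
        return [labels.count("spam"), labels.count("ham")]
    h_class = entropy(class_counts([lab for _, lab in docs]))
    h_cond = 0.0
    for subset in (with_t, without_t):
        if subset:
            h_cond += (len(subset) / n) * entropy(class_counts(subset))
    return h_class - h_cond

def document_frequency(docs, term):
    """Number of documents containing the term."""
    return sum(1 for terms, _ in docs if term in terms)
```

Ranking all vocabulary terms by either score and keeping only the top-scoring ones is the selection step whose size the abstract reports the classifiers to be largely insensitive to.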

3.
Gentile, Claudio. Machine Learning, 2003, 53(3): 265-299.
We consider two on-line learning frameworks: binary classification through linear threshold functions and linear regression. We study a family of on-line algorithms, called p-norm algorithms, introduced by Grove, Littlestone and Schuurmans in the context of deterministic binary classification. We show how to adapt these algorithms for use in the regression setting, and prove worst-case bounds on the square loss, using a technique from Kivinen and Warmuth. As pointed out by Grove et al., these algorithms can be made to approach a version of the classification algorithm Winnow as p goes to infinity; similarly, they can be made to approach the corresponding regression algorithm EG in the limit. Winnow and EG are notable for having loss bounds that grow only logarithmically in the dimension of the instance space. Here we describe another way to use the p-norm algorithms to achieve this logarithmic behavior. With the usage we propose, it is less critical than with Winnow and EG to retune the parameters of the algorithm as the learning task changes. Since the correct setting of the parameters depends on characteristics of the learning task that are not typically known a priori by the learner, this gives the p-norm algorithms a desirable robustness. Our elaborations yield various new loss bounds in these on-line settings. Some of these bounds improve or generalize known results. Others are incomparable.
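The p-norm algorithms keep an additively updated dual vector and map it to the primal weights through the gradient of half the squared q-norm, where q is the dual exponent of p. The following is a rough sketch from that description, not Gentile's code; the learning rate is fixed to 1 and other details are assumed.

```python
import math

def primal_weights(theta, q):
    """Map the dual vector theta to primal weights via the gradient of (1/2)||theta||_q^2."""
    norm_q = sum(abs(t) ** q for t in theta) ** (1.0 / q)
    if norm_q == 0.0:
        return [0.0] * len(theta)
    return [math.copysign(abs(t) ** (q - 1), t) / norm_q ** (q - 2) for t in theta]

def p_norm_classify_online(stream, dim, p=2.0):
    """stream yields (x, y) with y in {-1, +1}; returns the final dual vector."""
    q = p / (p - 1.0)                     # dual exponent of p
    theta = [0.0] * dim
    for x, y in stream:
        w = primal_weights(theta, q)
        margin = sum(wi * xi for wi, xi in zip(w, x))
        if y * margin <= 0:               # mistake: additive update in the dual space
            theta = [t + y * xi for t, xi in zip(theta, x)]
    return theta
```

With p = 2 this reduces to the ordinary Perceptron, while choosing p large (roughly 2 ln of the dimension) makes the behavior Winnow-like, which is the regime the logarithmic loss bounds refer to.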

4.
A real-word error in English text is an input word that is itself a valid word, but a different one from the word intended. This paper focuses on detecting this type of error. Syntactic and semantic features are extracted from the context of the word to be checked, and document frequency and information gain are applied for feature selection, achieving effective extraction of contextual features. Deciding whether the word is used correctly is then treated as a classification problem, and the Winnow classification algorithm is used for training and testing. Under 5-fold cross-validation, the average precision and recall over the 61 collected confusion sets are 96% and 79.47%, respectively.
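A small sketch of the kind of contextual feature extraction described above; the window size, feature naming, and confusion-set handling are my own illustrative choices, not the paper's. Each confusion set (for example {"their", "there"}) would then get its own classifier trained over such features.

```python
def context_features(tokens, position, window=3):
    """Return a set of string features describing the context of tokens[position]."""
    features = set()
    # bag-of-words features within +/- window of the target word
    lo, hi = max(0, position - window), min(len(tokens), position + window + 1)
    for i in range(lo, hi):
        if i != position:
            features.add("word=" + tokens[i].lower())
    # ordered collocation features immediately around the target
    if position > 0:
        features.add("prev=" + tokens[position - 1].lower())
    if position + 1 < len(tokens):
        features.add("next=" + tokens[position + 1].lower())
    return features
```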

5.
The number of adjustments required to learn the average LTU function of d features, each of which can take on n equally spaced values, grows as approximately n^(2d) when the standard perceptron training algorithm is used on the complete input space of n^d points and perfect classification is required. We demonstrate a simple modification that reduces the observed growth rate in the number of adjustments to approximately d^2(log(d) + log(n)) with most, but not all, input presentation orders. A similar speed-up is also produced by applying the simple but computationally expensive heuristic "don't overgeneralize" to the standard training algorithm. This performance is very close to the theoretical optimum for learning LTU functions by any method, and is evidence that perceptron-like learning algorithms can learn arbitrary LTU functions in polynomial, rather than exponential, time under normal training conditions. Similar modifications can be applied to the Winnow algorithm, achieving similar performance improvements and demonstrating the generality of the approach.
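To make the measured quantity concrete, here is a small self-contained sketch of my own (with an assumed target LTU, not the paper's experimental setup) that counts standard perceptron weight adjustments until the complete n^d grid is classified perfectly:

```python
from itertools import product

def count_perceptron_adjustments(d=3, n=5, target_w=None, target_b=0.0, max_passes=10000):
    """Count weight adjustments of the standard perceptron rule on the full n^d grid."""
    if target_w is None:
        target_w = [1.0] * d                     # an assumed target LTU
    grid = list(product(range(n), repeat=d))     # the complete input space
    labels = [1 if sum(w * x for w, x in zip(target_w, p)) + target_b > 0 else -1
              for p in grid]
    w, b, adjustments = [0.0] * d, 0.0, 0
    for _ in range(max_passes):
        mistakes = 0
        for p, y in zip(grid, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, p)) + b > 0 else -1
            if pred != y:                        # standard perceptron update
                w = [wi + y * xi for wi, xi in zip(w, p)]
                b += y
                adjustments += 1
                mistakes += 1
        if mistakes == 0:                        # perfect classification reached
            return adjustments
    return adjustments
```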

6.
Many important machine learning models, supervised and unsupervised, are based on simple Euclidean distance or orthogonal projection in a high dimensional feature space. When estimating such models from small training sets we face the problem that the span of the training data set input vectors is not the full input space. Hence, when applying the model to future data, the model is effectively blind to the missed orthogonal subspace. This can lead to an inflated variance of hidden variables estimated in the training set, and when the model is applied to test data we may find that the hidden variables follow a different probability law with less variance. While the problem and the basic means to reconstruct and deflate are well understood in unsupervised learning, the case of supervised learning is less well understood. We here investigate the effect of variance inflation in supervised learning, including the case of Support Vector Machines (SVMs), and we propose a non-parametric scheme to restore proper generalizability. We illustrate the algorithm and its ability to restore performance on a wide range of benchmark data sets.
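To make the "missed orthogonal subspace" point tangible, here is a tiny numpy sketch of my own (random data and arbitrary sizes, not the paper's method or experiments): with only 10 training vectors in 50 dimensions, roughly 80% of each test vector's energy falls outside the training span, which is exactly the part a model fitted on the training span never sees.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_train, n_test = 50, 10, 200
X_train = rng.normal(size=(n_train, dim))
X_test = rng.normal(size=(n_test, dim))

# Orthonormal basis for the span of the training inputs
Q, _ = np.linalg.qr(X_train.T)            # columns of Q span the training data

# Fraction of each test vector's energy that lies inside the training span
inside = X_test @ Q                       # coordinates within the span
frac_in_span = (inside ** 2).sum(axis=1) / (X_test ** 2).sum(axis=1)
print("mean fraction of test energy inside the training span:",
      frac_in_span.mean())                # roughly n_train/dim in this toy setting
```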

7.
It is easy to design on-line learning algorithms for learning k out of n variable monotone disjunctions by simply keeping one weight per disjunction. Such algorithms use roughly O(n^k) weights, which can be prohibitively expensive. Surprisingly, algorithms like Winnow require only n weights (one per variable or attribute), and the mistake bound of these algorithms is not too much worse than the mistake bound of the more costly algorithms. The purpose of this paper is to investigate how exponentially many weights can be collapsed into only O(n) weights. In particular, we consider probabilistic assumptions that enable the Bayes optimal algorithm's posterior over the disjunctions to be encoded with only O(n) weights. This results in a new O(n) algorithm for learning disjunctions which is related to Bylander's BEG algorithm, originally introduced for linear regression. Besides providing a Bayesian interpretation for this new algorithm, we are also able to obtain mistake bounds for the noise-free case resembling those that have been derived for the Winnow algorithm. The same techniques used to derive this new algorithm also provide a Bayesian interpretation for a normalized version of Winnow.
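For concreteness, the "one weight per disjunction" baseline that the abstract contrasts against can be sketched as a halving-style algorithm over all C(n, k) monotone disjunctions. This is my own illustrative code for the noise-free case; the paper's O(n)-weight construction is not reproduced here.

```python
from itertools import combinations

def halving_over_disjunctions(stream, n, k):
    """stream yields (x, y): x is a tuple of n bits, y is the target disjunction's value."""
    alive = set(combinations(range(n), k))       # one "weight" per candidate disjunction
    mistakes = 0
    for x, y in stream:
        # predict by majority vote over the surviving candidates
        votes = sum(1 for d in alive if any(x[i] for i in d))
        predicted = 1 if votes * 2 >= len(alive) else 0
        if predicted != y:
            mistakes += 1
        # noise-free case: eliminate every candidate inconsistent with (x, y)
        alive = {d for d in alive if int(any(x[i] for i in d)) == y}
    return mistakes, alive
```

The mistake bound of this baseline is logarithmic in the number of candidates, roughly O(k log n), but its O(n^k) memory is exactly the cost the O(n)-weight algorithm avoids.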

8.
An ensemble is a group of learners that work together as a committee to solve a problem. Existing ensemble learning algorithms often generate unnecessarily large ensembles, which consume extra computational resources and may degrade the generalization performance. Ensemble pruning algorithms aim to find a good subset of ensemble members to constitute a small ensemble, which saves computational resources and performs as well as, or better than, the unpruned ensemble. This paper introduces a probabilistic ensemble pruning algorithm that chooses a set of “sparse” combination weights, most of which are zero, to prune the ensemble. In order to obtain the set of sparse combination weights and satisfy the nonnegativity constraint on the combination weights, a left-truncated, nonnegative Gaussian prior is adopted over every combination weight. An expectation propagation (EP) algorithm is employed to approximate the posterior estimate of the weight vector. The leave-one-out (LOO) error is obtained as a by-product of EP training without extra computation and is a good indicator of the generalization error. Therefore, the LOO error is used together with the Bayesian evidence for model selection in this algorithm. An empirical study on several regression and classification benchmark data sets shows that our algorithm uses far fewer component learners but performs as well as, or better than, the unpruned ensemble. Our results are very competitive compared with other ensemble pruning algorithms.
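Expectation propagation with a truncated Gaussian prior does not fit in a few lines, so the sketch below is explicitly not the paper's method; it only conveys the interface of "sparse, nonnegative combination weights" using ordinary nonnegative least squares on held-out predictions. The threshold and the data layout are assumptions of mine.

```python
import numpy as np
from scipy.optimize import nnls

def prune_ensemble(member_preds, targets, threshold=1e-3):
    """member_preds: (n_samples, n_members) validation predictions; targets: (n_samples,).

    Returns the indices of the retained members and their combination weights.
    """
    weights, _ = nnls(np.asarray(member_preds, dtype=float),
                      np.asarray(targets, dtype=float))
    kept = np.flatnonzero(weights > threshold)   # members with non-negligible weight survive
    return kept, weights[kept]
```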

9.

According to the NMC Horizon Report (Johnson et al. in Horizon Report Europe: 2014 Schools Edition, Publications Office of the European Union, The New Media Consortium, Luxembourg, Austin, 2014 [1]), data-driven learning in combination with emerging academic areas such as learning analytics has the potential to tailor students' education to their needs (Johnson et al. 2014 [1]). Focusing on this aim, this article presents a web-based (training) platform for German-speaking users aged 8–12. Our objective is to support primary-school pupils, especially those who struggle with the acquisition of German orthography, with an innovative tool to improve their writing and spelling competencies. On this platform, which is free of charge, they can write and publish texts supported by a special feature, called the intelligent dictionary. It gives automatic feedback for correcting mistakes that occurred in the course of fulfilling a meaningful writing task. Consequently, pupils can focus on writing texts and are able to correct texts on their own before publishing them. Additionally, they gain deeper insights into German orthography. Exercises are recommended for further training based on the spelling mistakes that occurred. This article covers the background to German orthography and its teaching and learning, as well as details concerning the requirements for the platform and the user interface design. Further, combined with learning analytics, we expect to gain deeper insight into the process of spelling acquisition, which will support optimizing our exercises and providing better materials in the long run.


10.
We implemented the basic Winnow algorithm, the Balanced Winnow algorithm, and a Winnow algorithm with feedback learning, and applied them successfully to large-scale spam filtering. The three algorithms were compared experimentally on the SEWM2007 and SEWM2008 data sets. The experimental results show that the Winnow algorithm and its variants outperform the Logistic algorithm in both classification effectiveness and efficiency.
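As a companion to the basic Winnow sketch under item 1, here is a minimal Balanced Winnow sketch in the spam-filtering setting described above. Again this is my own illustrative code, not the implementation evaluated on SEWM2007/SEWM2008; the initial weights, promotion/demotion factors, and threshold are assumptions.

```python
def balanced_winnow(examples, n_features, alpha=1.5, beta=0.5,
                    threshold=1.0, epochs=5):
    """Balanced Winnow: a positive and a negative weight per binary feature.

    examples: list of (active_feature_indices, label) with label in {0, 1}.
    """
    w_pos = [2.0 * threshold / n_features] * n_features
    w_neg = [1.0 * threshold / n_features] * n_features
    for _ in range(epochs):
        for active, label in examples:
            score = sum(w_pos[i] - w_neg[i] for i in active)
            predicted = 1 if score >= threshold else 0
            if predicted == label:
                continue
            if label == 1:                 # missed spam: strengthen the positive part
                for i in active:
                    w_pos[i] *= alpha
                    w_neg[i] *= beta
            else:                          # false alarm: weaken the positive part
                for i in active:
                    w_pos[i] *= beta
                    w_neg[i] *= alpha
    return w_pos, w_neg
```

Keeping two weight vectors lets the effective weight of a feature become negative, which plain Winnow cannot express, and is one reason the balanced variant is popular for text filtering.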
