Similar Literature
 Found 18 similar documents (search time: 187 ms)
1.
Spam Filtering Based on Character-Level Language Models   Total citations: 1 (self: 1, other: 1)
Content-based filtering is one of the mainstream techniques for fighting spam. This paper first gives a brief survey of the techniques currently used in content-based spam filtering, then proposes applying character-level language models to the spam filtering task. Experiments compare the method against Naïve Bayes, SVM, and word-based language model approaches, and examine how different values of n and different feature selection schemes affect the filtering results. The results show that the character-level language model is simple to implement, performs very well, and meets the needs of large-scale online mail systems, giving it high practical value.
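The character-level approach described above can be sketched in a few lines: train one character n-gram model per class and assign each message to the class whose model gives it the higher likelihood. This is a minimal illustration with add-one smoothing, not the authors' implementation; the class and function names, and the unigram-over-n-grams simplification, are assumptions.

```python
import math
from collections import Counter

def char_ngrams(text, n):
    """All overlapping character n-grams of a string."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

class CharNgramLM:
    """Character n-gram 'language model' with add-one smoothing.
    (A simplification: a real n-gram LM conditions on the previous n-1 characters.)"""
    def __init__(self, n=3):
        self.n = n
        self.counts = Counter()
        self.total = 0

    def train(self, texts):
        for t in texts:
            grams = char_ngrams(t, self.n)
            self.counts.update(grams)
            self.total += len(grams)

    def log_prob(self, text):
        vocab = len(self.counts) + 1
        return sum(math.log((self.counts[g] + 1) / (self.total + vocab))
                   for g in char_ngrams(text, self.n))

def classify(text, spam_lm, ham_lm):
    """Assign the class whose model gives the message the higher likelihood."""
    return "spam" if spam_lm.log_prob(text) > ham_lm.log_prob(text) else "ham"
```

Because the features are raw character sequences, no tokenizer is needed, which is part of why the abstract calls the method simple to implement.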

2.
This entry surveys several spam filtering methods in common use and describes two content-based approaches, the Bayesian and Winnow algorithms, in detail. Existing studies of Chinese spam are each based on different corpora, so comparative analyses of the algorithms are lacking. Improved versions of both the Bayesian and Winnow algorithms were implemented and tested on a public email corpus from CCERT. The results show that both algorithms achieve good filtering performance.
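For readers unfamiliar with Winnow, the basic promotion/demotion update the entry refers to looks roughly like this (the textbook mistake-driven form, not the improved variant the authors implemented; the threshold of n/2 and the multiplier α = 2 are conventional defaults):

```python
class Winnow:
    """Winnow over binary features: multiplicative weight updates on mistakes."""
    def __init__(self, n_features, threshold=None, alpha=2.0):
        self.w = [1.0] * n_features
        self.theta = threshold if threshold is not None else n_features / 2
        self.alpha = alpha

    def predict(self, x):
        """x is a list of 0/1 feature indicators; fire if weighted sum crosses theta."""
        return 1 if sum(w * xi for w, xi in zip(self.w, x)) >= self.theta else 0

    def update(self, x, y):
        """On a mistake, promote (multiply by alpha) or demote (divide by alpha)
        the weights of the active features."""
        if self.predict(x) == y:
            return
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        for i, xi in enumerate(x):
            if xi:
                self.w[i] *= factor
```

The multiplicative update is what gives Winnow its well-known robustness to many irrelevant features, which matters in spam filtering where vocabularies are large.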

3.
Content-based spam filtering is a central problem in Internet security research, and Bayesian classification has drawn wide attention for its high accuracy on spam. Building on Naïve Bayes, this paper proposes an improved filtering algorithm that combines minimum-risk Bayesian decision making with Boosting, improving classification precision. Experiments show the algorithm performs better in mail filtering.
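The minimum-risk idea can be illustrated briefly: instead of labeling a message spam whenever P(spam | message) > 0.5, the decision weighs asymmetric costs, since discarding a legitimate mail is worse than letting one spam through. The cost values below are illustrative assumptions, not taken from the paper:

```python
def min_risk_decision(p_spam, cost_fp=9.0, cost_fn=1.0):
    """Bayes decision with asymmetric loss.
    cost_fp: cost of marking a legitimate mail as spam (false positive);
    cost_fn: cost of letting a spam through (false negative).
    Risk(decide spam) = cost_fp * P(ham); Risk(decide ham) = cost_fn * P(spam).
    Choose the action with the smaller expected risk."""
    p_ham = 1.0 - p_spam
    return "spam" if cost_fp * p_ham < cost_fn * p_spam else "ham"
```

With these costs a message needs a posterior above 0.9 to be rejected, which is how minimum-risk decisions trade a few missed spams for far fewer lost legitimate mails.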

4.
A Spam Filtering Algorithm Based on AIS and Bayesian Networks   Total citations: 3 (self: 0, other: 3)
Artificial immune systems (AIS) and Bayesian networks are combined and applied to spam filtering: a filtering algorithm based on the two is designed, the key problems in implementing it are analyzed and solved, and the affinity computation and the structural definitions of antigens and antibodies are given. Comparative experiments against the AdaBoost method proposed by Carreras were run on the PU1 bare corpus, and the results are reported.

5.
Spam is doing ever greater harm to e-mail applications, and machine-learning-based spam filtering has become a research hotspot in Internet applications. By analyzing existing machine-learning approaches to spam and taking the characteristics of Chinese information processing into account, this paper proposes, designs, and implements a Chinese spam filtering method based on the Support Vector Machine (SVM). Experiments show that, with limited samples, the SVM-based method achieves high accuracy and stability.

6.
Current image-spam filtering techniques mostly train on internationally common spam-image datasets, which do not match the characteristics of image spam inside China; the image data is not updated in real time, and reliance on a single classifier makes the filtering quality hard to guarantee. To address this, a database of domestic spam images was first built. Color, texture, and shape features were extracted, and a K-NN selection step identified the HSV color histogram as the best feature for training, testing, and comparing different classifiers. Three base classifiers, a rough-set-based K-NN, Naïve Bayes, and SVM, were then combined through serial iterative boosting into a strong ensemble classifier. The method filters domestic image spam effectively, raising precision to 97.3% and recall to 96.1% while cutting the false-positive rate to 2.7%.

7.
The spam filtering techniques in practical use today are not very effective; false positives and missed detections are especially problematic. Among existing methods, the simple Bayesian classifier based on probability statistics performs relatively well. To improve the classification accuracy and efficiency of spam filtering systems, this paper exploits the resource-sharing strengths of grid technology, improves the application mode of the Bayesian classification algorithm, and proposes a grid-based spam filtering system.

8.
A Rough-Set-Based Mail Filtering Algorithm with Decision-Rule Boundaries   Total citations: 1 (self: 0, other: 1)
To address the limited accuracy and stability of spam filtering, and the false positives and misses that filtering algorithms produce when classifying corpora, a rough-set-based mail filtering algorithm with decision-rule boundaries (RARM) is proposed. The algorithm analyzes the corpus directly with rough set theory and uses a heuristic to derive execution plans for three different decision rules, preserving reasonable classification accuracy even when a message's vocabulary is semantically ambiguous. In simulation, the algorithm's spam-filtering accuracy beats filtering algorithms based on the support vector machine (SVM), AdaBoost, and Bayesian classification.

9.
Wang Zuhui, Jiang Wei. 《计算机工程》 (Computer Engineering), 2009, 35(13): 188-189, 207
For filtering mixed Chinese and English spam, an SVM-based filtering method and a framework that fuses multiple classification features are proposed. Improving the representation of the SVM's linear kernel addresses the storage and computation costs. Automatic domain-term extraction strengthens the recognition of semantic units in spam and improves classification performance. Experiments on a large cross-language corpus show the SVM improves generalization by 6.13% over a Naïve Bayes model smoothed with the Good-Turing algorithm and improves classification precision by 8.18% over a maximum entropy model.

10.
The basic Winnow algorithm, Balanced Winnow, and a Winnow variant with feedback learning were implemented and successfully applied to large-scale spam filtering; the three algorithms were compared on the SEWM2007 and SEWM2008 datasets. The results show that Winnow and its variants beat the Logistic algorithm in both classification quality and efficiency.

11.
The performance of two online linear classifiers, the Perceptron and Littlestone's Winnow, is explored for two anti-spam filtering benchmark corpora: PU1 and Ling-Spam. We study the performance for varying numbers of features, along with three different feature selection methods: information gain (IG), document frequency (DF) and odds ratio. The size of the training set and the number of training iterations are also investigated for both classifiers. The experimental results show that both the Perceptron and Winnow perform much better when using IG or DF than when using odds ratio. It is further demonstrated that when using IG or DF, the classifiers are insensitive to the number of features and the number of training iterations, and not greatly sensitive to the size of the training set. Winnow is shown to slightly outperform the Perceptron. It is also demonstrated that both of these online classifiers perform much better than a standard Naïve Bayes method. The theoretical and implementation computational complexity of these two classifiers is very low, and they are very easily updated adaptively. They outperform most of the published results, while being significantly easier to train and adapt. The analysis and promising experimental results indicate that the Perceptron and Winnow are two very competitive classifiers for anti-spam filtering.
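As a concrete illustration of one of the feature selection methods compared above, information gain for a binary feature over binary-labeled documents can be computed like this (a minimal sketch; documents are represented as sets of tokens, and labels are 0/1):

```python
import math

def entropy(pos, neg):
    """Binary entropy in bits of a (pos, neg) count pair."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def information_gain(docs, labels, feature):
    """IG = H(class) - H(class | feature present/absent)."""
    n = len(docs)
    with_f = [y for d, y in zip(docs, labels) if feature in d]
    without = [y for d, y in zip(docs, labels) if feature not in d]
    base = entropy(sum(labels), n - sum(labels))
    cond = 0.0
    for subset in (with_f, without):
        if subset:
            cond += len(subset) / n * entropy(sum(subset), len(subset) - sum(subset))
    return base - cond
```

A feature that perfectly separates the classes gets the full class entropy as its gain, which is why IG ranks discriminative spam words highly while DF simply prefers frequent ones.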

12.
Boosting has been shown to improve the predictive performance of unstable learners such as decision trees, but not of stable learners like Support Vector Machines (SVM), k-nearest neighbors and Naive Bayes classifiers. In addition to the model stability problem, the high time complexity of some stable learners such as SVM prohibits them from generating multiple models to form an ensemble for large data sets. This paper introduces a simple method that not only enables Boosting to improve the predictive performance of stable learners, but also significantly reduces the computational time to generate an ensemble of stable learners such as SVM for large data sets that would otherwise be infeasible. The method proposes to build local models, instead of global models; and it is the first method, to the best of our knowledge, to solve the two problems in Boosting stable learners at the same time. We implement the method by using a decision tree to define local regions and build a local model for each local region. We show that this implementation of the proposed method enables successful Boosting of three types of stable learners: SVM, k-nearest neighbors and Naive Bayes classifiers.

13.
Single pass text classification by direct feature weighting   Total citations: 2 (self: 1, other: 1)
The Feature Weighting Classifier (FWC) is an efficient multi-class classification algorithm for text data that uses Information Gain to directly estimate per-class feature weights in the classifier. This classifier requires only a single pass over the dataset to compute the feature frequencies per class, is easy to implement, and has memory usage that is linear in the number of features. Results of experiments performed on 128 binary and multi-class text and web datasets show that FWC's performance is at least comparable to, and often better than, that of Naive Bayes, TWCNB, Winnow, Balanced Winnow and linear SVM. On a large-scale web dataset with 12,294 classes and 135,973 training instances, FWC trained in 13 s and yielded comparable classification performance to a state-of-the-art multi-class SVM implementation, which took over 15 min to train.

14.
Automatic Entity Relation Extraction   Total citations: 36 (self: 7, other: 36)
Entity relation extraction is an important research topic in information extraction. This paper applies two feature-vector-based machine learning algorithms, Winnow and the Support Vector Machine (SVM), to entity relation extraction on the training data of the 2004 ACE (Automatic Content Extraction) evaluation. With suitable feature selection for each algorithm, the best extraction results are obtained when the two words on either side of each entity are taken as features, giving weighted average F-scores of 73.08% for Winnow and 73.27% for SVM. With the same feature set, different learning algorithms thus differ little in final relation-recognition performance, so work on automatic entity relation extraction should concentrate on finding good features.

15.
Content-based spam filtering is a binary text categorization problem. To improve the performance of spam filtering, feature selection, an important and indispensable part of text categorization, also plays an important role. We propose a new method, named Bi-Test, which uses binomial hypothesis testing to estimate whether the probability of a feature belonging to spam satisfies a given threshold. We evaluated Bi-Test on six benchmark spam corpora (pu1, pu2, pu3, pua, lingspam and CSDMC2010), using two classification algorithms, Naïve Bayes (NB) and Support Vector Machines (SVM), and compared it with four well-known feature selection algorithms (information gain, χ²-statistic, improved Gini index and Poisson distribution). The experiments show that Bi-Test performs significantly better than the χ²-statistic and Poisson distribution, produces performance comparable to information gain and the improved Gini index in terms of the F1 measure when the Naïve Bayes classifier is used, and achieves performance comparable to the other methods when the SVM classifier is used. Moreover, Bi-Test executes faster than the other four algorithms.
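The binomial test at the heart of a Bi-Test-style selector can be sketched as follows. Here k of the n documents containing a feature are spam, and a one-sided test asks whether the feature's spam probability plausibly exceeds a threshold p0. The threshold, significance level, and function names are illustrative assumptions, not the paper's exact procedure:

```python
from math import comb

def bi_test_pvalue(k, n, p0=0.5):
    """One-sided binomial test: P(X >= k) for X ~ Binomial(n, p0).
    A small p-value is evidence that the feature's spam probability exceeds p0."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

def select_features(feature_stats, p0=0.5, alpha=0.05):
    """feature_stats maps feature -> (spam_docs_with_feature, total_docs_with_feature).
    Keep features whose test rejects the null at significance level alpha."""
    return [f for f, (k, n) in feature_stats.items()
            if bi_test_pvalue(k, n, p0) <= alpha]
```

Because the statistic needs only two counts per feature, such a selector can run faster than IG- or χ²-style methods that accumulate full contingency tables, consistent with the speed result reported above.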

16.
This entry studies an improved Naïve Bayes text classification algorithm that fuses the SVM and EM algorithms, and its application to spam filtering. Naïve Bayes cannot handle results arising from feature combinations, depends too heavily on the distribution of the sample space, and is inherently unstable, which drives up the algorithm's time complexity. To solve these problems, an improved Naïve Bayes algorithm based on SVM-EM is proposed, combining the strengths of three algorithms: the simplicity and efficiency of Naïve Bayes, EM's ability to fill in missing attributes, and the support vector machine. First, a nonlinear transformation and the structural risk minimization principle convert the classification task into a quadratic optimization problem; then EM fills in values so that Naïve Bayes's conditional-independence assumption can be met; finally, Naïve Bayes filters the mail, improving classification accuracy and stability. Simulation results show that, compared with traditional mail filtering algorithms, the method quickly finds an optimal subset of classification features and greatly improves the accuracy and stability of spam filtering.

17.
Classification, and automatic text classification in particular, has long been a hot topic and core technique in machine learning and data mining research, with methods such as Naïve Bayes and KNN receiving wide attention and rapid development in recent years. Grounded in statistical learning theory, this paper presents an SVM-based text classification algorithm and designs a corresponding spam filtering system. Experiments show that, compared with Naïve Bayes, the algorithm greatly improves classification precision and recall, making it worth wider application.

18.
This paper presents methods for discriminating banks according to their rate of Non-Performing Loans (NPLs), using Gaussian Bayes models and different approaches to multiclass Support Vector Machines (SVM). This classification problem involves many irrelevant variables and comparatively few training instances. New variable selection strategies are proposed, based on Gaussian marginal densities for the Bayesian models and ranking scores derived from the multiclass SVM. Results on both toy data and the real-life bank classification problem demonstrate a significant improvement in prediction performance using only a few variables. Moreover, the Support Vector Machine approaches are shown to be superior to the Gaussian Bayes models.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号