Similar Articles (20 results)
1.
Web page classification analyzes the content of a page and assigns it to one or more appropriate categories according to certain rules. The support vector machine (SVM) is a new and highly effective machine learning method developed on the basis of statistical learning theory; owing to its excellent learning performance, it has become a new research focus in the field of classification. This paper applies SVM theory to web page classification: pages are first preprocessed, their text is then subjected to feature extraction and vector representation, and finally a binary-tree multi-class SVM is applied to classify them. Experiments validate the algorithm and show that it achieves good classification results.
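The abstract gives no implementation details; as a rough illustration of the pipeline it describes (text preprocessing, feature vectors, multi-class classification built from binary SVMs), here is a minimal sketch using scikit-learn. The toy data and the one-vs-rest decomposition (used here in place of the paper's binary-tree decomposition) are assumptions.

```python
# Hypothetical sketch: TF-IDF features + SVM for multi-class web page classification.
# One-vs-rest stands in for the paper's binary-tree multi-class SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Assumed toy data: page text already stripped of HTML during preprocessing.
pages = ["stock market rises on earnings", "new football season kicks off",
         "central bank adjusts interest rates", "team wins championship final"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(
    TfidfVectorizer(),                 # feature extraction and vector representation
    OneVsRestClassifier(LinearSVC()),  # several binary SVMs combined for multi-class output
)
clf.fit(pages, labels)
print(clf.predict(["quarterly earnings beat expectations"]))
```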

2.
With the rapid growth of Web content, Web text classification has become a research hotspot. Traditional Web text classification largely ignores the large amount of noise in web pages during preprocessing, which affects classification results; in addition, the high dimensionality of the vector space model also strongly affects classification performance. This paper proposes a Web text classification method based on rough set theory: pages are first denoised, attribute reduction is then applied to the vector space model, and finally a classifier is constructed. Experiments show that the method not only reduces dimensionality but also improves classification results.

3.
With the rapid growth of network information resources, searching for and classifying topic-specific Web text has become an important problem in information processing. This paper builds a Web text search and classification system for the chemical engineering domain. On top of the Web documents collected by a crawler subsystem, the system uses a support vector machine to perform a binary classification that identifies Chinese chemical-engineering pages, and then uses the vector space model to classify those domain pages into multiple subcategories. Compared with general-purpose search engines, the system is faster, retrieves information more accurately, and is able to learn.

4.
In recent years many new crawling ideas have been proposed, and they share a common technique: focused crawling. By analyzing the region being searched, a focused crawler discovers the links most relevant to the topic and avoids visiting irrelevant regions of the Web, which saves a large amount of hardware and network resources and allows pages to be refreshed more quickly. To achieve this goal, this paper proposes two algorithms: a web page filtering algorithm based on multi-level classification, which experiments show to have high accuracy and markedly faster classification than ordinary classification algorithms; and a URL ranking algorithm based on Web structure, which makes full use of the structural characteristics of the Web and the distribution of pages.
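Neither algorithm is specified in detail in the abstract; the sketch below only illustrates the general shape of a focused crawler that filters fetched pages with a topic classifier and keeps a priority queue of candidate URLs. The scoring functions are placeholders (assumptions), not the paper's multi-level classifier or Web-structure ranking.

```python
# Hypothetical focused-crawler skeleton: a priority queue of URLs ordered by a
# relevance score, with fetched pages filtered by a topic classifier.
import heapq

def page_relevance(text):
    # Placeholder for the paper's multi-level page classification.
    return text.lower().count("chemistry") / (len(text.split()) + 1)

def url_priority(url, parent_score):
    # Placeholder for the paper's Web-structure-based URL ranking.
    return parent_score

def crawl(seeds, fetch, extract_links, max_pages=100, threshold=0.01):
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    seen, kept = set(seeds), []
    while frontier and len(kept) < max_pages:
        neg_score, url = heapq.heappop(frontier)
        text = fetch(url)
        score = page_relevance(text)
        if score < threshold:
            continue                            # skip irrelevant regions of the Web
        kept.append(url)
        for link in extract_links(text):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-url_priority(link, score), link))
    return kept
```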

5.
A Chinese Web Page Classification Algorithm Based on Projection Pursuit
With the rapid growth of Web information, users increasingly need automatic web page classifiers. To improve classification accuracy, this paper proposes a new Chinese web page classification algorithm based on projection pursuit (PP). A genetic algorithm is first used to find the best projection direction, web pages represented as n-dimensional vectors are then projected onto a one-dimensional space, and a KNN classifier is finally applied. This approach alleviates the "curse of dimensionality". Experimental results show that the proposed algorithm is feasible and effective.
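As a rough illustration only: the sketch below projects n-dimensional page vectors onto a single direction and classifies in that 1-D space with KNN. For simplicity it searches for the direction by random sampling rather than with the genetic algorithm used in the paper, and the scoring criterion (class separation of the projections) is also an assumption.

```python
# Hypothetical sketch of projection pursuit + KNN: find a 1-D projection that
# separates the classes well, then classify in the projected space.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                      # toy n-dimensional page vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # toy labels

def separation(direction, X, y):
    z = X @ direction                               # 1-D projections
    return abs(z[y == 0].mean() - z[y == 1].mean()) / (z.std() + 1e-9)

# Random search stands in for the paper's genetic algorithm.
candidates = rng.normal(size=(500, X.shape[1]))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
best = max(candidates, key=lambda d: separation(d, X, y))

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit((X @ best).reshape(-1, 1), y)               # KNN in the 1-D projected space
print(knn.score((X @ best).reshape(-1, 1), y))
```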

6.
Application of Rough Set Theory and DT_SVM to Web Information Filtering
衣治安  刘杨 《计算机工程》2008,34(15):208-210
To address Web information filtering, this paper proposes a new method that combines rough set theory with a decision-tree SVM (DT_SVM) for data classification and filtering. The method applies an improved heuristic relative attribute reduction algorithm to eliminate redundancy and reduce the dimensionality of the sample space, and trains the SVM by combining clustering with DT_SVM, converting the multi-class problem into binary classification and thereby improving training speed and filtering precision. Experiments show that the algorithm achieves high recall and precision, demonstrating the advantage of combining rough set theory with DT_SVM.

7.
Filtering Objectionable Web Pages Based on Rough Sets and Bayesian Decision Theory
Filtering objectionable web pages is a two-class web page classification problem. This paper proposes a classification and filtering method that combines rough sets with Bayesian decision theory: the discernibility matrix and discernibility function of rough set theory are first used to obtain an attribute reduction for the page classification decision, and Bayesian decision theory is then used to classify and filter pages. Simulation experiments show that the method has low overhead and high filtering accuracy in an objectionable-page filtering system, and is therefore of practical engineering value for rapidly filtering objectionable web pages.

8.
Real-time web page classification is an important problem for focused crawlers, but most existing topic feature extraction methods are designed for offline classification and their performance does not meet application requirements. This paper first extends the node types of the tag-tree representation model DocView and uses them as a weighting factor, and then proposes a topic feature extraction algorithm for Web texts and text collections oriented toward real-time web page classification. Experimental results show that the algorithm improves accuracy by 31% and reduces topic drift by more than half, satisfying application requirements. A new performance evaluation model for topic feature extraction is also proposed.

9.
A Minimal Reduction Algorithm Based on Rough Sets
With the development of computer technology, massive amounts of data are being produced, and extracting useful information from such data is an important problem. Rough set theory, a new data analysis method, defines the concepts of vagueness and uncertainty in terms of classification and provides a new mathematical tool for handling uncertain and imprecise problems. This paper first reviews the basic concepts of rough sets and then, on that basis, proposes a new reduction algorithm based on rough set theory.
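The abstract does not describe the reduction algorithm itself; for orientation, here is a minimal sketch of a common greedy reduction scheme based on the rough-set dependency degree (the fraction of objects in the positive region). The toy decision table and the greedy stopping rule are assumptions, not the paper's algorithm.

```python
# Hypothetical greedy attribute reduction based on the rough-set dependency degree:
# keep adding the attribute that most enlarges the positive region until the full
# dependency of the decision attribute is reached.
from itertools import groupby

def dependency(rows, labels, attrs):
    """Fraction of rows whose equivalence class (w.r.t. attrs) is label-pure."""
    if not attrs:
        return 1.0 if len(set(labels)) == 1 else 0.0
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    pos = 0
    for _, grp in groupby(idx, key=key):
        grp = list(grp)
        if len({labels[i] for i in grp}) == 1:
            pos += len(grp)
    return pos / len(rows)

def greedy_reduct(rows, labels):
    all_attrs = list(range(len(rows[0])))
    target = dependency(rows, labels, all_attrs)
    reduct = []
    while dependency(rows, labels, reduct) < target:
        best = max(set(all_attrs) - set(reduct),
                   key=lambda a: dependency(rows, labels, reduct + [a]))
        reduct.append(best)
    return reduct

rows = [(1, 0, 1), (1, 1, 1), (0, 0, 0), (0, 1, 1)]   # toy decision table
labels = ['y', 'y', 'n', 'n']
print(greedy_reduct(rows, labels))                     # e.g. [0]
```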

10.
A Fully Automatic Method for Generating Wrappers for Web Information Extraction
Web page information extraction has attracted wide attention in recent years, and obtaining the main data from large numbers of web pages as quickly and accurately as possible has become a research focus in this area. This paper proposes a fully automatic method for generating web information extraction wrappers. The method exploits the structured, hierarchical nature of web page design templates, applying a link classification algorithm and a page structure separation algorithm to extract the individual information units of a page and output the corresponding wrapper, which can then be used to extract information automatically from pages of the same type. Experimental results show that the method automatically extracts both strictly structured and loosely structured information from web pages with very high accuracy.

11.
With the development of mobile technology, users' browsing habits have gradually shifted from pure information retrieval to active recommendation. Mapping users' interests to web content has become increasingly difficult given the volume and variety of web pages. Large news portals and social media companies hire more editors to label new concepts and words, and use computing servers with larger memory to handle massive document classification based on traditional supervised or semi-supervised machine learning methods. This paper provides an optimized classification algorithm for massive web page classification using semantic networks such as Wikipedia and WordNet. We use the Wikipedia data set and initialize a few category entity words as class words. A weight estimation algorithm based on the depth and breadth of the Wikipedia network is used to calculate the class weight of all Wikipedia entity words. A kinship-relation association based on the content similarity of entities is then proposed to mitigate the imbalance that arises when a category node inherits probability from multiple parents. Keywords in a web page are extracted from the title and main text using N-grams with Wikipedia entity words, and a Bayesian classifier is used to estimate the page's class probability. Experimental results show that the proposed method achieves good scalability, robustness, and reliability for massive web pages.

12.
A Chinese Web Page Classifier Combining Support Vector Machines with Unsupervised Clustering
This paper proposes a new classification algorithm that combines support vector machines with unsupervised clustering, together with a new web page representation, and applies them to web page classification. The algorithm first clusters the positive and negative examples in the training set separately using unsupervised clustering, then selects representative examples to train an SVM classifier. Any web page can then be classified either by the unsupervised clustering method or by the SVM classifier, depending on its distance to the cluster centers. The algorithm exploits both the high accuracy of SVMs and the speed of unsupervised clustering; experiments show that it achieves not only high training efficiency but also high accuracy.
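As a rough sketch of the combination described (cluster the positive and negative training examples, train an SVM, and route each test page either to the nearest cluster center or to the SVM depending on distance): the clustering method (k-means), the distance margin, and training the SVM on all examples rather than selected ones are assumptions.

```python
# Hypothetical sketch of combining unsupervised clustering with an SVM:
# pages clearly closer to one side are labelled by the nearest cluster center,
# ambiguous pages are passed to the SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X_pos = rng.normal(loc=+2.0, size=(100, 20))    # toy positive page vectors
X_neg = rng.normal(loc=-2.0, size=(100, 20))    # toy negative page vectors
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [0] * 100)

# Cluster positives and negatives separately.
km_pos = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_pos)
km_neg = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_neg)

svm = SVC(kernel="linear").fit(X, y)   # in the paper only selected examples train the SVM

def classify(x, margin=1.0):
    d_pos = np.linalg.norm(km_pos.cluster_centers_ - x, axis=1).min()
    d_neg = np.linalg.norm(km_neg.cluster_centers_ - x, axis=1).min()
    if abs(d_pos - d_neg) > margin:       # clearly closer to one side: use clustering
        return int(d_pos < d_neg)
    return int(svm.predict(x.reshape(1, -1))[0])   # ambiguous: fall back to the SVM

print(classify(X[0]), classify(X[150]))
```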

13.
As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently fed into a machine learning algorithm that performs classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information.

14.
Currently, web spamming is a serious problem for search engines. It not only degrades the quality of search results by intentionally boosting undesirable web pages to users, but also causes the search engine to waste a significant amount of computational and storage resources in manipulating useless information. In this paper, we present a novel ensemble classifier for web spam detection which combines the clonal selection algorithm for feature selection and under-sampling for data balancing. This web spam detection system is called USCS. The USCS ensemble classifiers can automatically sample and select sub-classifiers. First, the system converts the imbalanced training dataset into several balanced datasets using the under-sampling method. Second, it automatically selects several optimal feature subsets for each sub-classifier using a customized clonal selection algorithm. Third, it builds several C4.5 decision tree sub-classifiers from these balanced datasets based on the selected features. Finally, these sub-classifiers are used to construct an ensemble decision tree classifier which is applied to classify the examples in the testing data. Experiments on the WEBSPAM-UK2006 dataset show that our proposed approach, the USCS ensemble web spam classifier, achieves significantly better classification performance than several baseline systems and state-of-the-art approaches.
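USCS itself combines clonal selection for feature selection with under-sampling; the sketch below shows only the simpler skeleton of that design (under-sampling into several balanced subsets and training one decision tree per subset, with majority voting at prediction time). The random feature subsets stand in for clonal selection and are an assumption, and scikit-learn's CART tree stands in for C4.5.

```python
# Hypothetical skeleton of an under-sampling ensemble: several balanced subsets,
# one decision tree per subset on a random feature subset, majority vote at test time.
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # CART here, not C4.5 as in the paper

def train_ensemble(X, y, n_members=5, n_features=10, rng=np.random.default_rng(0)):
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    members = []
    for _ in range(n_members):
        sampled = rng.choice(majority, size=len(minority), replace=False)  # under-sample
        idx = np.concatenate([minority, sampled])
        feats = rng.choice(X.shape[1], size=n_features, replace=False)     # stand-in for clonal selection
        tree = DecisionTreeClassifier(random_state=0).fit(X[np.ix_(idx, feats)], y[idx])
        members.append((feats, tree))
    return members

def predict(members, X):
    votes = np.mean([tree.predict(X[:, feats]) for feats, tree in members], axis=0)
    return (votes >= 0.5).astype(int)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))
y = (rng.random(300) < 0.1).astype(int)     # imbalanced toy labels
members = train_ensemble(X, y)
print(predict(members, X[:5]))
```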

15.
Automatic Web text classification has important applications in many areas, such as information retrieval and news categorization. The decision tree is a simple and widely used classification method with many advantages, including high classification accuracy and fast classification speed. This paper studies the basic method and process of building a web page classifier with a C4.5 decision tree and proposes a framework for a C4.5-based web page classifier. On this basis, a web page classifier for use in a web crawler is implemented; experimental results show that the algorithm is very effective.

16.
Chinese Web Index Page Recommendation Based on Multi-Instance Learning
黎铭  薛晓冰  周志华 《软件学报》2004,15(9):1328-1335
Multi-instance learning offers a new approach to Chinese Web mining. This paper introduces Chinese Web index page recommendation as a special Web mining task and transforms it into a multi-instance learning problem. Experimental results on real-world data sets show that the method solves the problem effectively.

17.
Several studies have concentrated on the generation of wrappers for web data sources. As wrappers can be easily described as grammars, the grammatical inference heritage could play a significant role in this research field. Recent results have identified a new subclass of regular languages, called prefix mark-up languages, that nicely abstract the structures usually found in HTML pages of large web sites. This class has been proven to be identifiable in the limit, and a PTIME unsupervised learning algorithm has been previously developed. Unfortunately, many real-life web pages do not fall in this class of languages. In this article we analyze the roots of the problem and propose a technique to transform pages in order to bring them into the class of prefix mark-up languages. In this way, we obtain a practical solution without abandoning the formal background defined within the grammatical inference framework. We report on experiments conducted on real-life web pages to evaluate the approach; the results demonstrate the effectiveness of the presented techniques.

18.
An Improved PageRank Algorithm Based on Semantic Similarity
PageRank is an algorithm for ranking web pages that evaluates page importance from the mutual link relationships between pages. Because it considers only the link structure between pages and ignores their relevance to the topic, it is prone to topic drift. After analyzing the original PageRank algorithm, this paper presents an improved PageRank algorithm based on semantic similarity. The algorithm computes a page's PageRank value according to both the page's link structure and its main content; it does not increase the time or space complexity of the algorithm, yet greatly reduces topic drift, thereby improving query efficiency and quality.
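A minimal sketch of a topic-sensitive PageRank variant is given below: link weights and the teleport distribution are scaled by a page-to-topic similarity score so that relevant pages receive more of the rank mass. The toy similarity scores, damping factor, and the specific weighting scheme are assumptions, not necessarily what the paper uses.

```python
# Hypothetical sketch: PageRank where the random-surfer jump and the link weights
# are biased by each page's semantic similarity to the query topic.
import numpy as np

def topic_pagerank(links, sim, d=0.85, iters=50):
    """links[i] = list of pages i points to; sim[i] = similarity of page i to the topic."""
    n = len(links)
    sim = np.asarray(sim, dtype=float)
    jump = sim / sim.sum()                       # topic-biased teleport distribution
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        new = (1 - d) * jump
        for i, outs in enumerate(links):
            if not outs:
                new += d * rank[i] * jump        # dangling page: redistribute by topic
                continue
            w = sim[outs] / sim[outs].sum()      # weight out-links by target similarity
            new[outs] += d * rank[i] * w
        rank = new
    return rank

links = [[1, 2], [2], [0], [0, 2]]               # toy link graph
sim = [0.9, 0.1, 0.8, 0.2]                       # toy topic-similarity scores
print(topic_pagerank(links, sim).round(3))
```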

19.
The number of Internet users and the number of web pages being added to the WWW increase dramatically every day. It is therefore necessary to classify web pages into web directories automatically and efficiently. This helps search engines provide users with relevant and fast retrieval results. As web pages are represented by thousands of features, feature selection helps web page classifiers resolve this large-scale dimensionality problem. This paper proposes a new feature selection method using Ward's minimum variance measure. The measure is first used to identify clusters of redundant features in a web page. In each cluster, the best representative features are retained and the others are eliminated. Removing such redundant features helps minimize resource utilization during classification. The proposed method is compared with other common feature selection methods. Experiments on a benchmark data set, namely WebKB, show that the proposed method performs better than most of the other feature selection methods in terms of reducing the number of features and the classifier modeling time.
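As an illustration only: the sketch below clusters features with Ward's minimum-variance linkage and keeps one representative feature per cluster (the one most correlated with the class), which mirrors the general idea of removing redundant features. The representative-selection rule and the number of clusters are assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of Ward-based feature selection: cluster correlated features
# with Ward's minimum-variance linkage, keep one representative per cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def ward_feature_selection(X, y, n_clusters=10):
    Z = linkage(X.T, method="ward")                    # cluster the feature vectors
    groups = fcluster(Z, t=n_clusters, criterion="maxclust")
    selected = []
    for g in np.unique(groups):
        members = np.where(groups == g)[0]
        # Representative: the feature in the cluster most correlated with the class label.
        corr = [abs(np.corrcoef(X[:, m], y)[0, 1]) for m in members]
        selected.append(members[int(np.argmax(corr))])
    return sorted(selected)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 60))                         # toy page feature matrix
y = (X[:, 0] + X[:, 5] > 0).astype(int)
print(ward_feature_selection(X, y))                    # indices of retained features
```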

20.
This paper uses a feature selection method that combines document frequency (DF) with the CHI statistic to extract features from web pages in the domain of teaching Chinese as a foreign language, and on that basis builds a two-step topic relevance classifier that combines page titles with body text. Using this classifier to crawl pages on topics related to teaching Chinese as a foreign language, experiments show that both efficiency and recall are considerably higher than with traditional classifiers. The classifier is currently used to provide data sources for building a large corpus for teaching Chinese as a foreign language.
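The exact way DF and CHI are combined is not given in the abstract; the sketch below simply filters rare terms by document frequency and then ranks the remaining terms by the chi-square statistic against the class labels. The thresholds and the toy corpus are assumptions.

```python
# Hypothetical sketch of DF + CHI feature selection: drop low-document-frequency
# terms, then keep the terms with the highest chi-square score w.r.t. the class.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

docs = ["learn chinese grammar online", "chinese vocabulary flashcards for learners",
        "football scores and league tables", "latest football transfer rumours"]
labels = np.array([1, 1, 0, 0])                 # 1 = on-topic, 0 = off-topic

vec = CountVectorizer(min_df=1)                 # min_df acts as the DF threshold
X = vec.fit_transform(docs)

scores, _ = chi2(X, labels)                     # CHI statistic per term
top = np.argsort(scores)[::-1][:5]
print([vec.get_feature_names_out()[i] for i in top])
```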
