20 similar documents found (search time: 109 ms)
1.
2.
As the volume of Web information expands rapidly, Web text classification has become a current research focus. Traditional Web text classification largely ignores the abundant noise in web pages during preprocessing, which affects classification results; in addition, the high dimensionality of the text vector space model also strongly affects classification performance. This paper proposes a Web text classification method based on rough set theory: web pages are first denoised, attribute reduction is then applied to the vector space model, and finally a classifier is constructed. Experiments show that the method not only reduces dimensionality but also improves classification results.
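The vector space model mentioned in this abstract represents each page as a weighted term vector, typically with TF-IDF weights. A minimal sketch follows; the two toy documents are invented for illustration and are not from the paper:

```python
import math

def tfidf_vectors(docs):
    """Represent each document as a dict mapping term -> tf * idf weight."""
    n = len(docs)
    df = {}  # document frequency of each term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term])
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = [
    ["web", "page", "noise", "noise"],
    ["web", "text", "classification"],
]
vecs = tfidf_vectors(docs)
# "web" occurs in every document, so its idf -- and hence its weight -- is zero.
print(vecs[0]["web"], vecs[0]["noise"])
```

Terms that occur everywhere get zero weight, which is one reason attribute reduction on top of such vectors can shrink the model without losing discriminative power.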
3.
With the rapid growth of online information resources, topic-oriented Web text search and classification has become an important problem in information processing. This paper builds a Web text search and classification system for the chemical engineering domain. On top of the Web documents collected by a crawler subsystem, the system uses a support vector machine to perform binary classification of web pages and identify Chinese pages in the chemical engineering field; it then applies a vector space model to classify these domain pages into multiple subclasses. Compared with general-purpose search engines, the system is faster, retrieves information more accurately, and is able to learn.
4.
5.
6.
7.
Filtering Objectionable Web Pages Based on Rough Sets and Bayesian Decision Theory (Cited: 1; self-citations: 0; other citations: 1)
Filtering objectionable web pages is a two-class web page classification problem. This paper proposes a classification and filtering method that combines rough sets with Bayesian decision theory: the discernibility matrix and discernibility function of rough set theory are first used to obtain an attribute reduction for the page classification decision; Bayesian decision theory is then applied to classify and filter pages. Simulation experiments show that the method has low overhead and high filtering accuracy in an objectionable-page filtering system, and is therefore of practical engineering value for fast filtering of objectionable web pages.
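The discernibility-matrix step described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the toy decision table and the attribute names are invented for the example:

```python
from itertools import combinations

# Toy decision table: each row is a page, the first three columns are
# condition attributes, and the last column is the decision
# (1 = objectionable, 0 = normal).  All values are invented.
attributes = ["has_ads", "img_ratio_high", "few_outlinks"]
table = [
    (1, 1, 0, 1),
    (1, 0, 0, 0),
    (0, 1, 1, 1),
    (0, 0, 1, 0),
]

def discernibility_matrix(rows):
    """For every pair of rows with different decisions, record the set of
    condition attributes on which the two rows differ."""
    entries = []
    for r1, r2 in combinations(rows, 2):
        if r1[-1] != r2[-1]:  # decisions differ
            diff = {i for i in range(len(r1) - 1) if r1[i] != r2[i]}
            entries.append(diff)
    return entries

def is_reduct(attr_subset, entries):
    """A subset of attributes is a reduct candidate if it intersects
    every entry of the discernibility matrix."""
    return all(attr_subset & e for e in entries)

def minimal_reduct(n_attrs, entries):
    """Search subsets by increasing size; the first hit is minimal."""
    for size in range(1, n_attrs + 1):
        for subset in combinations(range(n_attrs), size):
            if is_reduct(set(subset), entries):
                return set(subset)
    return set(range(n_attrs))

entries = discernibility_matrix(table)
reduct = minimal_reduct(len(attributes), entries)
print(sorted(attributes[i] for i in reduct))
```

On this toy table a single attribute already discerns all decision pairs, so the reduction keeps one of the three original attributes.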
8.
Real-time web page classification is an important problem for focused crawlers, and most existing topic feature extraction methods, being designed for offline classification, do not meet the performance requirements of this application. This paper first extends the node types of the tag-tree representation model DocView and uses them as a weighting factor, then proposes a topic feature extraction algorithm for Web texts and text collections aimed at real-time page classification. Experimental results show that the algorithm improves accuracy by 31% and reduces topic drift by more than half, meeting the application requirements. A new performance evaluation model for topic feature extraction is also proposed.
9.
A Minimal Reduction Algorithm Based on Rough Sets (Cited: 4; self-citations: 6; other citations: 4)
With the development of computer technology, massive amounts of data are being produced, and how to extract useful information from these data is an important problem. Rough set theory is a data analysis method proposed for this purpose: it defines the notions of vagueness and uncertainty in terms of classification, providing a new mathematical tool for handling imprecise and uncertain problems. This paper first reviews the basic concepts of rough set theory, and on that basis proposes a new reduction algorithm.
10.
A Fully Automatic Method for Generating Web Information Extraction Wrappers (Cited: 6; self-citations: 2; other citations: 4)
Web page information extraction has attracted wide attention in recent years; how to obtain the main data from large numbers of web pages as quickly and accurately as possible is a central research topic in the field. This paper proposes a fully automatic method for generating web information extraction wrappers. The method exploits the structured, hierarchical nature of web page templates, applying a link classification algorithm and a page structure separation algorithm to extract the individual information units of a page and output the corresponding wrapper. The wrapper can then extract information automatically from pages of the same type. Experimental results show that the method handles both strictly structured and loosely structured information in web pages, with very high extraction accuracy.
11.
With the development of mobile technology, users' browsing habits have gradually shifted from pure information retrieval to active recommendation. Mapping user interests to web content through classification has become increasingly difficult with the volume and variety of web pages. Large news portals and social media companies hire more editors to label new concepts and words, and use computing servers with larger memory to handle massive document classification based on traditional supervised or semi-supervised machine learning methods. This paper provides an optimized classification algorithm for massive web page classification using semantic networks such as Wikipedia and WordNet. We used the Wikipedia data set and initialized a few category entity words as class words. A weight estimation algorithm based on the depth and breadth of the Wikipedia network is used to calculate the class weight of all Wikipedia entity words. A kinship-relation association based on the content similarity of entities is then proposed to mitigate the imbalance that arises when a category node inherits probability from multiple parents. The keywords of a web page are extracted from the title and the main text using N-grams matched against Wikipedia entity words, and a Bayesian classifier is used to estimate the page's class probability. Experimental results showed that the proposed method achieves good scalability, robustness and reliability for massive web pages.
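The final Bayesian step in this abstract — estimating a page's class probability from its extracted keywords — can be sketched as a standard naive Bayes scorer. The class priors, keyword likelihoods and vocabulary below are made-up toy values, not weights from the paper's Wikipedia-based estimation:

```python
import math

# Toy per-class likelihoods for a few "entity words" (in the paper these
# weights come from Wikipedia's category network; here they are invented).
likelihood = {
    "sports":  {"match": 0.4, "team": 0.4, "stock": 0.1, "price": 0.1},
    "finance": {"match": 0.1, "team": 0.1, "stock": 0.4, "price": 0.4},
}
prior = {"sports": 0.5, "finance": 0.5}

def classify(keywords):
    """Naive Bayes in log space: argmax_c  log P(c) + sum_w log P(w|c).
    Unknown words get a tiny floor probability instead of zero."""
    scores = {}
    for cls in prior:
        score = math.log(prior[cls])
        for w in keywords:
            score += math.log(likelihood[cls].get(w, 1e-6))
        scores[cls] = score
    return max(scores, key=scores.get)

print(classify(["stock", "price", "team"]))
```

Working in log space avoids numeric underflow when a page contributes many keywords.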
12.
13.
As the World Wide Web develops at an unprecedented pace, identifying web page genre has recently attracted increasing attention because of its importance in web search. A common approach for identifying genre is to use textual features that can be extracted directly from a web page, that is, On-Page features. The extracted features are subsequently inputted into a machine learning algorithm that will perform classification. However, these approaches may be ineffective when the web page contains limited textual information (e.g., the page is full of images). In this study, we address genre identification of web pages under the aforementioned situation. We propose a framework that uses On-Page features while simultaneously considering information in neighboring pages, that is, the pages that are connected to the original page by backward and forward links. We first introduce a graph-based model called GenreSim, which selects an appropriate set of neighboring pages. We then construct a multiple classifier combination module that utilizes information from the selected neighboring pages and On-Page features to improve performance in genre identification. Experiments are conducted on well-known corpora, and favorable results indicate that our proposed framework is effective, particularly in identifying web pages with limited textual information.
14.
Xiao-Yong Lu, Mu-Sheng Chen, Jheng-Long Wu, Pei-Chan Chang, Meng-Hui Chen 《Pattern Analysis & Applications》2018,21(3):741-754
Currently, web spamming is a serious problem for search engines. It not only degrades the quality of search results by intentionally boosting undesirable web pages to users, but also causes the search engine to waste a significant amount of computational and storage resources in manipulating useless information. In this paper, we present a novel ensemble classifier for web spam detection which combines the clonal selection algorithm for feature selection with under-sampling for data balancing. This web spam detection system is called USCS. The USCS ensemble classifier can automatically sample and select sub-classifiers. First, the system converts the imbalanced training dataset into several balanced datasets using the under-sampling method. Second, it automatically selects several optimal feature subsets for each sub-classifier using a customized clonal selection algorithm. Third, it builds several C4.5 decision tree sub-classifiers from these balanced datasets using the selected features. Finally, these sub-classifiers are combined into an ensemble decision tree classifier which is applied to classify the examples in the testing data. Experiments on the WEBSPAM-UK2006 web spam dataset show that the proposed USCS ensemble web spam classifier achieves significantly better classification performance than several baseline systems and state-of-the-art approaches.
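The under-sampling step of USCS — splitting an imbalanced training set into several balanced subsets — can be sketched as below. The data, labels and number of subsets are toy values invented for the example, and the clonal selection and C4.5 stages are omitted:

```python
import random

def undersample_splits(majority, minority, n_subsets, seed=0):
    """Build n_subsets balanced datasets: each keeps every minority
    example and pairs it with an equally sized random sample drawn
    from the majority class."""
    rng = random.Random(seed)
    subsets = []
    for _ in range(n_subsets):
        sampled = rng.sample(majority, len(minority))
        subsets.append(sampled + minority)
    return subsets

# Toy data: 100 "ham" pages vs 10 "spam" pages.
majority = [("ham", i) for i in range(100)]
minority = [("spam", i) for i in range(10)]

balanced = undersample_splits(majority, minority, n_subsets=5)
print(len(balanced), [len(s) for s in balanced])
```

Each balanced subset would then train one sub-classifier, and the ensemble votes over all of them; drawing a fresh majority sample per subset lets the ensemble still see most of the majority class overall.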
15.
16.
17.
Valter Crescenzi 《Applied Artificial Intelligence》2013,27(1-2):21-52
Several studies have concentrated on the generation of wrappers for web data sources. As wrappers can be easily described as grammars, the grammatical inference heritage could play a significant role in this research field. Recent results have identified a new subclass of regular languages, called prefix mark-up languages, that nicely abstracts the structures usually found in HTML pages of large web sites. This class has been proven to be identifiable in the limit, and a PTIME unsupervised learning algorithm has been previously developed. Unfortunately, many real-life web pages do not fall into this class of languages. In this article we analyze the roots of the problem and propose a technique to transform pages in order to bring them into the class of prefix mark-up languages. In this way, we obtain a practical solution without giving up the formal background defined within the grammatical inference framework. We report on experiments conducted on real-life web pages to evaluate the approach; the results demonstrate the effectiveness of the presented techniques.
18.
PageRank is an algorithm for ranking web pages that evaluates a page's importance from the citation relationships among pages. Because it considers only the link structure between pages and ignores their relevance to the topic, it is prone to topic drift. Building on an analysis of the original PageRank algorithm, this paper presents an improved PageRank algorithm based on semantic similarity. The improved algorithm computes PageRank values from both page structure and main page content; it does not increase the time or space complexity of the original algorithm, yet greatly reduces topic drift, thereby improving query efficiency and quality.
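The original PageRank iteration that this abstract builds on can be sketched as plain power iteration over a link graph. The tiny graph below is invented for illustration, and the semantic-similarity extension the paper adds is not included:

```python
def pagerank(links, d=0.85, iterations=50):
    """Power iteration for PageRank.  links maps each page to the list
    of pages it links to; d is the damping factor."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = rank[p] / len(outs)
                for q in outs:
                    new[q] += d * share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Tiny link graph: A and C both point to B, so B should rank highest.
graph = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
rank = pagerank(graph)
print(max(rank, key=rank.get))
```

The topic-sensitive variants described in the abstract typically replace the uniform `(1 - d) / n` teleport term or the uniform link shares with weights derived from content similarity.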
19.
The number of Internet users and the number of web pages added to the WWW increase dramatically every day. It is therefore necessary to classify web pages into web directories automatically and efficiently. This helps search engines provide users with relevant results quickly. As web pages are represented by thousands of features, feature selection helps web page classifiers resolve this large-scale dimensionality problem. This paper proposes a new feature selection method using Ward's minimum variance measure. The measure is first used to identify clusters of redundant features in a web page. In each cluster, the best representative features are retained and the others are eliminated. Removing such redundant features helps minimize resource utilization during classification. The proposed feature selection method is compared with other common feature selection methods. Experiments on a benchmark data set, namely WebKB, show that the proposed method performs better than most of the other feature selection methods in terms of reducing both the number of features and the classifier modeling time.
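The clustering of redundant features described above can be sketched with a simple agglomerative loop using Ward's merge cost. This is a rough illustration under invented toy data, not the paper's method; each feature is treated as a vector of its values over the documents:

```python
def ward_cost(ca, cb):
    """Ward's minimum-variance merge cost between two clusters of
    feature vectors: |A||B|/(|A|+|B|) * squared centroid distance."""
    na, nb = len(ca), len(cb)
    dim = len(ca[0])
    ma = [sum(v[i] for v in ca) / na for i in range(dim)]
    mb = [sum(v[i] for v in cb) / nb for i in range(dim)]
    sq = sum((x - y) ** 2 for x, y in zip(ma, mb))
    return na * nb / (na + nb) * sq

def cluster_features(features, k):
    """Agglomerative clustering: start with one cluster per feature and
    repeatedly merge the cheapest pair until k clusters remain."""
    clusters = [([name], [vec]) for name, vec in features.items()]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                cost = ward_cost(clusters[i][1], clusters[j][1])
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        merged = (clusters[i][0] + clusters[j][0],
                  clusters[i][1] + clusters[j][1])
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)]
        clusters.append(merged)
    return [sorted(names) for names, _ in clusters]

# Toy term-frequency vectors over four documents: "ball" and "goal"
# behave almost identically, so they should land in one cluster, and
# a single representative from that cluster could then be kept.
features = {
    "ball":  [3, 2, 0, 0],
    "goal":  [3, 2, 0, 1],
    "stock": [0, 0, 4, 3],
}
groups = cluster_features(features, k=2)
print(groups)
```

Keeping one representative feature per resulting cluster is what shrinks the feature set while preserving most of its discriminative information.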
20.
Using a feature selection method that combines document frequency (DF) with the CHI statistic, this paper extracts features from web pages in the field of teaching Chinese as a foreign language, and on this basis builds a two-step topic relevance classifier that combines page titles with body text. The classifier is used to crawl web pages on topics related to teaching Chinese as a foreign language; experiments show that both efficiency and recall improve considerably over traditional classifiers. The classifier is currently used to supply data sources for the construction of a large corpus for teaching Chinese as a foreign language.
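The CHI (chi-square) statistic used in the feature selection step above scores how strongly a term is associated with a category from a 2x2 contingency table of document counts. A minimal sketch, with invented toy counts rather than the paper's corpus:

```python
def chi_square(n_tc, n_t, n_c, n):
    """Chi-square statistic for term t and category c.
    n_tc: docs in c containing t; n_t: docs containing t;
    n_c:  docs in c;              n:   total docs."""
    a = n_tc                      # t present, in c
    b = n_t - n_tc                # t present, not in c
    c_ = n_c - n_tc               # t absent, in c
    d = n - n_t - n_c + n_tc      # t absent, not in c
    num = n * (a * d - b * c_) ** 2
    den = (a + b) * (c_ + d) * (a + c_) * (b + d)
    return num / den if den else 0.0

# Toy corpus of 100 pages, 20 in the target category.
# A term appearing in 15 pages, 12 of them in the category, is strongly
# associated; one spread evenly across the corpus is not.
strong = chi_square(n_tc=12, n_t=15, n_c=20, n=100)
weak = chi_square(n_tc=10, n_t=50, n_c=20, n=100)
print(strong > weak)
```

Combining CHI with a DF threshold, as the abstract describes, first discards very rare terms whose chi-square scores would be unreliable, then ranks the survivors by association strength.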