Found 20 similar documents.
1.
2.
3.
4.
5.
6.
The pressing need to retrieve desired information from the Web quickly and accurately gave rise to vertical (domain-specific) search engine technology. In a vertical search engine, the web crawler, which collects information for a specific domain from the Web, is a core component. This paper studies the crawling of Chinese domain-specific web pages. Using KL divergence, it verifies that page content and link context differ in distribution; on that basis it proposes a crawling algorithm for Chinese domain-specific pages guided by a Naive Bayes classifier that uses the link's anchor text and its surrounding context as features, and it designs an algorithm for automatically acquiring link-labeled training data. Taking the crawling of finance pages as a case study, the proposed algorithms were tested both offline and online; the results show that the Naive Bayes guided crawler achieves a harvest rate of nearly 90% for domain-specific pages.
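The Naive Bayes guidance over anchor-text features described above can be sketched roughly as follows. This is a minimal hand-rolled multinomial Naive Bayes; the toy training data, the "finance"/"other" class names, and the token features are illustrative assumptions, not the paper's actual setup:

```python
import math
from collections import Counter, defaultdict

class AnchorNB:
    """Minimal multinomial Naive Bayes over anchor-text/context tokens (sketch)."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha                       # Laplace smoothing
        self.word_counts = defaultdict(Counter)  # class -> token counts
        self.class_counts = Counter()            # class -> number of training links
        self.vocab = set()

    def fit(self, docs, labels):
        for tokens, y in zip(docs, labels):
            self.class_counts[y] += 1
            self.word_counts[y].update(tokens)
            self.vocab.update(tokens)

    def predict(self, tokens):
        best, best_lp = None, float("-inf")
        total_docs = sum(self.class_counts.values())
        for y in self.class_counts:
            lp = math.log(self.class_counts[y] / total_docs)   # log prior
            total = sum(self.word_counts[y].values())
            denom = total + self.alpha * len(self.vocab)
            for t in tokens:                                   # log likelihoods
                lp += math.log((self.word_counts[y][t] + self.alpha) / denom)
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Hypothetical training links: anchor/context tokens labeled on-topic or off-topic.
train = [(["stock", "market", "index"], "finance"),
         (["bank", "interest", "rate"], "finance"),
         (["football", "match", "score"], "other"),
         (["movie", "review", "star"], "other")]
clf = AnchorNB()
clf.fit([d for d, _ in train], [y for _, y in train])
print(clf.predict(["interest", "stock"]))  # expected: finance
```

A crawler guided this way would only enqueue links whose predicted class matches the target domain.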
7.
Lu Jianping (陆建平). 《电脑技术——Hello-IT》, 1998, (7): 49-52
The Internet is rich in resources, and search engines each have their strengths. But frequent users will surely agree that, for us Chinese users, querying and browsing in English is never as fluent as doing so in Chinese. This raises two questions: (1) Can the AltaVista search engine be used to find Chinese web pages? (2) Can queries be entered directly in Chinese on AltaVista? The answer to both is yes. At least for AltaVista, with which the author is quite familiar, neither is hard to achieve. Moreover, with its recent home page update, AltaVista has further strengthened its Chinese retrieval, enriching its Chinese search methods and features. How to find Chinese Web pages: AltaVista allows the query results…
8.
9.
10.
Research on Selecting the Web Page Collection for a Search Engine Index    Cited by: 2 (self-citations: 0, other citations: 2)
With the rapid growth of the Internet, the number of web pages has exploded, and many of them are near-duplicate or low-quality. For a search engine, indexing such pages does little for retrieval effectiveness while adding to the burden of indexing and retrieval. This paper proposes a page-selection algorithm for building a search engine's index collection from massive web data. On one hand, a content-signature-based clustering algorithm deduplicates pages, shrinking the index collection; on the other hand, multiple features along both the page dimension and the user dimension are combined to ensure the quality of the indexed pages. Experiments show that the index collection produced by this algorithm is only about 1/3 the size of the full page collection, yet covers the vast majority of user clicks, satisfying real user needs.
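The content-signature deduplication step might look like the sketch below. Hashing a normalized version of the page text with SHA-1 is an assumption standing in for whatever signature the paper actually uses; it only catches pages whose normalized text is identical:

```python
import hashlib
import re

def content_signature(html_text):
    """Normalize page text and hash it; identical content collides (sketch)."""
    text = re.sub(r"<[^>]+>", " ", html_text)        # strip markup
    text = re.sub(r"\s+", " ", text).strip().lower() # collapse whitespace, case-fold
    return hashlib.sha1(text.encode("utf-8")).hexdigest()

def select_index_set(pages):
    """Keep one representative URL per content signature."""
    seen = {}
    for url, html in pages:
        sig = content_signature(html)
        seen.setdefault(sig, url)   # first page with this signature wins
    return sorted(seen.values())

# Hypothetical crawl: two pages carry the same content in different markup.
pages = [("a.com/1", "<p>Hello   World</p>"),
         ("b.com/2", "<div>hello world</div>"),
         ("c.com/3", "<p>Something else</p>")]
print(select_index_set(pages))  # only two URLs survive deduplication
```

A production system would use a locality-sensitive signature (e.g. shingling or simhash) so that near-duplicates, not just exact duplicates, collide.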
11.
To avoid returning irrelevant web pages for search engine results, technologies that match user queries to web pages have been widely developed. In this study, web pages for search engine results are classified as low-adjacence (each web page includes all query keywords) or high-adjacence (each web page includes some of the query keywords) sets. To match user queries with web pages using formal concept analysis (FCA), a concept lattice of the low-adjacence set is defined and the non-redundancy association rules defined by Zaki for the concept lattice are extended. OR- and AND-RULEs between non-query and query keywords are proposed and an algorithm and mining method for these rules are proposed for the concept lattice. The time complexity of the algorithm is polynomial. An example illustrates the basic steps of the algorithm. Experimental and real application results demonstrate that the algorithm is effective.
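The concept-lattice machinery behind this abstract rests on FCA's two derivation operators: the extent (objects sharing a set of attributes) and the intent (attributes shared by a set of objects), whose composition gives a closure. A minimal sketch, with a toy page/keyword context invented for illustration:

```python
def extent(context, attrs):
    """Objects (pages) that have every attribute in `attrs`."""
    return {o for o, a in context.items() if attrs <= a}

def intent(context, objs):
    """Attributes (keywords) shared by every object in `objs`."""
    if not objs:
        return set.union(*context.values()) if context else set()
    return set.intersection(*(context[o] for o in objs))

# Hypothetical formal context: web pages -> keywords they contain.
context = {
    "p1": {"search", "engine", "web"},
    "p2": {"search", "web"},
    "p3": {"engine", "lattice"},
}

# A formal concept is a pair (extent, intent) closed under both operators;
# here we close the attribute set {"search"}.
e = extent(context, {"search"})
i = intent(context, e)
print(sorted(e), sorted(i))  # ['p1', 'p2'] ['search', 'web']
```

In the paper's terms, such closed pairs are the lattice nodes from which the OR- and AND-RULEs between non-query and query keywords would be mined.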
12.
Web Search is increasingly entity centric; as a large fraction of common queries target specific entities, search results get progressively augmented with semi-structured and multimedia information about those entities. However, search over personal web browsing history still revolves mostly around keyword search. In this paper, we present a novel approach to answering queries over web browsing logs that takes into account entities appearing in the web pages, user activities, as well as temporal information. Our system, B-hist, aims at providing web users with an effective tool for searching and accessing information they previously looked up on the web by supporting multiple ways of filtering results using clustering and entity-centric search. In the following, we present our system and motivate our User Interface (UI) design choices by detailing the results of a survey on web browsing and history search. In addition, we present an empirical evaluation of our entity-based approach used to cluster web pages.
13.
14.
With the Internet growing exponentially, search engines are encountering unprecedented challenges. A focused search engine selectively seeks out web pages that are relevant to user topics. Determining the best strategy for focused crawling is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed by considering the hyperlinks (as in the PageRank algorithm), a Vector Space Model, or a combination of them, and not by considering the semantic relations between the user topic and unvisited web pages. In this paper, we propose a concept context graph to store the knowledge context based on the user's history of clicked web pages and to guide a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking that guides the web crawler to retrieve web pages highly relevant to the user's topic. By computing the concept distance and concept similarity among the concepts of the concept context graph, and by matching unvisited web pages against the graph, we compute the rank values of the unvisited web pages to pick out the relevant hyperlinks. Additionally, we build the focused crawling system and report the precision, recall, average harvest rate, and F-measure of our proposed approach against Breadth First, Cosine Similarity, the Link Context Graph and the Relevancy Context Graph. The results show that our proposed method outperforms the other methods.
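The ranking step, matching unvisited pages against a weighted set of topic concepts, can be illustrated with a simple cosine similarity over concept weights. This is a stand-in for the paper's concept distance/similarity computation on the concept context graph; the concept weights and page vectors below are invented:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical topic concepts with weights, standing in for the concept context graph.
topic = Counter({"crawler": 3, "search": 2, "ranking": 2})

# Concept vectors extracted from two unvisited pages (invented data).
unvisited = {
    "u1": Counter({"crawler": 2, "search": 1}),
    "u2": Counter({"cooking": 3, "recipe": 2}),
}

# Rank unvisited pages by similarity to the topic; follow the best links first.
ranked = sorted(unvisited, key=lambda u: cosine(topic, unvisited[u]), reverse=True)
print(ranked)  # u1 ranks above u2
```

The crawler's frontier would then be a priority queue ordered by this score.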
15.
Huge amounts of various web items (e.g., images, keywords, and web pages) are being made available on the Web. The popularity of such web items continuously changes over time, and mining for temporal patterns in the popularity of web items is an important problem that is useful for several Web applications; for example, the temporal patterns in the popularity of web search keywords help web search enterprises predict future popular keywords, thus enabling them to make price decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularity of web items. We treat the popularity of web items as time-series and propose a novel measure, a gap measure, to quantify the dissimilarity between the popularity of two web items. To reduce the computational overhead for this measure, an efficient method using the Discrete Fourier Transform (DFT) is presented. We assume that the popularity of web items is not necessarily periodic. For finding clusters of web items with similar popularity trends, we show the limitations of traditional clustering approaches and propose a scalable, efficient, density-based clustering algorithm using the gap measure. Our experiments using the popularity trends of web search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
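The DFT speed-up rests on the idea that a popularity series is summarized by a few low-frequency coefficients, so series can be compared in the frequency domain without scanning every sample. The sketch below is not the paper's gap measure; it is only an illustration of comparing two series via their first few DFT coefficients:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform (illustrative; use FFT in practice)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def spectral_distance(a, b, keep=4):
    """Distance between two popularity series using only the first `keep` coefficients."""
    fa, fb = dft(a), dft(b)
    return math.sqrt(sum(abs(fa[k] - fb[k]) ** 2 for k in range(keep)))

# Invented popularity trends: two similar rising series and one falling series.
rising  = [1, 2, 3, 4, 5, 6, 7, 8]
rising2 = [1, 2, 3, 4, 5, 6, 7, 9]
falling = [8, 7, 6, 5, 4, 3, 2, 1]
print(spectral_distance(rising, rising2) < spectral_distance(rising, falling))  # True
```

Truncating to `keep` coefficients is what makes the comparison cheap: each pairwise distance costs O(keep) once the transforms are precomputed.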
16.
Research and Implementation of Topic Information Extraction from Web Pages    Cited by: 5 (self-citations: 0, other citations: 5)
The main information in a web page is usually buried among many irrelevant features, such as unimportant images and unrelated links, which prevents users from quickly reaching the topical information and limits the Web's usability. This paper proposes a method and corresponding algorithm for extracting the topical content of web pages, tested and evaluated by human judgment on 5,000 pages from 120 websites. The experimental results show that the method is practical, reaching an accuracy of 91.35%.
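A common family of techniques for this task scores each block of a page by how much plain text it carries relative to link text, keeping only text-dense blocks. The heuristic below is a generic sketch of that idea, not the paper's algorithm; the thresholds are arbitrary assumptions:

```python
import re

def extract_main_text(html, min_len=20):
    """Keep blocks whose plain text is long and not dominated by link text (sketch)."""
    # Split on common block-level tags; each fragment is treated as one block.
    blocks = re.split(r"</?(?:div|p|td|li)[^>]*>", html)
    keep = []
    for b in blocks:
        link_text = "".join(re.findall(r"<a[^>]*>(.*?)</a>", b, re.S))
        text = re.sub(r"<[^>]+>", "", b).strip()
        # Heuristic filter: enough text, and links make up less than half of it.
        if len(text) >= min_len and len(link_text) < len(text) / 2:
            keep.append(text)
    return " ".join(keep)

# Invented page: a navigation bar, the topical paragraph, and a stray link block.
html = ('<div><a href="/">Home</a> <a href="/news">News</a></div>'
        '<p>This paragraph carries the actual topic content of the page.</p>'
        '<div><a href="/x">x</a></div>')
print(extract_main_text(html))  # only the paragraph survives
```

Real extractors use a proper HTML parser and smoothed text-density curves, but the filtering principle is the same.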
17.
Qusai Q. Abuein Mohammed Q. Shatnawi Muneer Bani Yassein Reem Mahafza 《Multimedia Tools and Applications》2018,77(14):17557-17571
The accuracy of searches for visual data elements, as well as other types of information, depends on the terms the user provides in the query to retrieve the relevant results and filter out the irrelevant ones. Most returned results are relevant to the query terms, but not to their meaning. For example, certain types of web content hold hidden information that traditional search engines are unable to retrieve: searching for the mathematical construct 1/x using Google will not retrieve documents that contain mathematically equivalent expressions (e.g. x^(-1)), because conventional search engines fall short of providing math-search capabilities, such as detecting the mathematical equivalence between users' queries and math content. In addition, users sometimes need to use slang terms, either to retrieve slang-based visual data (e.g. social media content) or because they do not know how to write in the classical form. To address this problem, this paper proposes an AI-based system for analysing multilingual slang web content so as to allow a user to retrieve web slang content relevant to the user's query. The proposed system presents an approach for visual data analytics, and it also enables users to analyse hundreds of potential search results/web pages by starting an informed, friendly dialogue and presenting innovative answers.
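Detecting that 1/x and x^(-1) are the same expression is normally done symbolically with a computer algebra system; as a self-contained stand-in, the sketch below probes equivalence numerically by evaluating both expressions at random points. This is a probabilistic check, not the paper's method:

```python
import random

def numerically_equivalent(f, g, trials=50, tol=1e-9):
    """Probabilistic check that two expressions agree on random inputs (sketch)."""
    for _ in range(trials):
        x = random.uniform(0.5, 10.0)   # sample away from x = 0 (singularity)
        if abs(f(x) - g(x)) > tol * max(1.0, abs(f(x))):
            return False
    return True

# 1/x and x**-1 agree everywhere they are defined; 1/x and x**-2 do not.
print(numerically_equivalent(lambda x: 1 / x, lambda x: x ** -1))  # True
print(numerically_equivalent(lambda x: 1 / x, lambda x: x ** -2))  # False
```

A math-aware index would canonicalize expressions once at indexing time rather than testing pairs at query time, but the equivalence relation being captured is the same.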
18.
19.
20.
Knowledge Discovery Based on Search Engines    Cited by: 3 (self-citations: 0, other citations: 3)
Data mining is generally applied to large, highly structured databases to discover the knowledge they contain. As online text grows, the knowledge embedded in it becomes ever richer, yet it remains difficult to analyse and exploit. Developing an effective scheme for discovering the knowledge contained in text is therefore very important and a major current research topic. This paper uses the Google search engine to fetch relevant Web pages, filters and cleans them to obtain relevant text, and then performs text clustering, uses Episodes for event recognition and information extraction, data integration, and data mining, thereby achieving knowledge discovery. Finally, a prototype system is presented, and knowledge discovery is tested in practice with good results.