Similar Documents
20 similar documents found (search time: 227 ms)
1.
Based on an experimental analysis of the structure and characteristics of web pages, this paper presents principles and methods for segmenting pages into blocks. On top of this segmentation, it proposes a method for eliminating web page noise according to the patterns in which noise appears, so that a search engine's preprocessing stage can effectively remove irrelevant items and hyperlinks to indirectly related content, substantially improving retrieval quality.
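The abstract does not give the authors' actual segmentation rules; as a rough illustration of block-level noise removal, the sketch below scores text blocks by link density (a common heuristic, not necessarily the one used in the paper) and discards link-heavy blocks:

```python
# Hypothetical block-noise filter: blocks whose text is mostly anchor text
# are treated as navigation/ad noise. The threshold is illustrative only.
def link_density(block):
    """Fraction of a block's characters that sit inside anchor text."""
    total = len(block["text"])
    anchored = sum(len(a) for a in block["anchors"])
    return anchored / total if total else 1.0

def remove_noise_blocks(blocks, max_link_density=0.5):
    """Keep only blocks that are not dominated by hyperlinks."""
    return [b for b in blocks if link_density(b) < max_link_density]

blocks = [
    {"text": "Home News Sports Finance",
     "anchors": ["Home", "News", "Sports", "Finance"]},  # nav bar: noise
    {"text": "The committee announced the new policy today, citing ...",
     "anchors": []},                                     # body text: keep
]
clean = remove_noise_blocks(blocks)
```

Real systems combine several such cues (tag depth, block size, position); link density alone is only the simplest starting point.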

2.
This paper proposes a new method for obtaining bilingual web pages from the result pages returned by a search engine. The method consists of two tasks. The first automatically detects and collects the data records in the returned result pages; this step identifies useful record snippets via clustering and supplies effective features for the second task, the verification and acquisition of high-quality bilingual mixed pages. Verification of bilingual mixed pages is treated as a classification problem, and the method does not depend on any particular domain or search engine. On 2,516 manually annotated search result records collected from search engines, the proposed method achieved 81.3% precision and 94.93% recall.

3.
To address the problem of mining massive web page data, this paper proposes a vector-space algorithm for computing content similarity between web pages, together with a software system framework. A search engine is used to extract the URLs of Chinese-encoded pages from the massive web; the Chinese characters are then extracted from these pages and content words identified, and a vector space model is built to compute the similarity between page contents. The system narrows the range of documents for which similarity must be computed, saving substantial time and space, and lays a solid foundation for classifying, querying, and intelligently processing web information.
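The abstract does not specify the exact term-weighting scheme; a minimal vector-space sketch (assuming plain term-frequency vectors and cosine similarity, which may differ from the authors' implementation):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two token lists under a term-frequency model."""
    va, vb = Counter(doc_a), Counter(doc_b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Toy "pages" after word segmentation (content words only).
a = ["搜索", "引擎", "网页", "相似"]
b = ["搜索", "引擎", "网页", "分类"]
sim = cosine_similarity(a, b)   # 3 shared terms out of 4 -> 0.75
```

In practice TF-IDF weighting is usually substituted for raw counts, but the similarity computation is unchanged.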

4.
张晶, 曹存根, 王石. 《计算机科学》, 2012, 39(7): 170-174
Translating Chinese terms and out-of-vocabulary words is an important problem in machine translation and cross-language retrieval, as such translations are hard to obtain from existing dictionaries. This paper proposes a method for automatically acquiring English translations of Chinese terms from web pages via a search engine. Using partial translation information for a term, three query patterns are constructed, and a multi-feature translation extraction method is proposed. To address the low precision and numerous distractor candidates of traditional approaches, three verification models are introduced to validate candidate translations: analogy-based endpoint alignment verification, bilingual alignment-degree verification, and morphological (word-formation) verification. Experiments show that the acquired bilingual translation pairs are highly accurate, with Top-1 precision of 97.4% and Top-3 precision of 98.3%.

5.
sIFR (Scalable Inman Flash Replacement) is a technique in which a web designer replaces text elements in an HTML page with Flash, rendered in the client's browser as a Flash movie. The designer can set page text in any typeface without requiring those fonts to be pre-installed on the visitor's machine. At the same time, the text replaced by Flash is not hidden: it can still be located and indexed by search engines, so site promotion is unaffected. This article describes how to implement text replacement with sIFR in web page design, a technique worth web designers' attention.

6.
The pressing need to retrieve information from the Web quickly and accurately has given rise to vertical (domain-specific) search engine technology. In a vertical search engine, the web crawler, which gathers information on a specific domain from the Web, is a core component. This paper studies the crawling of Chinese domain-specific web pages. Based on KL divergence, it verifies that page content and link context differ in distribution, and on that basis proposes a crawling algorithm for Chinese domain pages guided by a Naive Bayes classifier using link anchor text and its surrounding context as features, together with an algorithm for automatically acquiring link-labeled training data. Taking the crawling of finance pages as an example, the proposed algorithms were tested both offline and online; the results show that the Naive Bayes-guided crawler achieves a harvest rate of nearly 90% for domain-specific pages.
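The paper's features and training data are not reproduced in the abstract; as a rough sketch of the idea, the toy multinomial Naive Bayes classifier below scores anchor-text tokens as on-topic or off-topic (the vocabulary and training pairs are invented for illustration):

```python
import math
from collections import Counter

class AnchorNB:
    """Toy multinomial Naive Bayes over anchor-text tokens, add-one smoothing."""
    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors, self.counts, self.totals = {}, {}, {}
        self.vocab = {t for d in docs for t in d}
        for c in self.classes:
            cdocs = [d for d, l in zip(docs, labels) if l == c]
            self.priors[c] = math.log(len(cdocs) / len(docs))
            self.counts[c] = Counter(t for d in cdocs for t in d)
            self.totals[c] = sum(self.counts[c].values())
        return self

    def predict(self, doc):
        def score(c):
            return self.priors[c] + sum(
                math.log((self.counts[c][t] + 1) /
                         (self.totals[c] + len(self.vocab)))
                for t in doc)
        return max(self.classes, key=score)

# Hypothetical anchor-text training data for a finance-focused crawler.
train = [["stock", "market", "index"], ["bond", "yield"],
         ["recipe", "cooking"], ["travel", "hotel"]]
labels = ["finance", "finance", "other", "other"]
nb = AnchorNB().fit(train, labels)
pred = nb.predict(["stock", "yield"])
```

A real crawler would score each outgoing link's anchor text this way and enqueue only links classified as on-topic.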

7.
Resources on the Internet are abundant, and search engines each have their strengths. But anyone who goes online often will know that, for Chinese speakers, querying and browsing in English is never as comfortable as doing so in Chinese. This raises two questions: (1) Can the AltaVista search engine be used to find Chinese web pages? (2) Can queries be entered directly in Chinese on AltaVista? The answer to both is yes; at least for AltaVista, which the author knows well, neither is difficult. Moreover, with its recent homepage update, AltaVista has further strengthened its Chinese retrieval, enriching its Chinese search features and functions. How to find Chinese Web pages: AltaVista allows the query results to…

8.
《电脑时空》, 2010, (4): 65
Search engine optimization (SEO) means adapting the basic elements of site construction and page design to the retrieval characteristics of the various search engines (that is, making a site search-engine friendly), so that pages get indexed and rank highly in search results. It is killing the Internet. I have complained about it many times before, but beyond complaining, it is now too late to do anything about it.

9.
The pressing need to retrieve information from the Web quickly and accurately has given rise to vertical (domain-specific) search engine technology. In a vertical search engine, the web crawler, which gathers information on a specific domain from the Web, is a core component. This paper studies the crawling of Chinese domain-specific web pages. Based on KL divergence, it verifies that page content and link context differ in distribution, and on that basis proposes a crawling algorithm for Chinese domain pages guided by a Naive Bayes classifier using link anchor text and its surrounding context as features, together with an algorithm for automatically acquiring link-labeled training data. Taking the crawling of finance pages as an example, the proposed algorithms were tested both offline and online; the results show that the Naive Bayes-guided crawler achieves a harvest rate of nearly 90% for domain-specific pages.

10.
Research on Methods for Selecting the Web Page Collection Indexed by a Search Engine   (cited in total: 2; self-citations: 0; by others: 2)
With the rapid growth of the Internet, the number of web pages has exploded, and many of them are near-duplicates or of low quality. For a search engine, indexing such pages does little for retrieval effectiveness while adding to the indexing and retrieval burden. This paper proposes a page selection algorithm for building a search engine's index collection from massive web data. On one hand, a content-signature-based clustering algorithm deduplicates pages, compressing the size of the index collection; on the other, multiple features from both the page and user dimensions are combined to ensure the quality of indexed pages. Experiments show that the index collection produced by this algorithm is only about one third the size of the full page collection while covering the vast majority of user clicks, satisfying real user needs.
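The paper's actual content signature is not specified in the abstract; the minimal sketch below (assuming a signature built by hashing normalized text, which is only one possible design) groups near-identical pages so that a single representative per group is indexed:

```python
import hashlib
from collections import defaultdict

def content_signature(text):
    """Hypothetical signature: hash of lowercased, whitespace-normalized text."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def select_index_set(pages):
    """Keep one representative URL per signature group."""
    groups = defaultdict(list)
    for url, text in pages:
        groups[content_signature(url and text)].append(url)
    return sorted(urls[0] for urls in groups.values())

pages = [
    ("a.com/1", "Breaking  News: Rates Rise"),
    ("b.com/1", "breaking news: rates rise"),   # duplicate content
    ("c.com/1", "Local team wins the final"),
]
index_set = select_index_set(pages)   # one page per distinct content
```

Production systems use shingling or SimHash rather than exact hashes so that *near*-duplicates also collapse, but the selection step is the same.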

11.
To avoid returning irrelevant web pages for search engine results, technologies that match user queries to web pages have been widely developed. In this study, web pages for search engine results are classified as low-adjacence (each web page includes all query keywords) or high-adjacence (each web page includes some of the query keywords) sets. To match user queries with web pages using formal concept analysis (FCA), a concept lattice of the low-adjacence set is defined and the non-redundancy association rules defined by Zaki for the concept lattice are extended. OR- and AND-RULEs between non-query and query keywords are proposed and an algorithm and mining method for these rules are proposed for the concept lattice. The time complexity of the algorithm is polynomial. An example illustrates the basic steps of the algorithm. Experimental and real application results demonstrate that the algorithm is effective.
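Formal concept analysis itself is standard; as a minimal illustration (not the paper's lattice construction or rule mining), the derivation operators below compute a formal concept from a toy context of pages (objects) and the keywords (attributes) they contain:

```python
def intent(objects, context):
    """Attributes shared by every object in the set."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set()

def extent(attributes, context):
    """Objects that possess every attribute in the set."""
    return {o for o, attrs in context.items() if attributes <= attrs}

# Toy context: pages -> keywords they contain (invented for illustration).
context = {
    "p1": {"search", "engine", "rank"},
    "p2": {"search", "engine"},
    "p3": {"search", "crawl"},
}

# Closing a query keyword set yields a formal concept (extent, intent):
A = extent({"search", "engine"}, context)
B = intent(A, context)
concept = (A, B)
```

In the paper's setting, association rules are then mined between such concepts' intents; this sketch only shows the underlying closure operators.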

12.
Web Search is increasingly entity centric; as a large fraction of common queries target specific entities, search results get progressively augmented with semi-structured and multimedia information about those entities. However, search over personal web browsing history still mostly revolves around keyword search. In this paper, we present a novel approach to answering queries over web browsing logs that takes into account entities appearing in the web pages, user activities, as well as temporal information. Our system, B-hist, aims at providing web users with an effective tool for searching and accessing information they previously looked up on the web by supporting multiple ways of filtering results using clustering and entity-centric search. In the following, we present our system and motivate our User Interface (UI) design choices by detailing the results of a survey on web browsing and history search. In addition, we present an empirical evaluation of our entity-based approach used to cluster web pages.

13.
14.
With the Internet growing exponentially, search engines are encountering unprecedented challenges. A focused search engine selectively seeks out web pages that are relevant to user topics. Determining the best strategy for focused search is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed by considering the hyperlinks (as in the PageRank algorithm), a Vector Space Model, or a combination of the two, and not by considering the semantic relations between the user topic and the unvisited web pages. In this paper, we propose a concept context graph to store the knowledge context based on the user's history of clicked web pages and to guide a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking to guide the web crawler toward highly relevant web pages on the user's topic. By computing concept distance and concept similarity among the concepts of the concept context graph, and by matching unvisited web pages against the graph, we compute the rank values of the unvisited web pages to pick out the relevant hyperlinks. Additionally, we build the focused crawling system and measure the precision, recall, average harvest rate, and F-measure of our proposed approach against Breadth First, Cosine Similarity, the Link Context Graph, and the Relevancy Context Graph. The results show that our proposed method outperforms the others.

15.
Huge amounts of various web items (e.g., images, keywords, and web pages) are being made available on the Web. The popularity of such web items continuously changes over time, and mining for temporal patterns in the popularity of web items is an important problem that is useful for several Web applications; for example, the temporal patterns in the popularity of web search keywords help web search enterprises predict future popular keywords, thus enabling them to make price decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularity of web items. We treat the popularity of web items as time-series and propose a novel measure, a gap measure, to quantify the dissimilarity between the popularity of two web items. To reduce the computational overhead for this measure, an efficient method using the Discrete Fourier Transform (DFT) is presented. We assume that the popularity of web items is not necessarily periodic. For finding clusters of web items with similar popularity trends, we show the limitations of traditional clustering approaches and propose a scalable, efficient, density-based clustering algorithm using the gap measure. Our experiments using the popularity trends of web search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
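The authors' gap measure is not reproduced in the abstract; as an illustrative stdlib-only sketch of the DFT idea (not the paper's actual measure), the code below compares two popularity series by their magnitude spectra, which makes the distance insensitive to phase shifts of an otherwise identical trend:

```python
import cmath
import math

def dft_magnitudes(series):
    """Naive O(n^2) discrete Fourier transform; returns the magnitude spectrum."""
    n = len(series)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(series)))
            for k in range(n)]

def spectral_distance(a, b):
    """Euclidean distance between magnitude spectra. Phase is discarded, so a
    circularly shifted copy of the same trend scores as essentially identical."""
    fa, fb = dft_magnitudes(a), dft_magnitudes(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

base    = [0, 1, 2, 3, 2, 1, 0, 1]      # toy weekly popularity trend
shifted = base[2:] + base[:2]           # same trend, phase-shifted
other   = [5, 0, 5, 0, 5, 0, 5, 0]      # unrelated oscillation
d_same  = spectral_distance(base, shifted)
d_diff  = spectral_distance(base, other)
```

A production system would use an FFT and truncate to the leading coefficients for speed; the comparison logic is unchanged.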

16.
Research and Implementation of Topic Information Extraction from Web Pages   (cited in total: 5; self-citations: 0; by others: 5)
The main information in a Web page is usually buried among a large number of irrelevant features, such as inconsequential images and unrelated links, which prevents users from quickly reaching the topical content and limits the usability of the Web. This paper proposes a method and corresponding algorithm for extracting a page's topical content, tested and evaluated by human judgment on 5,000 pages from 120 websites. Experimental results show that the method is practical, achieving an accuracy of 91.35%.

17.
The accuracy of searches for visual data elements, as well as other types of information, depends on the terms the user puts in the query to retrieve the relevant results and reduce the irrelevant ones. Most returned results are relevant to the query terms, but not to their meaning. For example, certain types of web content hold hidden information that traditional search engines are unable to retrieve. Searching for the mathematical construct 1/x using Google will not retrieve documents that contain mathematically equivalent expressions (e.g. x^-1), because conventional search engines fall short of providing math-search capabilities; one such capability is the ability to detect mathematical equivalence between users' queries and math content. In addition, users sometimes need to use slang terms, either to retrieve slang-based visual data (e.g. social media content) or because they do not know how to write in the classical form. To solve this problem, this paper proposes an AI-based system for analysing multilingual slang web content so as to allow a user to retrieve web slang content relevant to the user's query. The proposed system presents an approach for visual data analytics, and it also enables users to analyse hundreds of potential search results/web pages by starting an informed, friendly dialogue and presenting innovative answers.
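One simple way to detect the kind of equivalence mentioned above (a naive illustration, not the paper's method) is to compare candidate expressions numerically at sample points:

```python
import math
import random

def numerically_equivalent(f, g, trials=200, tol=1e-9):
    """Heuristic equivalence test: evaluate both expressions at random points.
    Agreement everywhere strongly suggests (but does not prove) equivalence."""
    rng = random.Random(42)
    for _ in range(trials):
        x = rng.uniform(0.1, 10.0)   # avoid x = 0 for expressions like 1/x
        if not math.isclose(f(x), g(x), rel_tol=tol):
            return False
    return True

# 1/x and x**-1 agree everywhere sampled; 1/x and x do not.
same = numerically_equivalent(lambda x: 1 / x, lambda x: x ** -1)
diff = numerically_equivalent(lambda x: 1 / x, lambda x: x)
```

A real math search engine would instead normalize expressions symbolically (e.g. via a computer algebra system); numeric probing is only a cheap first filter.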

18.
As CSS+DIV layout has become the mainstream way of structuring web pages, efficiently extracting topical information from such pages has become an urgent task for vertical search engines. This paper proposes a topic information extraction method based on a DIV tag tree: the HTML document is first parsed into a forest of DIV trees according to its DIV tags; noise nodes are then filtered from the DIV trees and an STU-DIV model tree is built; finally, topic relevance analysis and a pruning algorithm cut away the DIV subtrees unrelated to the topical content. Experiments on pages from several news websites show that the method effectively extracts the topical information of news pages.
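The STU-DIV model and the authors' relevance measure are not given in the abstract; the sketch below (an assumption-laden illustration) parses nested `div` elements into a tree with Python's `html.parser` and prunes subtrees whose text is dominated by anchor text:

```python
from html.parser import HTMLParser

class DivTreeParser(HTMLParser):
    """Build a tree of <div> nodes, recording text and anchor text per node."""
    def __init__(self):
        super().__init__()
        self.root = {"children": [], "text": "", "anchor": ""}
        self.stack = [self.root]
        self.in_anchor = 0

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            node = {"children": [], "text": "", "anchor": ""}
            self.stack[-1]["children"].append(node)
            self.stack.append(node)
        elif tag == "a":
            self.in_anchor += 1

    def handle_endtag(self, tag):
        if tag == "div" and len(self.stack) > 1:
            self.stack.pop()
        elif tag == "a" and self.in_anchor:
            self.in_anchor -= 1

    def handle_data(self, data):
        self.stack[-1]["text"] += data
        if self.in_anchor:
            self.stack[-1]["anchor"] += data

def prune(node, max_anchor_ratio=0.6):
    """Drop child divs dominated by link text (an illustrative rule only)."""
    node["children"] = [c for c in node["children"]
                        if c["text"].strip() and
                        len(c["anchor"]) / len(c["text"]) < max_anchor_ratio]
    for c in node["children"]:
        prune(c, max_anchor_ratio)
    return node

html = ('<div><a href="/">Home</a><a href="/n">News</a></div>'
        '<div>Officials confirmed the merger on Monday, saying ...</div>')
p = DivTreeParser()
p.feed(html)
tree = prune(p.root)
```

After pruning, the surviving subtree(s) carry the page's topical text; the paper's topic-relevance analysis would replace the crude anchor-ratio rule here.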

19.
As CSS+DIV layout has become the mainstream way of structuring web pages, efficiently extracting topical information from such pages has become an urgent task for vertical search engines. This paper proposes a topic information extraction method based on a DIV tag tree: the HTML document is first parsed into a forest of DIV trees according to its DIV tags; noise nodes are then filtered from the DIV trees and an STU-DIV model tree is built; finally, topic relevance analysis and a pruning algorithm cut away the DIV subtrees unrelated to the topical content. Experiments on pages from several news websites show that the method effectively extracts the topical information of news pages.

20.
Knowledge Discovery Based on a Search Engine   (cited in total: 3; self-citations: 0; by others: 3)
Data mining is generally applied to large, highly structured databases to discover the knowledge they contain. With the growth of online text, the knowledge embedded in it has become ever richer, yet it remains difficult to analyze and exploit. Developing an effective scheme for discovering the knowledge contained in text is therefore of great importance, and a major current research topic. This paper uses the Google search engine to fetch relevant Web pages, which are filtered and cleaned to obtain relevant text; the text is then clustered, and Episode-based event recognition and information extraction, data integration, and data mining are applied to achieve knowledge discovery. Finally, a prototype system is presented and the knowledge discovery approach is validated in practice, with good results.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号