Similar Documents
 20 similar documents found.
1.
Cellary  W. Wiza  W. Walczak  K. 《Computer》2004,37(5):87-89
The exponential growth in Web sites is making it increasingly difficult to extract useful information on the Internet using existing search engines. Despite a wide range of sophisticated indexing and data retrieval features, search engines often deliver satisfactory results only when users know precisely what they are looking for. Traditional textual interfaces present results as a list of links to Web pages. Because most users are unwilling to explore an extensive list, search engines arbitrarily reduce the number of links returned, aiming also to provide quick response times. Moreover, their proprietary ranking algorithms often do not reflect individual user preferences. Those who need comprehensive general information about a topic or have vague initial requirements instead want a holistic presentation of data related to their queries. To address this need, we have developed Periscope, a 3D search result visualization system that displays all the Web pages found in a synthetic, yet comprehensible format.

2.
With the Internet growing exponentially, search engines are encountering unprecedented challenges. A focused search engine selectively seeks out web pages that are relevant to user topics. Determining the best strategy for focused search is a crucial and popular research topic. At present, the rank values of unvisited web pages are computed from hyperlinks (as in the PageRank algorithm), a Vector Space Model, or a combination of the two, rather than from the semantic relations between the user's topic and the unvisited web pages. In this paper, we propose a concept context graph that stores the knowledge context derived from the user's history of clicked web pages and guides a focused crawler in its next crawl. The concept context graph provides a novel semantic ranking that steers the web crawler toward highly relevant web pages on the user's topic. By computing the concept distance and concept similarity among the concepts of the concept context graph, and by matching unvisited web pages against the graph, we compute the rank values of the unvisited web pages and pick out the relevant hyperlinks. Additionally, we build the focused crawling system and measure the precision, recall, average harvest rate, and F-measure of our approach against Breadth-First, Cosine Similarity, the Link Context Graph, and the Relevancy Context Graph. The results show that our proposed method outperforms the other methods.
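As a rough illustration of ranking unvisited links by their semantic closeness to the user's topic, the Python sketch below scores a candidate page by how near its terms lie to a topic node in a toy concept graph. The graph, the distance-to-score mapping, and all names are invented for illustration; they are not the paper's actual concept context graph or ranking formula.

```python
from collections import deque

# Toy concept context graph: nodes are concepts, edges are semantic links
# gleaned from previously clicked pages. All contents are illustrative.
CONCEPT_GRAPH = {
    "machine learning": {"neural networks", "classification"},
    "neural networks": {"machine learning", "deep learning"},
    "classification": {"machine learning"},
    "deep learning": {"neural networks"},
}

def concept_distance(graph, source, target):
    """Shortest hop count between two concepts (None if unreachable)."""
    if source == target:
        return 0
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == target:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None

def rank_unvisited_page(page_terms, topic, graph):
    """Score a page by how close its terms sit to the topic in the graph."""
    score = 0.0
    for term in page_terms:
        d = concept_distance(graph, topic, term)
        if d is not None:
            score += 1.0 / (1.0 + d)   # closer concepts contribute more
    return score

print(rank_unvisited_page({"deep learning", "classification"},
                          "machine learning", CONCEPT_GRAPH))
```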

3.
The rapid development of the Internet poses enormous challenges to traditional crawlers and search engines, and search engines targeting specific domains and user groups have emerged in response. The topic-focused Web information gathering system (the web spider) is the core component of a topical search engine; its task is to return the collected Web pages that satisfy the requirements to the user or to store them in the index repository. Because information resources on the Web are so vast, collecting the content of interest comprehensively and efficiently is the central research problem for web spiders. This paper proposes topic crawling based on web page segmentation (page blocking). Experimental results show that, compared with other crawling algorithms, the proposed algorithm achieves higher efficiency, crawling precision, and crawling recall, as well as a better ability to tunnel through irrelevant regions.

4.
With the advance of technology, users increasingly expect relevant and near-optimal results from the web through search engines, and retrieval performance can often be improved with better algorithms and methods. The abundance of content on the web has driven the demand for better search systems, and categorizing web pages goes a fair way toward addressing this issue. The anatomy of web pages and links, the categorization of text, and the relations among them have been studied and emphasized over time. Search engines critically analyze several inputs for a given keyword or keywords to obtain quality results in the shortest possible time, and categorization is mostly done by separating content using the web link structure. We estimate two different weights for a web page, (a) a Page Retaining Weight (PRW) and (b) a Page Forwarding Weight (PFW), and group pages by these weights for categorization. Using these experimental results we classify web pages into four groups: (a) simple, (b) axis-shifted, (c) fluctuating, and (d) oscillating types. Such a categorization improves the performance of search engines and also contributes to the study of web modeling.

5.
There is a significant commercial and research interest in location-based web search engines. Given a number of search keywords and one or more locations (geographical points) that a user is interested in, a location-based web search retrieves and ranks the most textually and spatially relevant web pages. In this type of search, both the spatial and textual information should be indexed. Currently, no efficient index structure exists that can handle both the spatial and textual aspects of data simultaneously and accurately. Existing approaches either index space and text separately or use inefficient hybrid index structures with poor performance and inaccurate results. Moreover, most of these approaches cannot accurately rank web pages based on a combination of space and text and are not easy to integrate into existing search engines. In this paper, we propose a new index structure called Spatial-Keyword Inverted File for Points to handle point-based indexing of web documents in an integrated and efficient manner. To seamlessly find and rank relevant documents, we develop a new distance measure called spatial tf-idf. We propose four variants of spatial-keyword relevance scores and two algorithms to perform top-k searches. As verified by experiments, our proposed techniques outperform existing index structures in terms of search performance and accuracy.
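The core idea, blending textual and spatial relevance into a single score before taking the top-k, can be sketched as follows. The linear weighting, the toy scorers, and the data are assumptions made purely for illustration; they are not the paper's spatial tf-idf measure or its inverted-file index.

```python
import heapq
import math

# Toy documents: each has a term-frequency dict and a geographic point.
DOCS = {
    "d1": ({"pizza": 3, "delivery": 1}, (40.75, -73.99)),
    "d2": ({"pizza": 1, "museum": 2},   (40.78, -73.96)),
    "d3": ({"delivery": 2, "pizza": 2}, (40.60, -74.10)),
}

def text_score(query_terms, tf):
    return sum(tf.get(t, 0) for t in query_terms)   # crude term match, not tf-idf

def spatial_score(query_point, point):
    dist = math.dist(query_point, point)
    return 1.0 / (1.0 + dist)                        # nearer is better

def top_k(query_terms, query_point, k=2, alpha=0.5):
    """Rank documents by a weighted blend of textual and spatial relevance."""
    scored = []
    for doc_id, (tf, point) in DOCS.items():
        s = alpha * text_score(query_terms, tf) \
            + (1 - alpha) * spatial_score(query_point, point)
        scored.append((s, doc_id))
    return heapq.nlargest(k, scored)

print(top_k({"pizza", "delivery"}, (40.74, -74.00)))
```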

6.
By analyzing the component structure of the Heritrix open-source crawler and addressing problems in the Heritrix project, this work designs specific crawling logic and classes for the targeted fetching of web pages containing particular content, and introduces the BKDRHash algorithm for URL hashing, thereby implementing topic-oriented web page search, improving the efficiency of data collection, and enabling multi-threaded page fetching. Finally, web pages on a particular topic are analyzed and crawled, and the HTMLParser tool is used to convert the crawled page data into a specific format, which can serve as a data source for topic-oriented search systems and data mining and lays the groundwork for further research.
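BKDRHash itself is a simple multiplicative string hash (the seed 131 is the conventional choice). A minimal Python version used for URL deduplication might look like the sketch below; the 32-bit mask and the is_new_url helper are illustrative choices, not taken from the Heritrix extension described above.

```python
def bkdr_hash(s: str, seed: int = 131, mask: int = 0xFFFFFFFF) -> int:
    """BKDR string hash: h = h * seed + ord(c), kept within 32 bits."""
    h = 0
    for ch in s:
        h = (h * seed + ord(ch)) & mask
    return h

seen_hashes = set()

def is_new_url(url: str) -> bool:
    """Return True the first time a URL's hash is seen (collisions are ignored here)."""
    h = bkdr_hash(url)
    if h in seen_hashes:
        return False
    seen_hashes.add(h)
    return True

print(is_new_url("http://example.com/a"))  # True
print(is_new_url("http://example.com/a"))  # False (already crawled)
```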

7.
With the rapid development of Internet technology and the dramatic growth in the number of web pages, search engines have become irreplaceable as the entry point through which people use the Internet. The web spider, as the information source of a search engine, is an indispensable component of it. This paper introduces the key techniques in web spider design. In addition, as users' personalized needs grow stronger and the sharp increase in web pages leaves general-purpose search engines unable to satisfy specific users, vertical search engines have developed rapidly, and research on topic crawlers has likewise made major breakthroughs. A topic crawler differs from a general-purpose crawler: the latter emphasizes completeness of the crawl, while the former emphasizes the relevance of pages to a specific topic. The paper also reviews and summarizes the current state of research on topic crawlers.

8.
A Probabilistic Approach for Distillation and Ranking of Web Pages
Greco  Gianluigi  Greco  Sergio  Zumpano  Ester 《World Wide Web》2001,4(3):189-207
A great number of recent papers have investigated the possibility of introducing more effective and efficient algorithms for search engines. In traditional search engines the resulting ranking is carried out using textual information only and, as shown by several works, it is not very useful for extracting relevant information. Present research, instead, takes a new approach, called Topic Distillation, whose main task is finding relevant documents using a different similarity criterion: retrieved documents are those related to the query topic, but which do not necessarily contain the query string. Current algorithms for topic distillation first compute a base set containing all the relevant pages and then, by applying an iterative procedure, obtain the authoritative pages. In this paper, we present a different approach which computes the authoritative pages by analyzing the structure of the base set. The technique applies a statistical approach to the co-citation matrix of the base set to find the most co-cited pages, and combines link analysis with content-based page evaluation. Several experiments have shown the validity of our approach.
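To make the co-citation idea concrete: if A is the link adjacency matrix of the base set, then AᵀA counts, for every pair of pages, how many pages cite both. The toy matrix and the scoring line below are illustrative only, not the paper's statistical procedure.

```python
import numpy as np

# Toy link adjacency matrix A: A[i, j] = 1 if page i links to page j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 1, 0, 1],
])

# Co-citation matrix: C[j, k] = number of pages that cite both j and k.
C = A.T @ A

# A simple "most co-cited" score: total co-citations of each page with the others.
co_cited_score = C.sum(axis=1) - np.diag(C)   # drop self co-citation counts

print(C)
print(co_cited_score)   # higher values = more heavily co-cited pages
```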

9.
《Applied Soft Computing》2007,7(1):398-410
Personalized search engines are important tools for finding web documents for specific users, because they are able to locate information on the WWW as accurately as possible using efficient methods of data mining and knowledge discovery. Traditional search engines vary in type and features, supporting different functionality and ranking methods. New search engines that use link structures have produced improved search results, overcoming the limitations of conventional text-based search engines. Going a step further, this paper presents a system that provides users with personalized results derived from a search engine that uses link structures. The fuzzy document retrieval system (constructed from a fuzzy concept network based on the user's profile) personalizes the results yielded by link-based search engines according to the preferences of the specific user. A preliminary experiment with six subjects indicates that the developed system is capable of retrieving not only relevant but also personalized web pages, depending on the preferences of the user.
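A very small sketch of the flavor of fuzzy-concept-network personalization follows, assuming a concept-relevance matrix, fuzzy document memberships, and per-concept user preferences (all values invented). The max-min expansion and the preference re-weighting are a simplification for illustration, not the paper's exact retrieval model.

```python
import numpy as np

# Illustrative fuzzy concept network and document memberships (values made up).
concepts = ["sports", "football", "finance"]
R = np.array([            # R[i, j]: relevance degree of concept j given concept i
    [1.0, 0.8, 0.1],
    [0.8, 1.0, 0.1],
    [0.1, 0.1, 1.0],
])
docs = {
    "doc_a": np.array([0.9, 0.7, 0.0]),   # fuzzy membership in each concept
    "doc_b": np.array([0.1, 0.0, 0.9]),
}
user_pref = np.array([0.6, 0.9, 0.2])     # per-concept preference from the profile

def personalized_score(query_concept: str, doc_membership: np.ndarray) -> float:
    qi = concepts.index(query_concept)
    # Max-min composition expands the query through the concept network,
    # then the user's preferences re-weight the expanded relevance.
    expanded = np.minimum(R[qi], doc_membership)
    return float(np.max(expanded * user_pref))

for name, membership in docs.items():
    print(name, personalized_score("football", membership))
```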

10.
The accuracy of searches for visual data elements, as well as other types of information, depends on the terms used by the user in the input query to retrieve the relevant results and to reduce the irrelevant ones. Most of the results that are returned are relevant to the query terms, but not to their meaning. For example, certain types of web content hold hidden information that traditional search engines are unable to retrieve. Searching for the mathematical construct 1/x using Google will not retrieve documents that contain mathematically equivalent expressions (e.g. x⁻¹), because conventional search engines fall short of providing math-search capabilities; one such capability is the ability to detect mathematical equivalence between users' queries and math content. In addition, users sometimes need to use slang terms, either to retrieve slang-based visual data (e.g. social media content) or because they do not know how to write in the classical form. To solve this problem, this paper proposes an AI-based system for analysing multilingual slang web content, allowing a user to retrieve web slang content that is relevant to the user's query. The proposed system presents an approach for visual data analytics, and it also enables users to analyse hundreds of potential search results/web pages by starting an informed, friendly dialogue and presenting innovative answers.
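As an illustration of detecting mathematical equivalence between a query and indexed content (e.g. 1/x versus x⁻¹), one can normalize both expressions symbolically. The SymPy-based check below is a stand-in for that capability, not the system described in the paper.

```python
import sympy as sp

def math_equivalent(expr_a: str, expr_b: str) -> bool:
    """Treat two expressions as equivalent if their difference simplifies to zero."""
    return sp.simplify(sp.sympify(expr_a) - sp.sympify(expr_b)) == 0

print(math_equivalent("1/x", "x**-1"))   # True: 1/x and x^(-1) match
print(math_equivalent("1/x", "x"))       # False
```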

11.
An HTML Parsing Method for Improving the Retrieval Quality of Chinese Search Engines
Chinese search engines often return large numbers of irrelevant items, or indirect items that contain no concrete information; one cause of this problem is the large amount of text on a web page that is unrelated to its topic. For search engines based on keyword retrieval, solving this problem at retrieval time or in a post-processing stage is not only costly but in most cases impossible. In this paper we introduce the notion of web page noise and, drawing on the characteristics of Chinese web pages, implement an HTML parsing method that automatically segments pages into blocks and removes the noise, thereby eliminating potential irrelevant and indirect items at the preprocessing stage. Experimental results show that, without consuming any query time, the method eliminates 100% of the indirect items hidden by Chinese search engines, as well as roughly 11% of the irrelevant or indirect items that could not otherwise be filtered or hidden, substantially improving the precision of the retrieval results.
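A common way to approximate this kind of block-level noise removal is a link-density heuristic: blocks whose visible text is mostly anchor text are treated as navigation or advertising and dropped. The BeautifulSoup sketch below (the threshold and the tag list are arbitrary choices, and beautifulsoup4 is assumed to be installed) illustrates the idea; it is not the paper's parser.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

NOISE_LINK_DENSITY = 0.6   # illustrative threshold, not the paper's value

def strip_noise_blocks(html: str) -> str:
    """Drop blocks whose text is dominated by anchor text (navigation bars, ad lists, ...)."""
    soup = BeautifulSoup(html, "html.parser")
    for block in soup.find_all(["div", "table", "ul"]):
        text_len = len(block.get_text(strip=True))
        link_len = sum(len(a.get_text(strip=True)) for a in block.find_all("a"))
        if text_len and link_len / text_len > NOISE_LINK_DENSITY:
            block.decompose()               # remove the noisy block entirely
    return soup.get_text(separator="\n", strip=True)

print(strip_noise_blocks(
    "<div><a href='/a'>首页</a> <a href='/b'>登录</a></div>"
    "<div>这是一段正文内容，讨论检索质量的提升。</div>"))
```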

12.
The sheer volume of web pages, and its rapid growth, makes it difficult for general-purpose search engines to return satisfactory results for topic- or domain-oriented queries. The topic crawler studied in this paper aims to collect topic-relevant information and thereby drastically reduce the number of pages that must be processed: it evaluates each page's topical relevance and preferentially crawls the pages with higher relevance. Using a subspace-based semantic analysis technique combined with Bayesian classification and support vector machines, we design and implement an efficient topic crawler. Experiments show that the algorithm achieves both high accuracy and high efficiency.
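The crawler's control loop is essentially best-first search over a priority queue keyed by predicted relevance. The sketch below keeps that skeleton but replaces the subspace/Bayes/SVM scorer described above with a trivial keyword-overlap stand-in; all names and data are illustrative.

```python
import heapq

# Stand-in topic model: in the real system this would be a trained
# subspace-based semantic model plus Bayes/SVM classifiers.
TOPIC_TERMS = {"search", "crawler", "ranking"}

def relevance(page_text: str) -> float:
    words = set(page_text.lower().split())
    return len(words & TOPIC_TERMS) / len(TOPIC_TERMS)

def crawl(seed_pages):
    # heapq is a min-heap, so push negative scores to pop the best page first.
    frontier = [(-relevance(text), url) for url, text in seed_pages.items()]
    heapq.heapify(frontier)
    while frontier:
        neg_score, url = heapq.heappop(frontier)
        print(f"fetch {url} (relevance {-neg_score:.2f})")
        # ...fetch the page, score its outlinks, and push them onto the frontier...

crawl({"http://a.example": "a focused crawler for search ranking",
       "http://b.example": "cooking recipes and travel notes"})
```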

13.
As CSS+DIV layout has gradually become the mainstream way of structuring web pages, efficiently extracting topic-relevant information from such pages has become one of the pressing tasks for vertical search engines. This paper proposes a method for extracting a page's topical content based on a DIV tag tree: the HTML document is first parsed into a forest of DIV trees according to its DIV tags; noise nodes in the DIV trees are then filtered out and an STU-DIV model tree is built; finally, topic-relevance analysis and a pruning algorithm cut away the DIV subtrees that are unrelated to the topical content. Experiments on pages from several news websites show that the method can effectively extract the topical content of news pages.
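A simplified rendition of the prune-by-relevance step: walk the DIV blocks from the leaves upward and discard subtrees whose text shares nothing with the topic terms. The relevance measure and the BeautifulSoup usage here are placeholders for illustration, not the STU-DIV model or the paper's pruning algorithm.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4; stands in for the DIV-tree model

TOPIC_TERMS = {"earthquake", "rescue", "magnitude"}   # illustrative topic

def relevance(text: str) -> int:
    words = set(text.lower().split())
    return len(words & TOPIC_TERMS)

def prune_div_tree(html: str) -> str:
    """Drop DIV subtrees whose text shares nothing with the topic terms."""
    soup = BeautifulSoup(html, "html.parser")
    # Visit leaf-most DIVs first so an irrelevant child is removed
    # before its parent is judged.
    for div in reversed(soup.find_all("div")):
        if relevance(div.get_text(" ", strip=True)) == 0:
            div.decompose()
    return soup.get_text(" ", strip=True)

print(prune_div_tree(
    "<div><div>Advertisement: buy shoes now</div>"
    "<div>A magnitude 6.0 earthquake hit the region; rescue teams arrived.</div></div>"))
```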

14.
A focused crawler collects pages related to a specific topic and builds page collections for search engines. Traditional focused crawlers rely on the vector space model and local search algorithms, and both their precision and their recall are relatively low. This paper analyzes the problems with focused crawlers and the corresponding solutions, and concludes with an outlook on future research directions.

15.
As CSS+DIV layout has gradually become the mainstream way of structuring web pages, efficiently extracting topic-relevant information from such pages has become one of the pressing tasks for vertical search engines. This paper proposes a method for extracting a page's topical content based on a DIV tag tree: the HTML document is first parsed into a forest of DIV trees according to its DIV tags; noise nodes in the DIV trees are then filtered out and an STU-DIV model tree is built; finally, topic-relevance analysis and a pruning algorithm cut away the DIV subtrees that are unrelated to the topical content. Experiments on pages from several news websites show that the method can effectively extract the topical content of news pages.

16.
The explosive growth of web data places enormous storage and service pressure on search engines, and the large volume of redundant, low-quality, and even spam data wastes a great deal of their storage and computing capacity. Against this background, building a web page quality assessment framework and evaluation algorithms suited to the real-world Web has become an important research topic in information retrieval. Building on previous work, and with the participation of web users and page designers, this paper proposes a page quality evaluation framework of thirteen factors across four dimensions: authority and reputation, content, timeliness, and visual presentation. Annotation data show that the framework is practical to apply and that annotation results are fairly consistent. Finally, the paper uses an Ordinal Logistic Regression model to analyze the importance of each dimension and draws some instructive conclusions, notably that whether a page's content and timeliness satisfy user needs is a key determinant of its quality.
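For reference, the proportional-odds form of ordinal logistic regression models the cumulative probability of a quality grade as P(grade ≤ k) = σ(θₖ − w·x). The NumPy sketch below uses invented weights and cut-points purely to show how per-grade probabilities fall out of that model; it is not fitted to the paper's annotation data.

```python
import numpy as np

# Proportional-odds (ordinal logistic) sketch for a 1-5 page-quality grade.
# Weights and thresholds are invented for illustration only; the paper
# estimates them from annotated pages over its thirteen quality factors.
w = np.array([1.2, 0.8, 0.5, 0.9])           # authority, content, timeliness, presentation
thresholds = np.array([0.5, 1.5, 2.5, 3.5])  # cut-points between the 5 ordered grades

def grade_probabilities(x):
    """P(grade = k) from cumulative logits P(grade <= k) = sigmoid(theta_k - w.x)."""
    eta = w @ x
    cum = 1.0 / (1.0 + np.exp(-(thresholds - eta)))   # P(grade <= 1..4)
    cum = np.append(cum, 1.0)                         # P(grade <= 5) = 1
    return np.diff(np.concatenate(([0.0], cum)))      # per-grade probabilities

page_features = np.array([0.9, 0.7, 0.3, 0.8])        # scaled factor scores
print(grade_probabilities(page_features))             # sums to 1
```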

17.
The use of computers to deliver course-related materials is rapidly expanding in most universities. Yet the effects of computer vs. printed delivery modes on students' performance and motivation are not yet fully known. We compared the impact of Web vs. paper delivery of practice quizzes that require information search in lecture notes. One hundred and twenty-two undergraduate students used either a web site or printed documents to answer 18 mathematics questions during a tutored session. A revised Web site was designed based on ergonomic criteria, to test the hypothesis that improved usability would decrease time spent on the task, the number of pages consulted, and students' perceived cognitive load. The group working with printed documents had the highest performance. Furthermore, students perceived the paper materials as less effortful to read, and expressed a preference for printing lecture notes and questions. However, students appreciated having a Web site available. No differences were found between the two sites. We conclude that Web delivery imposed a higher perceived cognitive load due to the need to read lengthy documents. We suggest possible ways to improve Web-based practice materials, such as simultaneous display of questions and lecture notes.

18.
Multi-hop reading comprehension has become a research hotspot in natural language understanding in recent years. Compared with simple reading comprehension it is more complex and must face the following challenges: (1) combining content clues from multiple places, as in multi-document reading; and (2) being interpretable, for example by producing reasoning paths. A variety of work has emerged to address these challenges. This paper therefore surveys multi-hop text reading comprehension as a complex reading-comprehension task. It first defines the multi-hop reading comprehension task. Since reasoning is the fundamental capability of a multi-hop model, models can be divided by reasoning style into three classes: models based on structured reasoning, models based on clue extraction, and models based on question decomposition. The paper then compares and analyzes the experimental results of each class on common multi-hop reading comprehension datasets, finding that the three classes each have their own strengths and weaknesses. Future research directions are discussed at the end.

19.
《Ergonomics》2012,55(8):907-921
Navigation of web pages is of critical importance to the usability of web-based systems such as the World Wide Web and intranets. The primary means of navigation is through the use of hyperlinks. However, few studies have examined the impact of the presentation format of these links on visual search. The present study used a two-factor mixed measures design to investigate whether there was an effect of link format (plain text, underlined, bold, or bold and underlined) upon speed and accuracy of visual search and subjective measures in both the navigation and content areas of web pages. An effect of link format on speed of visual search for both hits and correct rejections was found. This effect was observed in the navigation and the content areas. Link format did not influence accuracy in either screen location. Participants showed highest preference for links that were in bold and underlined, regardless of screen area. These results are discussed in the context of visual search processes and design recommendations are given.

20.
Construction and Presentation of Semantic Web Ontologies Based on Formal Concepts
An ontology, the foundation of the Semantic Web, is an explicit formal specification of a shared conceptual model; it provides a way for computers to exchange, search for, and agree on textual information. Constructing and presenting ontologies effectively is therefore a key issue in applying them, yet existing construction methods are all limited in various respects. After analysis and comparison, this paper adopts Formal Concept Analysis to build the ontology hierarchy and remedy those limitations, and combines it with a probabilistic model to present the ontology, expressing the relatedness between concepts and between concepts and data. Ranking documents by their relevance to concepts helps users find the most relevant information, effectively improving the efficiency of information seeking. The construction and presentation of the ontology are demonstrated through an example.
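To make the Formal Concept Analysis step concrete, the brute-force Python sketch below derives all formal concepts (extent, intent pairs) from a tiny made-up document-term context; ordering these pairs by extent inclusion yields the concept lattice used as the ontology hierarchy. The data and naming are illustrative only, not the paper's corpus or construction procedure.

```python
from itertools import combinations

# Tiny formal context: objects (documents) x attributes (index terms).
context = {
    "doc1": {"ontology", "semantic_web"},
    "doc2": {"ontology", "ranking"},
    "doc3": {"semantic_web", "ranking"},
}

def intent(objects):
    """Attributes shared by all given objects (all attributes for the empty set)."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set().union(*context.values())

def extent(attributes):
    """Objects that carry every given attribute."""
    return {o for o, attrs in context.items() if attributes <= attrs}

concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for group in combinations(objs, r):
        b = intent(group)        # close the object group to its shared attributes
        a = extent(b)            # ...and back to all objects carrying them
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(a), sorted(b))
```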
