Similar Documents
20 similar documents found
1.
Research on Clustering Algorithms for Search Engine Results   Cited by: 6 (1 self-citation, 5 by others)
With the explosive growth in the number of Web documents, search engines have revealed many problems: users are forced to hunt through the long lists of document summaries that search engines return. Clustering search engine results allows users to browse the returned results at a higher, topic-oriented level. This paper proposes several important criteria for clustering search engine results and presents a new PAT-tree-based clustering algorithm for search engine results.

2.
Queries to Web search engines are usually short and ambiguous, and therefore convey too little about users' information needs to retrieve relevant Web pages effectively. To address this problem, most search engines implement query suggestion. However, existing methods do not balance accuracy against computational complexity appropriately (e.g. Google's 'Search related to' and Yahoo's 'Also Try'). In this paper, the recommended words are extracted from the search results of the query, which keeps query suggestion responsive in real time. A scheme that ranks these words by semantic similarity then presents the top words as the suggestion list, which ensures the accuracy of query suggestion. The experimental results show that the proposed method significantly improves the quality of query suggestion over some popular Web search engines (e.g. Google and Yahoo). Finally, an offline experiment comparing how accurately snippets capture the word counts of full documents is performed, which further increases confidence in the proposed method. Copyright © 2010 John Wiley & Sons, Ltd.
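The abstract does not spell out the extraction or ranking procedure, so the following sketch only illustrates the general idea of mining suggestion candidates from result snippets and ranking them by a simple co-occurrence score; the tokenizer, stopword list and scoring function are assumptions, not the paper's method.

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "is", "on"}

    def tokenize(text):
        return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

    def suggest(query, snippets, k=10):
        """Rank candidate suggestion words mined from the result snippets of `query`.

        Each candidate is scored by how often it co-occurs with a query term
        inside the same snippet, a crude stand-in for semantic similarity.
        """
        q_terms = set(tokenize(query))
        scores = Counter()
        for snip in snippets:
            toks = tokenize(snip)
            if q_terms & set(toks):              # snippet mentions the query
                for w in toks:
                    if w not in q_terms:
                        scores[w] += 1
        return [w for w, _ in scores.most_common(k)]

    # Usage with two made-up snippets returned for the query "jaguar speed"
    snippets = ["The jaguar is the fastest cat in the americas",
                "Top speed of the jaguar car model ..."]
    print(suggest("jaguar speed", snippets))

Because the candidates come from the current result page rather than from query logs, suggestions can be computed at query time, which is the real-time property the abstract emphasizes.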

3.
Semantic similarity measures play important roles in many Web-related tasks such as Web browsing and query suggestion. Because taxonomy-based methods cannot deal with continually emerging words, Web-based methods have recently been proposed to solve this problem. Owing to the noise and redundancy hidden in Web data, however, robustness and accuracy remain challenges. In this paper, we propose a method integrating page counts and snippets returned by Web search engines ('Web snippet' refers to the title, summary, and URL of a Web page returned by a search engine). Semantic snippets and the number of search results are first used to remove noise and redundancy from the Web snippets; a measure integrating page counts, the semantic snippets, and the number of already displayed search results is then proposed. The proposed method needs no human-annotated knowledge (e.g., ontologies) and can easily be applied to Web-related tasks (e.g., query suggestion). A correlation coefficient of 0.851 against the Rubenstein–Goodenough benchmark dataset shows that the proposed method outperforms existing Web-based methods by a wide margin. Moreover, the proposed semantic similarity measure significantly improves the quality of query suggestion compared with some page-count-based methods. Copyright © 2011 John Wiley & Sons, Ltd.
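The paper's full measure combines page counts with snippet evidence; as a minimal sketch of the page-count component alone, a WebJaccard-style score over search-engine hit counts looks roughly as follows (the threshold and the example counts are assumptions for illustration):

    def web_jaccard(hits_p, hits_q, hits_pq, threshold=5):
        """Page-count component of a Web-based similarity measure.

        hits_p, hits_q -- result counts for the single-word queries P and Q
        hits_pq        -- result count for the conjunctive query "P AND Q"
        Counts below `threshold` are treated as noise and give similarity 0.
        """
        if hits_pq < threshold:
            return 0.0
        return hits_pq / (hits_p + hits_q - hits_pq)

    # Hypothetical page counts for the word pair (car, automobile)
    print(web_jaccard(hits_p=2.3e8, hits_q=1.1e8, hits_pq=5.4e7))

The snippet-based and display-count components described in the abstract are what distinguish the proposed method from such plain page-count measures.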

4.
Search engines retrieve and rank Web pages that are not only relevant to a query but also important or popular with users. This popularity has been studied by analysing the links between Web resources. Link-based page ranking models such as PageRank and HITS assign a global weight to each page regardless of its location. This popularity measure has proven successful for general search engines. However, unlike general search engines, location-based search engines should retrieve, and rank higher, the pages that are more popular locally. The best results for a location-based query are those that are not only relevant to the topic but also popular with or cited by local users. Current ranking models are often less effective for these queries since they are unable to estimate local popularity. We offer a model for calculating the local popularity of Web resources using backlink locations. Our model automatically assigns the correct locations to links and content and uses them to calculate a new geo-rank score for each page. The experiments show more accurate geo-ranking of search engine results when this model is used for processing location-based queries.
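The abstract does not give the geo-rank formula, so the toy sketch below only illustrates the underlying idea of weighting a page by how many of its backlinks carry the query's location; the blending weight alpha and both function names are assumptions for illustration.

    def local_popularity(page_backlinks, query_location):
        """Toy local-popularity score: fraction of a page's backlinks whose
        (automatically assigned) location matches the query location.

        page_backlinks -- list of (linking_page, location) tuples
        """
        if not page_backlinks:
            return 0.0
        local = sum(1 for _, loc in page_backlinks if loc == query_location)
        return local / len(page_backlinks)

    def geo_rank(relevance, page_backlinks, query_location, alpha=0.5):
        """Blend topical relevance with local popularity (alpha is assumed)."""
        return alpha * relevance + (1 - alpha) * local_popularity(page_backlinks, query_location)

    links = [("a.example", "Tehran"), ("b.example", "Tehran"), ("c.example", "Paris")]
    print(geo_rank(relevance=0.8, page_backlinks=links, query_location="Tehran"))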

5.
Gae-won You. Information Sciences, 2008, 178(20): 3925-3942
As data of an unprecedented scale become accessible on the Web, personalization, i.e. narrowing retrieval down to meet user-specific information needs, is becoming more and more critical. For instance, while Web search engines traditionally retrieve the same results for all users, they have begun to offer beta services that personalize results to user-specific contexts such as prior search history or other application contexts. In clear contrast to search engines dealing with unstructured text data, this paper studies how to enable such personalization in the context of structured data retrieval. In particular, we adopt a contextual ranking model to formalize personalization as a cost-based optimization over collected contextual rankings. With this formalism, personalization can be abstracted as the cost-optimal retrieval of the contextual ranking that most closely matches the user-specific retrieval context. With the retrieved matching context, we adopt a machine learning approach to effectively and efficiently identify the ideal personalized ranked results for this specific user. Our empirical evaluations over synthetic and real-life data validate both the efficiency and effectiveness of our framework.

6.
When a query is passed to multiple search engines, each search engine returns a ranked list of documents. Researchers have demonstrated that combining results, in the form of a “metasearch engine”, produces a significant improvement in coverage and search effectiveness. This paper proposes a linear programming mathematical model for optimizing the ranked list result of a given group of Web search engines for an issued query. An application with a numerical illustration shows the advantages of the proposed method.
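The paper's specific linear program is not reproduced in the abstract. As a sketch of the same kind of optimization, the snippet below computes a footrule-optimal aggregation of several engines' rankings by solving the equivalent assignment problem (whose LP relaxation is integral) with SciPy; the assumption that every engine ranks the same document set is made purely to keep the example small.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def aggregate(rankings):
        """Footrule-optimal rank aggregation posed as an assignment problem.

        rankings -- list of ranked lists (document ids) from different engines;
        for simplicity every list is assumed to rank the same documents.
        """
        docs = sorted(rankings[0])
        n = len(docs)
        # cost[i][j] = total displacement if document i is placed at position j
        cost = np.zeros((n, n))
        for r in rankings:
            pos = {d: p for p, d in enumerate(r)}
            for i, d in enumerate(docs):
                for j in range(n):
                    cost[i, j] += abs(pos[d] - j)
        rows, cols = linear_sum_assignment(cost)   # optimal doc -> position
        order = sorted(zip(cols, [docs[i] for i in rows]))
        return [d for _, d in order]

    engine_a = ["d1", "d2", "d3", "d4"]
    engine_b = ["d2", "d1", "d4", "d3"]
    print(aggregate([engine_a, engine_b]))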

7.
张祥, 瞿裕忠. 《计算机科学》(Computer Science), 2008, 35(2): 196-200
The quality of page ranking algorithms largely determines the user experience of Web search engines. The Semantic Web brings machine-understandable resource descriptions to the World Wide Web, and it also poses a greater challenge to search engines: the objects to be retrieved and ranked are no longer limited to Web pages, but include anything that can be uniquely identified by a URI, such as ontologies and the terms defined within them. This paper surveys the different ranking problems on the Semantic Web and some existing algorithms, and looks ahead to the ranking problems the Semantic Web will face in the future together with possible solutions.

8.
Keyword-based search engines such as Google index Web pages for human consumption. Sophisticated as such engines have become, surveys indicate almost 25% of Web searchers are unable to find useful results in the first set of URLs returned (Technology Review, March 2004). The lack of machine-interpretable information on the Web limits software agents from matching human searches to desirable results. Tim Berners-Lee, inventor of the Web, has architected the Semantic Web in which machine-interpretable information provides an automated means to traversing the Web. A necessary cornerstone application is the search engine capable of bringing the Semantic Web together into a searchable landscape. We implemented a Semantic Web Search Engine (SWSE) that performs semantic search, providing predictable and accurate results to queries. To compare keyword search to semantic search, we constructed the Google CruciVerbalist (GCV), which solves crossword puzzles by reformulating clues into Google queries processed via the Google API. Candidate answers are extracted from query results. Integrating GCV with SWSE, we quantitatively show how semantic search improves upon keyword search. Mimicking the human brain's ability to create and traverse relationships between facts, our techniques enable Web applications to 'think' using semantic reasoning, opening the door to intelligent search applications that utilize the Semantic Web. Copyright © 2007 John Wiley & Sons, Ltd.

9.
Most Web pages contain location information, which is usually neglected by traditional search engines. Queries combining location and textual terms are called spatial textual Web queries. Given that traditional search engines pay little attention to the location information in Web pages, in this paper we study a framework that utilizes location information for Web search. The proposed framework consists of an offline stage that extracts focused locations for crawled Web pages, and an online ranking stage that performs location-aware ranking of search results. The focused locations of a Web page are the locations most appropriately associated with the page. In the offline stage, we extract the focused locations and keywords from Web pages and map each keyword to specific focused locations, which yields a set of <keyword, location> pairs. In the online query processing stage, we extract keywords from the query and compute the ranking scores based on location relevance and the location-constrained scores for each query keyword. Experiments on various real datasets crawled from nj.gov, BBC and the New York Times show that our focused-location extraction outperforms previous methods and that the proposed ranking algorithm performs best across different spatial textual queries.

10.
Applying Fuzzy Clustering to Web Information Retrieval   Cited by: 4 (0 self-citations, 4 by others)
何鹏, 徐立臻, 庄晓青. 《计算机工程》(Computer Engineering), 2002, 28(10): 241-242, 260
How to retrieve Web information quickly and effectively from huge volumes of data has become an important research topic. Traditional search engines, however, present their results only as a single list ordered from high to low by relevance to the query; the list has no hierarchical structure and is inconvenient for users. Based on the indistinctness, i.e. fuzziness, of terms in Web resources, this paper proposes using fuzzy clustering to organize search engine results automatically and thus address this problem.

11.
This paper describes and evaluates a unified approach to phrasal query suggestions in the context of a high-precision search engine. The search engine performs ranked extended-Boolean searches with the proximity operator near being the default operation. Suggestions are offered to the searcher when the length of the result list falls outside predefined bounds. If the list is too long, the engine specializes the query through the use of super phrases; if the list is too short, the engine generalizes the query through the use of proximal subphrases. We describe methods for generating both types of suggestions and present algorithms for ranking the suggestions. Specifically, we present the problem of counting proximal subphrases for specialization and the problem of counting unordered super phrases for generalization. The uptake of our approach was evaluated by analyzing search log data from before and after the suggestion feature was added to a commercial version of the search engine. We looked at approximately 1.5 million queries and found that, after they were added, suggestions represented nearly 30% of the total queries. Efficacy was evaluated through a controlled study of 24 participants performing nine searches using three different search engines. We found that the engine with phrasal query suggestions had better high-precision recall than both the same search engine without suggestions and a search engine with a similar interface but using an Okapi BM25 ranking algorithm.
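One plausible reading of a subphrase candidate is a contiguous run of the query's terms; the sketch below only enumerates such candidates and does not reproduce the paper's counting or ranking schemes, which the abstract leaves unspecified.

    def proximal_subphrases(query_terms):
        """Enumerate contiguous proper subphrases of a query as suggestion candidates.

        For ["late", "model", "sports", "car"] this yields "late model",
        "model sports", ..., "late model sports", and so on.
        """
        n = len(query_terms)
        for length in range(2, n):              # proper subphrases only
            for start in range(n - length + 1):
                yield " ".join(query_terms[start:start + length])

    print(list(proximal_subphrases(["late", "model", "sports", "car"])))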

12.
Given a user keyword query, current Web search engines return a list of individual Web pages ranked by their "goodness" with respect to the query. Thus, the basic unit for search and retrieval is an individual page, even though information on a topic is often spread across multiple pages. This degrades the quality of search results, especially for long or uncorrelated (multitopic) queries (in which individual keywords rarely occur together in the same document), where a single page is unlikely to satisfy the user's information need. We propose a technique that, given a keyword query, on the fly generates new pages, called composed pages, which contain all query keywords. The composed pages are generated by extracting and stitching together relevant pieces from hyperlinked Web pages and retaining links to the original Web pages. To rank the composed pages, we consider both the hyperlink structure of the original pages and the associations between the keywords within each page. Furthermore, we present and experimentally evaluate heuristic algorithms to efficiently generate the top composed pages. The quality of our method is compared to current approaches by using user surveys. Finally, we also show how our techniques can be used to perform query-specific summarization of Web pages.

13.
Search engine result pages (SERPs) for a specific query are constructed according to several mechanisms. One of them consists in ranking Web pages by their importance, regardless of their semantics. Indeed, relevance to a query is not enough to provide high-quality results, and popularity is used to arbitrate between equally relevant Web pages. The most well-known algorithm that ranks Web pages according to their popularity is PageRank. The term Webspam was coined to denote Web pages created with the sole purpose of fooling ranking algorithms such as PageRank. The goal of Webspam is to promote a target page by increasing its rank. It is an important issue for Web search engines to spot and discard Webspam in order to provide their users with an unbiased list of results. Webspam techniques evolve constantly to remain effective, but most of the time they still consist in creating a specific linking architecture around the target page to increase its rank. In this paper we study the effects of node aggregation on Google's well-known ranking algorithm, the PageRank, in the presence of Webspam. Our node aggregation methods construct clusters of nodes that are treated as a single node in the PageRank computation. Since the Web graph is far too big for classic clustering techniques, we present four lightweight aggregation techniques suited to its size. Experimental results on the WEBSPAM-UK2007 dataset show the interest of the approach, which is further confirmed by statistical evidence.
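A minimal sketch of the aggregation idea follows: clusters of nodes are contracted into super-nodes and PageRank is run on the reduced graph. How the clusters are chosen is exactly what the paper's four lightweight techniques decide, so here the cluster assignment is simply passed in, and the example graph and damping settings are assumptions.

    import numpy as np

    def pagerank(adj, d=0.85, iters=100):
        """Plain power-iteration PageRank on an adjacency dict {node: [out-neighbours]}."""
        nodes = sorted(set(adj) | {v for outs in adj.values() for v in outs})
        idx = {u: i for i, u in enumerate(nodes)}
        n = len(nodes)
        pr = np.full(n, 1.0 / n)
        for _ in range(iters):
            nxt = np.full(n, (1.0 - d) / n)
            for u in nodes:
                outs = adj.get(u, [])
                if outs:
                    share = d * pr[idx[u]] / len(outs)
                    for v in outs:
                        nxt[idx[v]] += share
                else:                      # dangling node: spread its mass uniformly
                    nxt += d * pr[idx[u]] / n
            pr = nxt
        return dict(zip(nodes, pr))

    def aggregate_graph(adj, clusters):
        """Contract each cluster into one super-node before ranking.

        clusters -- dict mapping every node to its cluster id.
        """
        new_adj = {}
        for u, outs in adj.items():
            cu = clusters[u]
            new_adj.setdefault(cu, set())
            for v in outs:
                cv = clusters[v]
                if cu != cv:               # intra-cluster links disappear
                    new_adj[cu].add(cv)
        return {c: sorted(vs) for c, vs in new_adj.items()}

    adj = {"a": ["b"], "b": ["c"], "c": ["a"], "spam1": ["target"], "target": ["spam1"]}
    clusters = {"a": "A", "b": "A", "c": "A", "spam1": "S", "target": "S"}
    print(pagerank(aggregate_graph(adj, clusters)))

Contracting a spam farm into a single node is what blunts the artificial linking architecture built around the target page.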

14.
A (page or web) snippet is a document excerpt allowing a user to understand if a document is indeed relevant without accessing it. This paper proposes an effective snippet generation method. A statistical query expansion approach with pseudo-relevance feedback and text summarization techniques are applied to salient sentence extraction for good quality snippets. In the experimental results, the proposed method showed much better performance than other methods including those of commercial Web search engines such as Google and Naver.
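The abstract names the ingredients (query expansion, salient-sentence extraction) without the details, so the sketch below only illustrates the extraction step: sentences are scored by overlap with the original and expanded query terms and the best ones form the snippet. The weighting of expansion terms and the sentence splitter are assumptions.

    import re

    def snippet(document, query_terms, expansion_terms, max_sentences=2):
        """Pick the sentences that best cover the (expanded) query as a snippet.

        expansion_terms -- extra terms, e.g. from pseudo-relevance feedback,
        weighted lower than the original query terms (weights are assumed).
        """
        sentences = re.split(r"(?<=[.!?])\s+", document)
        def score(sent):
            words = set(re.findall(r"[a-z]+", sent.lower()))
            return 2 * len(words & set(query_terms)) + len(words & set(expansion_terms))
        best = sorted(sentences, key=score, reverse=True)[:max_sentences]
        return " ... ".join(s for s in sentences if s in best)   # keep document order

    doc = ("Pseudo-relevance feedback assumes top results are relevant. "
           "It expands the query with frequent terms. Unrelated sentence here.")
    print(snippet(doc, ["query", "expansion"], ["feedback", "terms"]))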

15.
Most Web search engines use the content of Web documents and their link structure to assess the relevance of a document to the user's query. With the growth of the information available on the Web, it becomes difficult for such search engines to satisfy an information need expressed by only a few keywords. Personalized information retrieval is a promising way to address this problem: the user profile is modelled from the user's general interests and then integrated into a personalized document ranking model. In this paper, we present a personalized search approach that involves a graph-based representation of the user profile. The user profile refers to the user's interest in a specific search session, defined as a sequence of related queries. It is built by means of score propagation that activates a set of semantically related concepts of a reference ontology, namely the ODP. The user profile is maintained across related search activities using a graph-based merging strategy. To detect related search activities, we define a session boundary recognition mechanism based on the Kendall rank correlation measure, which tracks changes in the dominant concepts held by the user profile relative to a newly submitted query. Personalization is performed by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out on the HARD 2003 TREC collection and shows that our Kendall-based session boundary recognition mechanism provides significantly better precision than non-rank-based measures such as the cosine and WebJaccard similarity measures. Moreover, the results show that graph-based search personalization is effective in improving search accuracy.
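A minimal sketch of the session-boundary test follows: the concepts weighted by the current profile and by the new query are compared with the Kendall rank correlation, and a low correlation signals a new session. The threshold, the concept labels and the weighting are illustrative assumptions; the paper derives concept weights by score propagation over the ODP ontology.

    from scipy.stats import kendalltau

    def same_session(profile_concept_scores, query_concept_scores, threshold=0.3):
        """Decide whether a new query continues the current search session.

        Both arguments map ontology concepts to weights; the Kendall rank
        correlation between the two induced concept rankings is compared to
        an illustrative threshold (the real cut-off would be tuned).
        """
        concepts = sorted(set(profile_concept_scores) | set(query_concept_scores))
        profile_vec = [profile_concept_scores.get(c, 0.0) for c in concepts]
        query_vec = [query_concept_scores.get(c, 0.0) for c in concepts]
        tau, _ = kendalltau(profile_vec, query_vec)
        return bool(tau >= threshold)          # NaN (degenerate input) means a boundary

    profile = {"Sports/Tennis": 0.7, "Sports/Golf": 0.2, "Computers/AI": 0.1}
    query = {"Sports/Tennis": 0.6, "Sports/Golf": 0.3}
    print(same_session(profile, query))        # concept rankings agree: same session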

16.
P2P-Based Personalized Web Information Retrieval   Cited by: 2 (0 self-citations, 2 by others)
To overcome the shortcomings of Web search engines in scalability, collaboration, and personalization, this paper proposes a fully distributed, collaborative, self-organizing personalized Web information retrieval approach based on peer-to-peer technology. It defines a user collaboration and sharing strategy centred on query topics for topic clustering, data organization, and query routing, and designs algorithms and mechanisms for collaboratively generating user interest list vectors, clustering and updating semantically similar queries by topic, building inverted indexes over query sets, and performing semantic routing based on query topics, so as to provide human-oriented, collaborative, personalized search. Simulation experiments show that the prototype system speeds up queries, reduces network load, and improves search precision.

17.
Positional ranking functions, widely used in Web search engines and related search systems, improve result quality by exploiting the positions of the query terms within documents. However, it is well known that positional indexes demand large amounts of extra space, typically about three times the space of a basic non-positional index. Textual data, on the other hand, is needed to produce text snippets. In this paper, we study time–space trade-offs for search engines with positional ranking functions and text snippet generation. We consider both index-based and non-index-based alternatives for positional data, aiming to answer the question of whether positional data should be indexed, and how. We show that there is a wide range of practical time–space trade-offs. Moreover, we show that using about 1.30 times the space of the positional data, we can store everything needed for efficient query processing, with a minor increase in query time. This yields considerable space savings and outperforms, both in space and time, recent alternatives from the literature. We also propose several efficient compressed text representations for snippet generation, which use about half the space of current state-of-the-art alternatives with little impact on query processing time.
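For readers unfamiliar with the data structure whose space cost is at issue, here is the uncompressed baseline: a positional inverted index mapping each term to per-document position lists, plus a simple proximity check. None of the paper's compressed or non-index alternatives are shown; the example documents are made up.

    from collections import defaultdict

    def build_positional_index(docs):
        """Baseline positional inverted index: term -> {doc_id: [positions]}."""
        index = defaultdict(dict)
        for doc_id, text in docs.items():
            for pos, term in enumerate(text.lower().split()):
                index[term].setdefault(doc_id, []).append(pos)
        return index

    def near(index, t1, t2, window=3):
        """Documents where t1 and t2 occur within `window` positions of each other."""
        hits = []
        for doc_id in set(index.get(t1, {})) & set(index.get(t2, {})):
            if any(abs(p1 - p2) <= window
                   for p1 in index[t1][doc_id] for p2 in index[t2][doc_id]):
                hits.append(doc_id)
        return hits

    docs = {1: "time space trade offs for search engines",
            2: "search systems use positional ranking functions"}
    index = build_positional_index(docs)
    print(near(index, "positional", "ranking", window=1))   # -> [2]

Storing the position lists is what roughly triples the index size relative to a document-level index, which is the overhead the paper's trade-offs target.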

18.
A Search Result Ranking Algorithm Based on User Tagging   Cited by: 1 (0 self-citations, 1 by others)
With the rapid development of computer networks, the amount of information on the Web has grown increasingly large and complex. Helping people obtain the information they need from massive Web data accurately and quickly is the primary problem that search engines must solve, and various ranking algorithms have emerged for this purpose. At present, however, Web page information is represented in very simple forms, and the queries users formulate are even simpler, which makes it difficult to judge the relevance between page content and user queries. This paper first categorizes and summarizes existing search engine ranking algorithms and analyzes their strengths and weaknesses, then proposes a new method based on semantic tags derived from user feedback, and finally compares the results against Google's using several evaluation measures. The experimental results show that the rankings produced by this method are closer to user needs than Google's rankings.

19.
The exponential growth of information on the Web has introduced new challenges for building effective search engines. A major problem of web search is that search queries are usually short and ambiguous, and thus are insufficient for specifying the precise user needs. To alleviate this problem, some search engines suggest terms that are semantically related to the submitted queries so that users can choose from the suggestions the ones that reflect their information needs. In this paper, we introduce an effective approach that captures the user's conceptual preferences in order to provide personalized query suggestions. We achieve this goal with two new strategies. First, we develop online techniques that extract concepts from the web-snippets of the search result returned from a query and use the concepts to identify related queries for that query. Second, we propose a new two-phase personalized agglomerative clustering algorithm that is able to generate personalized query clusters. To the best of the authors' knowledge, no previous work has addressed personalization for query suggestions. To evaluate the effectiveness of our technique, a Google middleware was developed for collecting clickthrough data to conduct experimental evaluation. Experimental results show that our approach has better precision and recall than the existing query clustering methods.
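As a rough illustration of the clustering step only, the sketch below groups queries whose snippet-derived concept sets overlap, using ordinary agglomerative clustering from SciPy. It covers a single phase and ignores clickthrough-based personalization, which is the second phase of the paper's algorithm; the distance metric, cut threshold and example concepts are assumptions.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import pdist

    def cluster_queries(query_concepts, cut=0.6):
        """Group queries by overlap of their extracted concept sets.

        query_concepts -- dict: query string -> set of concepts mined from its snippets
        """
        queries = sorted(query_concepts)
        concepts = sorted({c for s in query_concepts.values() for c in s})
        # binary query-by-concept matrix
        m = np.array([[1.0 if c in query_concepts[q] else 0.0 for c in concepts]
                      for q in queries])
        dist = pdist(m, metric="cosine")            # 1 - cosine similarity
        labels = fcluster(linkage(dist, method="average"), t=cut, criterion="distance")
        clusters = {}
        for q, lab in zip(queries, labels):
            clusters.setdefault(lab, []).append(q)
        return list(clusters.values())

    qc = {"apple price": {"fruit", "grocery"},
          "apple pie recipe": {"fruit", "cooking"},
          "apple macbook": {"computer", "laptop"}}
    print(cluster_queries(qc))   # the two fruit-related queries end up together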

20.
Systems that produce ranked lists of results are abundant; for instance, Web search engines return ranked lists of Web pages. There has been work on distance measures for list permutations, such as Kendall tau and Spearman's footrule, as well as extensions to handle top-k lists, which are more common in practice. In addition to ranking whole objects (e.g., Web pages), an increasing number of systems provide keyword search on XML or other semistructured data and produce ranked lists of XML sub-trees. Unfortunately, previous distance measures are not suitable for ranked lists of sub-trees since they do not account for the possible overlap between the returned sub-trees; that is, two sub-trees differing by a single node would be considered separate objects. In this paper, we present the first distance measures for ranked lists of sub-trees and show under what conditions these measures are metrics. Furthermore, we present algorithms to efficiently compute these distance measures. Finally, we evaluate and compare the proposed measures on real data using three popular XML keyword proximity search systems.
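For context, here is a sketch of the flat-list baseline the abstract refers to: Spearman's footrule for top-k lists, where items missing from one list are placed at a virtual position k+1. The sub-tree-aware measures that are the paper's contribution are not shown, and the example lists are made up.

    def footrule_topk(list_a, list_b):
        """Spearman's footrule distance between two top-k lists.

        Items absent from one list get the virtual position k+1, the usual
        convention for comparing top-k lists that do not share all items.
        """
        k = max(len(list_a), len(list_b))
        pos_a = {x: i + 1 for i, x in enumerate(list_a)}
        pos_b = {x: i + 1 for i, x in enumerate(list_b)}
        items = set(pos_a) | set(pos_b)
        return sum(abs(pos_a.get(x, k + 1) - pos_b.get(x, k + 1)) for x in items)

    print(footrule_topk(["t1", "t2", "t3"], ["t2", "t4", "t1"]))   # -> 6

Treating each sub-tree id as an opaque item is exactly the limitation the paper addresses, since two heavily overlapping sub-trees here count as completely different results.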

