Similar Documents
20 similar documents found.
1.
Wei Lin. Microcomputer Development (《微机发展》), 2007, 17(9): 65-67
Search engines return too much information and cannot tailor results to a user's interests, so it is hard for users to find the documents they care about in a simple way. Personalized recommendation is an effective approach to easing the user's burden in information retrieval. This paper combines content-filtering and document-clustering techniques to implement a personalized recommendation system based on search results: clustering organizes the search results automatically, and documents likely to interest the user are recommended proactively. A probabilistic user-interest model is built, and content filtering is applied on top of STC clustering of the search results. Experiments show that the probabilistic model expresses user interests, and changes in them, better than the vector space model.
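The probabilistic interest model and content-filtering step described above can be sketched as a smoothed unigram model built from documents the user has shown interest in, with candidate snippets ranked by average log-likelihood. This is a minimal illustration, not the paper's implementation; the function names, whitespace tokenization, and smoothing constant are all assumptions.

```python
import math
from collections import Counter

def build_interest_model(liked_docs, alpha=1.0):
    """Estimate P(term | user interest) with Laplace smoothing."""
    counts = Counter()
    for doc in liked_docs:
        counts.update(doc.lower().split())
    vocab = set(counts)
    total = sum(counts.values()) + alpha * len(vocab)
    probs = {t: (counts[t] + alpha) / total for t in vocab}
    unseen = alpha / total  # probability mass for unseen terms
    return probs, unseen

def interest_score(model, snippet):
    """Average log-likelihood of a snippet under the interest model."""
    probs, unseen = model
    terms = snippet.lower().split()
    if not terms:
        return float("-inf")
    return sum(math.log(probs.get(t, unseen)) for t in terms) / len(terms)

def filter_results(model, snippets, top_n=2):
    """Keep the top-n snippets most consistent with the user's interests."""
    return sorted(snippets, key=lambda s: interest_score(model, s), reverse=True)[:top_n]
```

A snippet sharing vocabulary with the liked documents scores higher than an off-topic one, which is the filtering behavior the abstract contrasts with the vector space model.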

2.
Research on a Personalized Recommendation System Based on Search Results (total citations: 1; self: 0, others: 1)
Search engines return too much information and cannot tailor results to a user's interests, so it is hard for users to find the documents they care about in a simple way. Personalized recommendation is an effective approach to easing the user's burden in information retrieval. This paper combines content-filtering and document-clustering techniques to implement a personalized recommendation system based on search results: clustering organizes the search results automatically, and documents likely to interest the user are recommended proactively. A probabilistic user-interest model is built, and content filtering is applied on top of STC clustering of the search results. Experiments show that the probabilistic model expresses user interests, and changes in them, better than the vector space model.

3.
PCCS Partial Clustering and Classification: A Fast Web Document Clustering Method (total citations: 16; self: 1, others: 15)
PCCS is a partial clustering method for fast Web document clustering, intended to help Web users sift the documents they need out of the large number of snippets a search engine returns. It first clusters part of the documents, then uses the resulting class model to classify the rest: an interactive procedure that refines one cluster at a time quickly builds a cluster digest set, and the remaining documents are assigned by a Naive Bayes classifier. To make clustering and classification more efficient, a hybrid feature-selection method reduces the dimensionality of the document representation: the entropy of each feature in the documents is recomputed and the features with the largest entropy are selected, or features are drawn from the feature set of a persistent classification model. Experiments show that partial clustering organizes Web documents by topic quickly and accurately, letting users view search-engine results at a higher topic level and pick relevant documents from clusters of topically similar ones.
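The entropy-based half of the hybrid feature selection can be sketched as follows: recompute each term's entropy over its distribution of occurrences across documents and keep the highest-entropy terms, i.e. those spread across many documents. A minimal sketch under assumed whitespace tokenization; the names are illustrative, not from the paper.

```python
import math
from collections import Counter, defaultdict

def term_entropy(docs):
    """Entropy of each term's distribution of occurrences across documents."""
    per_doc = [Counter(d.lower().split()) for d in docs]
    totals = defaultdict(int)
    for c in per_doc:
        for t, n in c.items():
            totals[t] += n
    entropy = {}
    for t, total in totals.items():
        h = 0.0
        for c in per_doc:
            if c[t]:
                p = c[t] / total  # fraction of t's occurrences in this doc
                h -= p * math.log2(p)
        entropy[t] = h
    return entropy

def select_features(docs, k):
    """Keep the k terms with the highest entropy (ties broken alphabetically)."""
    ent = term_entropy(docs)
    return [t for t, _ in sorted(ent.items(), key=lambda kv: (-kv[1], kv[0]))[:k]]
```

A term occurring in only one document has entropy 0, so terms shared across the collection dominate the selected feature set.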

4.
Xia Bin, Xu Bin. Computer Development & Applications (《电脑开发与应用》), 2007, 20(5): 16-17, 20
To address the problem that search engines return too many candidates for users to accurately locate results relevant to their topic, this paper proposes a hyperlink-based method for clustering search-engine results: by mining both the anchor text of a page's hyperlinks and the page's content, Web pages are grouped into distinct subcategories. While clustering by page content, the method makes full use of Web structure and hyperlink information, so it reflects the content characteristics of site documents better than traditional structure mining and improves clustering accuracy.
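One way to combine anchor-text and page-content evidence, in the spirit of the method above, is a weighted blend of two cosine similarities. The 0.4 anchor weight and the field layout are illustrative assumptions, not values from the paper.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def page_similarity(p, q, anchor_weight=0.4):
    """Blend anchor-text similarity with body-text similarity.

    p and q are dicts with 'anchor' and 'body' strings; the weight is an
    assumed parameter, not one reported in the paper."""
    sim_anchor = cosine(Counter(p["anchor"].lower().split()),
                        Counter(q["anchor"].lower().split()))
    sim_body = cosine(Counter(p["body"].lower().split()),
                      Counter(q["body"].lower().split()))
    return anchor_weight * sim_anchor + (1 - anchor_weight) * sim_body
```

This blended similarity can then feed any standard clustering routine in place of a content-only measure.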

5.
Search engines typically return a list containing a large number of document snippets, from which the user must sift out what they need. This paper proposes a prefetching agent: it clusters the results a search engine returns so that the user can browse them by topic, providing personalized service for the user's query; it also evaluates the clusters to predict which documents the user is likely to want, and prefetches them in advance to reduce network latency.

6.
Research and Implementation of a Fast Clustering Method for Web Search Results (total citations: 2; self: 0, others: 2)
To help Web users sift the documents they need out of the many snippets a search engine returns, this paper presents a fast clustering method for Web search results based on an analysis of the clustering process. Every stage, from building the index model and computing similarity to forming the final clusters, is analyzed and simplified. Similarity between returned results is computed from three parts: the title, the URL, and the document snippet. The first batch of returned results is partially clustered with an undirected-graph mapping method, and the remaining results are then assigned to their nearest cluster to form the final clustering. The method is simple to implement; experiments show it responds quickly, produces clusters of comparatively high relevance, and uses little memory.
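The pipeline above, lightweight similarity from title, URL, and snippet followed by partial clustering over an undirected similarity graph, might look like this. The field weights, the threshold, and the use of connected components as the graph-mapping step are all assumptions for illustration.

```python
def field_jaccard(a, b):
    """Jaccard similarity of the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def result_similarity(r, s, weights=(0.5, 0.2, 0.3)):
    """Weighted similarity over title, URL tokens, and snippet (weights assumed)."""
    wt, wu, ws = weights
    tok = lambda url: url.replace("/", " ").replace(".", " ")
    return (wt * field_jaccard(r["title"], s["title"])
            + wu * field_jaccard(tok(r["url"]), tok(s["url"]))
            + ws * field_jaccard(r["snippet"], s["snippet"]))

def graph_cluster(results, threshold=0.3):
    """Link results whose similarity exceeds a threshold; clusters are the
    connected components of the resulting undirected graph (union-find)."""
    n = len(results)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if result_similarity(results[i], results[j]) >= threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

Remaining results (the ones not in the first batch) would then be assigned to whichever component they are most similar to.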

7.
A Dynamic Clustering Algorithm for Search Results Based on User Interest (total citations: 1; self: 0, others: 1)
Search engines currently return many results of all kinds mixed together, with no focus, so users still spend a great deal of time hunting for the documents that interest them. This paper proposes a dynamic clustering algorithm for search results: exploiting the characteristics of the user's interests, it extracts summaries from the result documents and clusters them dynamically, as the user browses, into distinct categories. The user then only needs to find the category of interest to obtain plenty of relevant documents. Experiments show the method is effective and robust to noise.
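Dynamic clustering that keeps pace with the user's browsing can be sketched as single-pass assignment: each newly seen summary joins the most similar existing cluster or starts a new one. The threshold, the growing-centroid representation, and the class name below are illustrative, not the paper's algorithm.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class DynamicClusterer:
    """Single-pass clustering: summaries arrive one at a time as the user
    browses, and each joins its best cluster or opens a new one."""
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.centroids = []   # one term-frequency Counter per cluster
        self.members = []     # document indices per cluster
        self.seen = 0

    def add(self, summary):
        vec = Counter(summary.lower().split())
        best, best_sim = None, self.threshold
        for i, c in enumerate(self.centroids):
            sim = cosine(vec, c)
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            self.centroids.append(vec)
            self.members.append([self.seen])
            best = len(self.centroids) - 1
        else:
            self.centroids[best].update(vec)  # centroid grows with members
            self.members[best].append(self.seen)
        self.seen += 1
        return best
```

Because assignment is incremental, the cluster view can be refreshed after every document the user opens.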

8.
Search engines currently return too much information and have difficulty tailoring results to a user's interests; personalized recommendation is an effective way to ease the user's burden in information retrieval. This paper combines content filtering with document clustering, organizes search results with an improved STC clustering method, proactively recommends documents of interest, and prefetches the Top-N objects to the local machine. The Web documents in the WWW cache represent the user's current interests; a probabilistic user-interest model is built, and content filtering is applied on top of STC clustering of the search results. Experiments show that this search-result-based Web prefetching model achieves good time performance and high precision.

9.
Personalized Metasearch Combining Clustering with User Interest Analysis (total citations: 1; self: 1, others: 0)
With the rapid growth of Web information, search engines have become users' main tool for information retrieval. A metasearch engine combines the results of multiple search engines, improving search coverage, but it often returns a huge number of results, many of them irrelevant to the query, which directly degrades retrieval quality and raises the user's retrieval cost. This paper proposes a clustering-based personalized metasearch engine model: the system builds interest models for its users, clusters those models into distinct user groups, clusters the retrieved results as well, and combines the two clusterings to return personalized search results to the user.
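The merge-then-personalize flow of such a metasearch model can be sketched as interleaving ranked lists from several engines with URL deduplication, then re-ranking by overlap with the user's interest profile. Everything here is an illustrative simplification of the described system, not its implementation.

```python
def merge_results(result_lists):
    """Interleave ranked lists from several engines, deduplicating by URL."""
    merged, seen = [], set()
    for rank in range(max(len(lst) for lst in result_lists)):
        for lst in result_lists:
            if rank < len(lst) and lst[rank]["url"] not in seen:
                seen.add(lst[rank]["url"])
                merged.append(lst[rank])
    return merged

def rerank_by_interest(results, interest_terms):
    """Stable re-rank: results sharing more terms with the profile come first."""
    profile = set(t.lower() for t in interest_terms)
    def overlap(r):
        return len(profile & set(r["snippet"].lower().split()))
    return sorted(results, key=overlap, reverse=True)
```

Python's `sorted` is stable, so results with equal interest overlap keep their merged (coverage-based) order.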

10.
Chang Hao, Chen Li. Microcomputer Information (《微计算机信息》), 2006, 22(24): 302-304
The Internet is a huge, widely distributed, highly dynamic global information service, and finding the relevant information one wants on it is difficult. Users typically retrieve information by giving a search engine a few short keywords, but the engine returns so many relevant results that processing them is time-consuming. This paper proposes a semantic virtual document (SVD) representation for Web documents and, on that basis, implements an agglomerative hierarchical clustering algorithm to automatically cluster Web documents with similar content. The result both strengthens the user's ability to judge and process relevant results, helping them find the information they want quickly and efficiently from the Internet, and enriches the knowledge representation available to Web content mining.
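Agglomerative hierarchical clustering of the kind applied here can be sketched with average linkage over token sets; representing an SVD as a plain bag of tokens is a deliberate simplification, and the stopping threshold is an assumption.

```python
def jaccard(a, b):
    """Jaccard similarity of two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def agglomerative_cluster(docs, min_sim=0.2):
    """Average-linkage agglomerative clustering of token sets.

    Repeatedly merge the two most similar clusters until no pair's average
    pairwise similarity reaches min_sim (the threshold is assumed)."""
    clusters = [[set(d.lower().split())] for d in docs]
    ids = [[i] for i in range(len(docs))]
    while len(clusters) > 1:
        best, best_sim = None, min_sim
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                sims = [jaccard(a, b) for a in clusters[i] for b in clusters[j]]
                avg = sum(sims) / len(sims)
                if avg >= best_sim:
                    best, best_sim = (i, j), avg
        if best is None:
            break
        i, j = best
        clusters[i].extend(clusters.pop(j))
        ids[i].extend(ids.pop(j))
    return ids
```

The returned value is a flat partition (lists of document indices); keeping the merge history instead would yield the full dendrogram.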

11.
Traditional search engines retrieve by keyword, yet a document's keywords are not necessarily related to the document, and relevant documents do not necessarily contain the keyword explicitly. A search engine based on the semantic Web uses ontology technology to describe keywords semantically. When it receives a user's search request, it first performs concept reasoning over the request against a pre-built ontology base, then submits the reasoning result to a traditional search engine, and finally returns the search results to the user. Compared with traditional search engines, a semantic-Web search engine effectively improves both recall and precision.
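The concept-reasoning step can be approximated, very roughly, by expanding a query term through a small ontology of synonyms and narrower concepts before handing the expanded term set to a keyword engine. The toy ontology below is invented purely for illustration; a real system would reason over a full ontology base.

```python
# A toy ontology: each concept lists its synonyms and narrower concepts.
ONTOLOGY = {
    "vehicle": {"synonyms": ["automobile"], "narrower": ["car", "truck"]},
    "car": {"synonyms": ["auto"], "narrower": ["sedan"]},
}

def expand_query(term, depth=2):
    """Collect a term plus its synonyms and narrower concepts, recursively,
    up to a fixed depth."""
    terms = {term}
    if depth and term in ONTOLOGY:
        terms.update(ONTOLOGY[term]["synonyms"])
        for sub in ONTOLOGY[term]["narrower"]:
            terms.update(expand_query(sub, depth - 1))
    return terms
```

Searching for any term in the expanded set lets a keyword engine retrieve documents that never mention the original query word, which is where the recall gain comes from.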

12.
Domain-specific Web search with keyword spices (total citations: 4; self: 0, others: 4)
Domain-specific Web search engines are effective tools for reducing the difficulty experienced when acquiring information from the Web. Existing methods for building domain-specific Web search engines require human expertise or specific facilities. However, we can build a domain-specific search engine simply by adding domain-specific keywords, called "keyword spices," to the user's input query and forwarding it to a general-purpose Web search engine. Keyword spices can be effectively discovered from Web documents using machine learning technologies. The paper describes domain-specific Web search engines that use keyword spices for locating recipes, restaurants, and used cars.
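The keyword-spice mechanism itself is simple to sketch: conjoin the user's query with a learned boolean expression of domain keywords before forwarding it to a general-purpose engine. The sample spice below is made up for illustration, not one actually learned in the paper.

```python
def add_spice(user_query, spice):
    """Append a domain-specific 'keyword spice' (a boolean expression of
    keywords) to the user's query before forwarding it to a general engine."""
    return f"({user_query}) AND ({spice})"

# Hypothetical spice for the recipe domain (illustrative only); the paper
# learns such expressions from Web documents with machine learning.
RECIPE_SPICE = "ingredients OR tablespoon OR preheat"
```

The spiced query biases a general-purpose engine toward the target domain without any special indexing infrastructure.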

13.
Research on a Web Search Engine Framework (total citations: 43; self: 1, others: 42)
Web search engines are very useful information retrieval tools on the Internet, but the volume of information they return is huge, and a given search engine mainly covers one particular domain, which makes it hard for users to obtain accurate navigation information from any single engine. This paper proposes a new Web search engine framework, GSE, together with WERPL, a language suited to Web information acquisition and processing. With WERPL, multiple Web search engines can be combined to provide users with a consistent, efficient, and accurate Web search engine.

14.
The Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: taken as a whole, the set of Web pages lacks a unifying structure and shows far more authoring style and content variation than that seen in traditional text document collections. This level of complexity makes an "off-the-shelf" database management and information retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. Such engines can build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained key words and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or million relevant Web pages. How then, from this sea of pages, should a search engine select the correct ones: those of most value to the user? Clever is a search engine that analyzes hyperlinks to uncover two types of pages: authorities, which provide the best source of information on a given topic; and hubs, which provide collections of links to authorities. We outline the thinking that went into Clever's design, report briefly on a study that compared Clever's performance to that of Yahoo and AltaVista, and examine how our system is being extended and updated.
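Clever's hub and authority scores build on the HITS scheme, which can be sketched as alternating power iteration over the link graph. This is the textbook iteration, not Clever's production algorithm, and the example graph in the test is invented.

```python
import math

def hits(pages, links, iterations=20):
    """Iteratively compute hub and authority scores over a link graph.

    links is a set of (src, dst) pairs. Each round: a page's authority is
    the sum of the hub scores pointing at it; its hub score is the sum of
    the authority scores it points at; both vectors are then normalized."""
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        auth = {p: sum(hub[s] for s, d in links if d == p) for p in pages}
        hub = {p: sum(auth[d] for s, d in links if s == p) for p in pages}
        na = math.sqrt(sum(v * v for v in auth.values())) or 1.0
        nh = math.sqrt(sum(v * v for v in hub.values())) or 1.0
        auth = {p: v / na for p, v in auth.items()}
        hub = {p: v / nh for p, v in hub.items()}
    return auth, hub
```

Pages linked to by many good hubs surface as authorities, and pages linking to many good authorities surface as hubs, exactly the two page types the abstract describes.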

15.
It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

16.
With the development of the Web, an information "Big Bang" has taken place on the Internet. Search engines have become one of the most helpful tools for obtaining useful information from the Internet. However, instead of caring about the semantics of information, the machinery of the current Web cares only about the location and display of information. Because of this shortcoming, even the most popular search engines cannot produce satisfactory results. The development of the next-generation Web, the semantic Web, will turn the situation around completely. This paper proposes a prototype relation-based search engine, "OntoLook," which has been implemented in a virtual semantic Web environment in our lab. We also present its system architecture and analyze the key algorithm.

17.
Semantic Web search is a new application of recent advances in information retrieval (IR), natural language processing, artificial intelligence, and other fields. The Powerset group in Microsoft develops a semantic search engine that aims to answer queries not only by matching keywords, but by actually matching meaning in queries to meaning in Web documents. Compared to typical keyword search, semantic search can pose additional engineering challenges for the back-end and infrastructure designs. Of these, the main challenge addressed in this paper is how to lower query latencies to acceptable, interactive levels. Index-based semantic search requires more data processing, such as numerous synonyms, hypernyms, multiple linguistic readings, and other semantic information, both on queries and in the index. In addition, some of the algorithms can be super-linear, such as matching co-references across a document. Consequently, many semantic queries can run significantly slower than the same keyword query. Users, however, have grown to expect Web search engines to provide near-instantaneous results, and a slow search engine could be deemed unusable even if it provides highly relevant results. It is therefore imperative for any search engine to meet its users' interactivity expectations, or risk losing them. Our approach to tackle this challenge is to exploit data parallelism in slow search queries to reduce their latency in multi-core systems. Although all search engines are designed to exploit parallelism, at the single-node level this usually translates to throughput-oriented task parallelism. This paper focuses on the engineering of two latency-oriented approaches (coarse- and fine-grained) and compares them to the task-parallel approach. We use Powerset's deployed search engine to evaluate the various factors that affect parallel performance: workload, overhead, load balancing, and resource contention. We also discuss heuristics to selectively control the degree of parallelism and consequent overhead on a query-by-query level. Our experimental results show that using fine-grained parallelism with these dynamic heuristics can significantly reduce query latencies compared to fixed, coarse-granularity parallelization schemes. Although these results were obtained on, and optimized for, Powerset's semantic search, they can be readily generalized to a wide class of inverted-index search engines.
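The fine-grained, latency-oriented decomposition discussed above can be illustrated by splitting an index into partitions, scoring them concurrently, and merging the partial results into one top-k list. This thread-based Python sketch only shows the decomposition; a production engine would parallelize native scoring code across cores, and the index layout and term-count scoring function here are assumptions, not Powerset's design.

```python
from concurrent.futures import ThreadPoolExecutor
import heapq

def score_partition(partition, query_terms):
    """Score one slice of the index: here, count matching terms per doc."""
    q = set(query_terms)
    return [(sum(1 for t in doc["terms"] if t in q), doc["id"]) for doc in partition]

def parallel_search(index, query_terms, workers=4, k=3):
    """Split the index into chunks, score them concurrently, then merge the
    partial (score, id) lists into a single top-k result list."""
    chunks = [index[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(score_partition, chunks, [query_terms] * workers))
    merged = [pair for part in partials for pair in part]
    return [doc_id for score, doc_id in heapq.nlargest(k, merged) if score > 0]
```

The per-query choice between running one chunk (coarse) and many chunks (fine) is exactly where the paper's dynamic heuristics would plug in.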

18.
Jansen, B.J., Spink, A. Computer, 2007, 40(8): 52-57
Analysis of data from a major metasearch engine reveals that sponsored-link click-through rates appear lower than previously reported. Combining sponsored and nonsponsored links in a single listing, while providing some benefits to users, does not appear to increase clicks on sponsored listings. In a competitive market, rivals continually strive to improve their information-retrieval capabilities and increase their financial returns. One innovation is sponsored search, an "economics meets search" model in which content providers pay search engines for user traffic going from the search engine to their Web sites. Sponsored search has proven to be a successful business model for Web search engines, advertisers, and online vendors, as well as an effective way to deliver content to searchers. The "impact of sponsored search" sidebar describes some of the model's notable benefits.

19.
Cluster-Based Browsing Technology in Search Engines (total citations: 1; self: 0, others: 1)
Most search engines display search results to the user as a list of documents. As the number of Web documents grows explosively, finding relevant information becomes harder and harder; one remedy is to cluster the search results to make them more browsable. Cluster-based browsing lets users view search results at a higher topic level and conveniently find the information that interests them. This paper introduces the basic requirements that cluster-based browsing in search engines places on clustering algorithms and how such algorithms are categorized, analyzes the characteristics of the main clustering algorithms and their refinements, discusses the evaluation of clustering quality, and finally points out development trends in cluster-based browsing technology.

20.
Design of a Specialized Web Information Gathering System Based on Mobile Crawlers (total citations: 3; self: 0, others: 3)
Search engines have become an essential navigation tool on the Web. To provide powerful search capability, a search engine maintains a detailed index of the documents accessible online. Creating and maintaining this index is the job of Web crawlers, which recursively traverse and download Web pages on the search engine's behalf; once downloaded, pages are analyzed, indexed, and made available for retrieval. This article describes a more efficient way to build a Web index, based on mobile crawlers (MobileCrawler): a crawler is first dispatched to the site where the data resides, and any unwanted data is filtered out locally there before anything is sent back to the search engine. This approach is especially suited to implementing so-called "smart" crawling algorithms, which choose an effective crawl path based on the content of pages already visited. Mobile crawlers combine two technology trends, mobile computing and specialized search engines, and offer a sound technical solution to the problems today's general-purpose search engines face.
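The core mobile-crawler idea, filtering pages locally at the remote site and transmitting only relevant ones, can be sketched with an injected fetch function so the traversal and filtering logic stay self-contained. The keyword-match relevance test and all names are illustrative assumptions, not the article's design.

```python
def is_relevant(page_text, topic_terms):
    """Local relevance filter, applied at the crawled site before transmission."""
    words = set(page_text.lower().split())
    return any(t in words for t in topic_terms)

def mobile_crawl(start_urls, fetch, topic_terms, max_pages=100):
    """Traverse pages breadth-first via an injected fetch(url) -> (text, links);
    only pages passing the local filter are 'transmitted' back to the engine."""
    queue, seen, transmitted = list(start_urls), set(), []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        text, links = fetch(url)
        if is_relevant(text, topic_terms):
            transmitted.append(url)
        queue.extend(links)
    return transmitted
```

Because filtering happens before transmission, only the `transmitted` list crosses the network, which is the bandwidth saving the article attributes to mobile crawlers.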


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号