20 similar documents found; search took 93 ms.
1.
Search engines return too much information and cannot tailor results to a user's interests, making it hard for users to find documents of interest in a simple way. Personalized recommendation is an effective approach to easing the user's burden in information retrieval. This paper combines content filtering with document clustering to build a personalized recommendation system over search results: clustering organizes the results automatically, and documents likely to interest the user are recommended proactively. A probabilistic user-interest model is built, and content filtering is applied on top of STC clustering of the search results. Experiments show that the probabilistic model expresses the user's interests, and changes in them, better than the vector space model.
2.
Research on a Personalized Recommendation System Based on Search Results  Cited by: 1 (self-citations: 0, by others: 1)
Wei Lin, 《计算机技术与发展》 (Computer Technology and Development), 2007, 17(9): 65-67, 70
Search engines return too much information and cannot tailor results to a user's interests, making it hard for users to find documents of interest in a simple way. Personalized recommendation is an effective approach to easing the user's burden in information retrieval. This paper combines content filtering with document clustering to build a personalized recommendation system over search results: clustering organizes the results automatically, and documents likely to interest the user are recommended proactively. A probabilistic user-interest model is built, and content filtering is applied on top of STC clustering of the search results. Experiments show that the probabilistic model expresses the user's interests, and changes in them, better than the vector space model.
3.
PCCS Partial Clustering with Classification: A Fast Web Document Clustering Method  Cited by: 16 (self-citations: 1, by others: 15)
PCCS is a fast partial-clustering method for Web documents, designed to help Web users sift the documents they need out of the large number of snippets a search engine returns. It first clusters a portion of the documents, then builds a class model from the clustering result and classifies the rest: an interactive, one-cluster-at-a-time refinement quickly creates a set of cluster digests, and the remaining documents are assigned with a Naive Bayes classifier. To make clustering and classification more efficient, a hybrid feature-selection method reduces the dimensionality of the document representation: it recomputes the entropy of each feature in the documents and keeps the top-ranked features by entropy, or selects features from the feature set of a persistent classification model. Experiments show that partial clustering organizes Web documents by topic quickly and accurately, letting users view search-engine results at a higher topical level and pick relevant documents from clusters of topically similar ones.
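The cluster-then-classify idea in this abstract can be sketched roughly as follows. This is a toy illustration with hypothetical data and helper names, not the paper's PCCS implementation: a small seed set of snippets is assumed to be already grouped by topic, and the remaining documents are assigned to those groups with a simple multinomial Naive Bayes classifier.

```python
# Toy sketch: cluster a seed set, then classify the rest with Naive Bayes.
from collections import Counter
import math

seed_clusters = {  # assume these seed documents were already clustered
    "search": ["web search engine query", "search engine index query"],
    "cluster": ["document cluster grouping", "cluster documents by topic"],
}
remaining = ["engine returns query results",
             "group similar documents into a cluster"]

# Per-cluster term counts form the multinomial Naive Bayes model.
word_counts = {c: Counter(w for d in docs for w in d.split())
               for c, docs in seed_clusters.items()}
vocab = {w for counts in word_counts.values() for w in counts}

def classify(doc):
    best, best_score = None, -math.inf
    for c, counts in word_counts.items():
        total = sum(counts.values())
        # uniform cluster prior; add-one smoothing over the vocabulary
        score = sum(math.log((counts[w] + 1) / (total + len(vocab)))
                    for w in doc.split())
        if score > best_score:
            best, best_score = c, score
    return best

assignments = {d: classify(d) for d in remaining}
```

The entropy-based feature selection the abstract mentions would prune `vocab` before training; it is omitted here for brevity.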
4.
To address the problem that search engines return too many candidates for users to pinpoint results relevant to their topic, this paper proposes a hyperlink-based method for clustering search-engine results. By mining both the anchor text of a page's hyperlinks and the page content, it groups pages into distinct subcategories. While clustering on page content, the method makes full use of Web structure and hyperlink information, so it reflects the content characteristics of site documents better than traditional structure-mining methods and improves clustering accuracy.
5.
6.
Research and Implementation of a Fast Clustering Method for Web Search Results  Cited by: 2 (self-citations: 0, by others: 2)
To help Web users sift the documents they need out of the many snippets a search engine returns, this paper presents a fast clustering method for Web search results, based on an analysis of the clustering process. Every stage, from building the index model and computing similarity to forming the final clusters, is analyzed and simplified. Similarity between returned results is computed from the information in three parts: the title, the URL, and the document snippet. The first batch of returned results is partially clustered with an undirected-graph mapping method, and each remaining result is then assigned to the nearest cluster to form the final clustering. The method is simple to implement; experiments show fast response, fairly high clustering relevance, and low memory use.
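The three-part similarity this abstract describes can be sketched as a weighted mix of title, URL, and snippet overlap. The weights, field names, and Jaccard token overlap below are illustrative assumptions, not the paper's actual formula:

```python
# Sketch: weighted similarity between two search results using their
# title, URL, and snippet (hypothetical weights and data).

def tokens(text):
    # crude tokenizer; also splits URL paths on "/" and "."
    return set(text.lower().replace("/", " ").replace(".", " ").split())

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def result_similarity(r1, r2, weights=(0.5, 0.2, 0.3)):
    """Weighted mix of title, URL, and snippet similarity."""
    parts = ("title", "url", "snippet")
    return sum(w * jaccard(tokens(r1[p]), tokens(r2[p]))
               for w, p in zip(weights, parts))

r1 = {"title": "web clustering survey", "url": "example.com/clustering",
      "snippet": "a survey of web document clustering methods"}
r2 = {"title": "clustering web results", "url": "example.com/web",
      "snippet": "clustering methods for web search results"}
sim = result_similarity(r1, r2)
```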
7.
Search engines currently return many results, with all kinds of documents mixed together and no targeting, so users still spend a lot of time finding the documents they care about. This paper proposes a dynamic clustering algorithm for search results: exploiting the user's interests, it extracts summaries from the result documents and clusters them dynamically as the user browses, grouping the documents into categories. The user only needs to pick the category of interest to obtain plenty of relevant documents. Experiments show the method is effective and robust to noise.
8.
Search engines currently return too much information and have difficulty tailoring results to a user's interests; personalized recommendation is an effective way to ease the user's burden in information retrieval. This paper combines content filtering with document clustering: an improved STC clustering method organizes the search results, documents likely to interest the user are recommended proactively, and the Top-N objects among them are prefetched locally. The Web documents in the WWW cache represent the user's current interests; a probabilistic user-interest model is built from them, and content filtering is applied on top of STC clustering of the search results. Experiments show that this search-result-based Web prefetching model achieves good time performance and high precision.
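The probabilistic interest model in this entry can be sketched as follows. The term data, vocabulary size, and smoothing are hypothetical; the idea is only that the user's cached documents define a term distribution, and result clusters are ranked by how likely their terms are under it:

```python
# Sketch: rank result clusters by likelihood under a user-interest
# distribution estimated from cached documents (toy data).
from collections import Counter
import math

cached_docs = ["web search engine", "search results clustering"]
interest = Counter(w for d in cached_docs for w in d.split())
total = sum(interest.values())

def cluster_score(cluster_terms, vocab_size=100):
    # average log-probability of the cluster's terms, add-one smoothed
    return sum(math.log((interest[t] + 1) / (total + vocab_size))
               for t in cluster_terms) / len(cluster_terms)

clusters = {"c1": ["search", "clustering"], "c2": ["cooking", "recipes"]}
ranked = sorted(clusters, key=lambda c: cluster_score(clusters[c]),
                reverse=True)
# the Top-N clusters in `ranked` would be the prefetch candidates
```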
9.
10.
The Internet is a huge, widely distributed, highly dynamic global information service, and finding the relevant information one wants on it is difficult. Users typically retrieve information by giving a search engine a few short keywords, but the engine returns so many relevant results that processing them is very time-consuming. This paper proposes semantic virtual documents (SVD) to represent Web documents and, on that basis, implements an agglomerative hierarchical clustering algorithm to automatically cluster Web documents with similar content. As a result, users can judge and process the relevant results more easily and discover the information they want from the Internet quickly and efficiently; in addition, the knowledge representation of the returned results strengthens Web content mining.
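Agglomerative hierarchical clustering, as used in this entry, can be sketched over toy term vectors. The documents below are hypothetical stand-ins for the paper's semantic virtual documents, whose construction is not reproduced here; the merge criterion (average pairwise cosine) is one common choice among several:

```python
# Sketch: bottom-up agglomerative clustering of toy document vectors.
import math

docs = {
    "d1": {"web": 2, "search": 1},
    "d2": {"web": 1, "search": 2},
    "d3": {"cluster": 2, "hierarchy": 1},
    "d4": {"cluster": 1, "hierarchy": 2},
}

def cosine(a, b):
    dot = sum(a.get(t, 0) * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def agglomerate(docs, k):
    clusters = [[name] for name in docs]
    while len(clusters) > k:
        # merge the pair of clusters with the highest average similarity
        i, j = max(
            ((i, j) for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda ij: sum(cosine(docs[a], docs[b])
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]])
                           / (len(clusters[ij[0]]) * len(clusters[ij[1]])))
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

groups = agglomerate(docs, 2)
```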
11.
Traditional search engines retrieve by keyword, yet a document's keywords are not necessarily related to the document, and relevant documents do not necessarily contain the keyword explicitly. A semantic-Web search engine uses ontology technology to describe keywords semantically: when a user submits a search request, the engine first performs concept reasoning over a previously built ontology base, then passes the reasoning result to a traditional search engine, and finally returns the search results to the user. Compared with traditional search engines, a semantic-Web search engine effectively improves both recall and precision.
12.
Domain-specific Web search with keyword spices  Cited by: 4 (self-citations: 0, by others: 0)
Domain-specific Web search engines are effective tools for reducing the difficulty experienced when acquiring information from the Web. Existing methods for building domain-specific Web search engines require human expertise or specific facilities. However, we can build a domain-specific search engine simply by adding domain-specific keywords, called "keyword spices," to the user's input query and forwarding it to a general-purpose Web search engine. Keyword spices can be effectively discovered from Web documents using machine learning technologies. The paper describes domain-specific Web search engines that use keyword spices for locating recipes, restaurants, and used cars.
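The query-expansion step of the keyword-spice approach is simple to sketch. The spice expression below is a made-up example for the cooking domain; in the paper the spices are learned from Web documents with machine learning, which is omitted here:

```python
# Sketch: expand a user query with domain "keyword spices" before
# forwarding it to a general-purpose search engine (hypothetical spices).

# Boolean spice expression assumed for the recipe domain.
RECIPE_SPICES = '("ingredients" OR "preparation") AND "recipe"'

def spiced_query(user_query, spices):
    """Combine the user's query with the domain's keyword spices."""
    return f"({user_query}) AND ({spices})"

q = spiced_query("chicken soup", RECIPE_SPICES)
```

The expanded query `q` is what would be sent to the general-purpose engine.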
13.
Research on a Framework for Web Search Engines  Cited by: 43 (self-citations: 1, by others: 42)
Web search engines are very useful information-retrieval tools on the Internet, but the volume of information they retrieve is huge, and a given engine mainly covers one particular domain, so it is hard for users to get accurate navigation from any single engine. This paper proposes a new Web search-engine framework, GSE, together with WIRPL, a language suited to acquiring and processing Web information. Through WIRPL, multiple Web search engines can be combined to give users a consistent, efficient, and accurate Web search engine.
14.
Chakrabarti, S., Dom, B.E., Kumar, S.R., Raghavan, P., Rajagopalan, S., Tomkins, A., Gibson, D., Kleinberg, J. 《Computer》 1999, 32(8): 60-67
The Web is a hypertext body of approximately 300 million pages that continues to grow at roughly a million pages per day. Page variation is more prodigious than the data's raw scale: taken as a whole, the set of Web pages lacks a unifying structure and shows far more authoring style and content variation than that seen in traditional text document collections. This level of complexity makes an "off-the-shelf" database management and information retrieval solution impossible. To date, index-based search engines for the Web have been the primary tool by which users search for information. Such engines can build giant indices that let you quickly retrieve the set of all Web pages containing a given word or string. Experienced users can make effective use of such engines for tasks that can be solved by searching for tightly constrained keywords and phrases. These search engines are, however, unsuited for a wide range of equally important tasks. In particular, a topic of any breadth will typically contain several thousand or several million relevant Web pages. How then, from this sea of pages, should a search engine select the correct ones, those of most value to the user? Clever is a search engine that analyzes hyperlinks to uncover two types of pages: authorities, which provide the best source of information on a given topic, and hubs, which provide collections of links to authorities. We outline the thinking that went into Clever's design, report briefly on a study that compared Clever's performance to that of Yahoo and AltaVista, and examine how our system is being extended and updated.
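The hubs-and-authorities computation underlying Clever (Kleinberg's HITS algorithm) can be sketched compactly over a toy link graph. The graph below is hypothetical; the iteration itself is the standard mutual-reinforcement update:

```python
# Sketch: hubs-and-authorities (HITS) scores on a toy link graph.
links = {  # page -> pages it links to (hypothetical)
    "hub1": ["auth1", "auth2"],
    "hub2": ["auth1", "auth2"],
    "auth1": [],
    "auth2": ["auth1"],
}

pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(20):  # power iteration until scores stabilize
    # a page's authority is the sum of the hub scores pointing at it
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # a page's hub score is the sum of the authority scores it points to
    hub = {p: sum(auth[q] for q in links[p]) for p in pages}
    # normalize so the scores do not grow without bound
    for scores in (auth, hub):
        norm = sum(v * v for v in scores.values()) ** 0.5
        for p in scores:
            scores[p] /= norm

best_authority = max(auth, key=auth.get)
best_hub = max(hub, key=hub.get)
```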
15.
It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.
16.
Yufei Li, Yuan Wang, Xiaotao Huang 《IEEE Transactions on Knowledge and Data Engineering》 2007, 19(2): 273-282
With the development of the Web, an information "Big Bang" has taken place on the Internet, and search engines have become one of the most helpful tools for obtaining useful information from it. However, the current Web cares only about the location and display of information, not its semantics. Because of this shortcoming, even the most popular search engines cannot produce satisfactory results. The development of the next-generation Web, the semantic Web, will turn the situation around completely. This paper proposes a prototype relation-based search engine, "OntoLook," which has been implemented in a virtual semantic Web environment in our lab. We also present its system architecture and analyze the key algorithm.
17.
Eitan Frachtenberg 《World Wide Web》 2009, 12(4): 441-460
Semantic Web search is a new application of recent advances in information retrieval (IR), natural language processing, artificial intelligence, and other fields. The Powerset group in Microsoft develops a semantic search engine that aims to answer queries not only by matching keywords, but by actually matching meaning in queries to meaning in Web documents. Compared to typical keyword search, semantic search can pose additional engineering challenges for the back-end and infrastructure designs. Of these, the main challenge addressed in this paper is how to lower query latencies to acceptable, interactive levels. Index-based semantic search requires more data processing, such as numerous synonyms, hypernyms, multiple linguistic readings, and other semantic information, both on queries and in the index. In addition, some of the algorithms can be super-linear, such as matching co-references across a document. Consequently, many semantic queries can run significantly slower than the same keyword query. Users, however, have grown to expect Web search engines to provide near-instantaneous results, and a slow search engine could be deemed unusable even if it provides highly relevant results. It is therefore imperative for any search engine to meet its users’ interactivity expectations, or risk losing them. Our approach to tackle this challenge is to exploit data parallelism in slow search queries to reduce their latency in multi-core systems. Although all search engines are designed to exploit parallelism, at the single-node level this usually translates to throughput-oriented task parallelism. This paper focuses on the engineering of two latency-oriented approaches (coarse- and fine-grained) and compares them to the task-parallel approach. We use Powerset’s deployed search engine to evaluate the various factors that affect parallel performance: workload, overhead, load balancing, and resource contention. We also discuss heuristics to selectively control the degree of parallelism and consequent overhead on a query-by-query level. Our experimental results show that using fine-grained parallelism with these dynamic heuristics can significantly reduce query latencies compared to fixed, coarse-granularity parallelization schemes. Although these results were obtained on, and optimized for, Powerset’s semantic search, they can be readily generalized to a wide class of inverted-index search engines.
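The latency-oriented data parallelism this abstract describes can be sketched as splitting a single query's work across index shards, rather than running whole queries as independent tasks. The shard layout and scoring below are hypothetical, not Powerset's infrastructure:

```python
# Sketch: run one query against several index shards in parallel and
# merge the partial postings (toy inverted index).
from concurrent.futures import ThreadPoolExecutor

index_shards = [  # assume the inverted index is partitioned into shards
    {"web": [1, 2], "search": [2]},
    {"web": [5], "semantic": [5, 6]},
]

def search_shard(shard, term):
    return shard.get(term, [])

def parallel_search(term):
    # fine-grained data parallelism: one task per shard, merged at the end
    with ThreadPoolExecutor() as pool:
        results = pool.map(search_shard, index_shards,
                           [term] * len(index_shards))
    return sorted(doc for part in results for doc in part)

hits = parallel_search("web")
```

A throughput-oriented design would instead run each incoming query as one task; the per-shard split above trades some scheduling overhead for lower latency on a single slow query.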
18.
Analysis of data from a major metasearch engine reveals that sponsored-link click-through rates appear lower than previously reported. Combining sponsored and nonsponsored links in a single listing, while providing some benefits to users, does not appear to increase clicks on sponsored listings. In a competitive market, rivals continually strive to improve their information-retrieval capabilities and increase their financial returns. One innovation is sponsored search, an "economics meets search" model in which content providers pay search engines for user traffic going from the search engine to their Web sites. Sponsored search has proven to be a successful business model for Web search engines, advertisers, and online vendors, as well as an effective way to deliver content to searchers. The "impact of sponsored search" sidebar describes some of the model's notable benefits.
19.
20.
Design of a Special-Purpose Web Information Gathering System Based on Mobile Crawlers  Cited by: 3 (self-citations: 0, by others: 3)
Search engines have become an essential tool for navigating the Web. To provide powerful search capabilities, a search engine maintains a detailed index of the documents accessible online. Creating and maintaining that index is the job of a Web crawler, which recursively traverses and downloads Web pages on the search engine's behalf; once downloaded, pages are analyzed and indexed by the engine, which then provides retrieval services. This article presents a more effective way to build a Web index, based on mobile crawlers (MobileCrawler). The proposed crawler is first transferred to the site where the data resides, and any unneeded data is filtered out locally before being sent back to the search engine. This approach is especially suited to implementing so-called "intelligent" crawling algorithms, which decide an effective crawl path from the content of pages already visited. Mobile crawlers combine two technology trends, mobile computing and specialized search engines, and offer a sound technical solution to the problems today's general-purpose search engines face.