Similar literature
 20 similar documents found (search time: 31 ms)
1.
陈海燕 《计算机科学》2015,42(1):261-267
Computing lexical semantic similarity plays an important role in Web-related tasks such as page browsing and query suggestion. Traditional taxonomy-based methods cannot handle continually emerging new words, and because Web data hide large amounts of noise and redundancy, robustness and accuracy remain a challenge. This paper therefore proposes a search-engine-based method for computing lexical semantic similarity. Snippets and the page counts of retrieved results are used to remove noise and redundancy during the similarity computation. In addition, a method is proposed to integrate result page counts, snippets, and the number of displayed search results, requiring no prior knowledge or ontology. Experimental results show that the proposed method achieves a correlation coefficient of 0.851 on the Rubenstein-Goodenough benchmark, outperforming existing Web-based methods for lexical semantic similarity, and it also performs well in the query-expansion task of a search engine.
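A minimal sketch of the page-count idea above — scoring word relatedness from search-engine hit counts — using a WebJaccard-style coefficient. The counts and the specific coefficient are illustrative assumptions, not the paper's exact noise-removal scheme:

```python
def web_jaccard(count_p: int, count_q: int, count_pq: int) -> float:
    """Jaccard-style similarity from search-engine hit counts:
    |P AND Q| / (|P| + |Q| - |P AND Q|)."""
    if count_pq == 0:
        return 0.0
    return count_pq / (count_p + count_q - count_pq)

# Hypothetical hit counts for the pair ("car", "automobile"):
sim = web_jaccard(count_p=1_000_000, count_q=400_000, count_pq=300_000)
print(round(sim, 3))  # 300000 / 1100000 -> 0.273
```

In a real system the three counts would come from issuing the queries "P", "Q", and "P Q" to a search engine and reading the reported result counts.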

2.
Queries to Web search engines are usually short and ambiguous, providing insufficient information about users' needs for effectively retrieving relevant Web pages. To address this problem, most search engines implement query suggestion. However, existing methods do not balance accuracy against computational complexity appropriately (e.g. Google's ‘Search related to’ and Yahoo's ‘Also Try’). In this paper, the suggested words are extracted from the search results of the query, which keeps query suggestion responsive in real time. A word-ranking scheme based on semantic similarity presents a list of words as the query suggestion results, which ensures the accuracy of query suggestion. Moreover, the experimental results show that the proposed method significantly improves the quality of query suggestion over some popular Web search engines (e.g. Google and Yahoo). Finally, an offline experiment comparing how accurately snippets capture the number of words in a document is performed, which increases confidence in the proposed method. Copyright © 2010 John Wiley & Sons, Ltd.

3.
Computing the semantic similarity of Chinese words is a key problem in Chinese information processing. This paper uses information provided by Web search engines to compute the semantic similarity of Chinese word pairs. First, a program queries the search engine to obtain hit counts for the words, on which the WebPMI similarity model is built; next, the CODC model, which analyzes semantic relatedness from the text snippets returned for a query, is described; finally, combining the two models, pseudocode for the proposed algorithm is given. Experimental results show that the algorithm makes good use of Web information and realizes a relatively new approach to Chinese lexical semantic similarity whose results approach those of traditional dictionary-based similarity algorithms.
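The WebPMI model named above can be sketched from hit counts alone. The index size `N` and the example counts below are assumed values, not figures from the paper:

```python
import math

N = 10**10  # assumed number of pages indexed by the search engine

def web_pmi(count_p: int, count_q: int, count_pq: int, n: int = N) -> float:
    """Pointwise mutual information estimated from page hit counts,
    clipped at zero (negatively associated pairs score 0)."""
    if count_pq == 0:
        return 0.0
    pmi = math.log2((count_pq / n) / ((count_p / n) * (count_q / n)))
    return max(pmi, 0.0)

# Strongly associated pair (hypothetical counts):
print(web_pmi(10**6, 4 * 10**5, 3 * 10**5) > 0)   # True
# Statistically independent pair: co-count = c_p * c_q / N -> PMI = 0
print(web_pmi(10**6, 10**6, 100))                  # 0.0
```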

4.
To address the difficulty traditional Web education agents have in obtaining highly usable educational resources, a Web educational resource query method oriented to semantic topic similarity is proposed. The method builds an Ontology Concept Semantic Network (OCSN) and, on that basis, designs a concept retrieval method based on semantic topic similarity matching: before retrieval, educational resources are proactively organized into the OCSN according to their semantics and topics; a vertical search engine for semantics-driven discovery of Web educational resources is then built, and by constructing a similarity function satisfying the required conditions, the corresponding semantic distance is mapped to a similarity, effectively improving query efficiency. Experimental results show that the method improves both the precision and the recall of Web educational resource retrieval.
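The abstract above maps a semantic distance to a similarity via a function "satisfying the required conditions" (monotonically decreasing, bounded in (0, 1]). The paper does not state its exact function here; one common choice satisfying those conditions is:

```python
def distance_to_similarity(d: float, alpha: float = 1.0) -> float:
    """Map a semantic distance d >= 0 to a similarity in (0, 1].
    alpha controls how fast similarity decays with distance.
    This is an illustrative choice, not the paper's exact function."""
    return 1.0 / (1.0 + alpha * d)

print(distance_to_similarity(0.0))  # identical concepts -> 1.0
```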

5.
A (page or Web) snippet is a document excerpt that lets a user judge whether a document is relevant without accessing it. This paper proposes an effective snippet generation method. A statistical query expansion approach with pseudo-relevance feedback, together with text summarization techniques, is applied to salient sentence extraction to produce good-quality snippets. Experimental results show that the proposed method performs much better than other methods, including those of commercial Web search engines such as Google and Naver.
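Salient sentence extraction of the kind described above can be sketched as picking the sentences densest in query terms. The scoring, the toy document, and the omission of the pseudo-relevance-feedback expansion step are all simplifying assumptions:

```python
def score_sentence(sentence: str, query_terms: list[str]) -> float:
    """Density of (expanded) query terms in the sentence."""
    toks = sentence.lower().split()
    return sum(toks.count(t) for t in query_terms) / (len(toks) or 1)

def snippet(document: str, query_terms: list[str], k: int = 2) -> str:
    """Build a snippet from the k highest-scoring sentences."""
    sents = [s.strip() for s in document.split(".") if s.strip()]
    best = sorted(sents, key=lambda s: score_sentence(s, query_terms),
                  reverse=True)[:k]
    return ". ".join(best)

doc = "Python is a language. Bananas are yellow. Python code is readable."
print(snippet(doc, ["python", "code"]))
```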

6.
杨楠  李亚平 《计算机应用》2019,39(6):1701-1706
For generalized and ambiguous user queries, clustering the result list returned by a Web search engine helps users find the content they are interested in. Because the returned list consists of short texts called snippets, the traditional term frequency-inverse document frequency (TF-IDF) feature selection model does not suit such sparse short texts, and clustering performance degrades. An effective remedy is to expand the short texts with an external knowledge base. Inspired by neural word representations, this paper expands short texts with the Word2Vec embedding model: the TopN most similar words under Word2Vec are used to expand each snippet, so that TF-IDF feature selection over the expanded documents improves clustering performance. To counter the noise introduced by overly general words, the term-frequency weights of the expanded documents' TF-IDF matrix are also corrected. Experiments on two public datasets, ODP239 and SearchSnippets, compare the proposed method against raw snippets without expansion, WordNet-based feature expansion, and Wikipedia-based feature expansion. The results show that the proposed method outperforms these baselines in clustering performance.
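The expansion step can be sketched as follows. The `NEIGHBOURS` table is a toy stand-in for a trained Word2Vec model's most-similar lookup (in practice, e.g., gensim's `most_similar`), and this TF-IDF omits the paper's term-frequency weight correction:

```python
import math
from collections import Counter

# Toy stand-in for a trained Word2Vec model's nearest neighbours.
NEIGHBOURS = {
    "python": ["programming", "code"],
    "java":   ["programming", "jvm"],
    "soccer": ["football", "league"],
}

def expand(snippet: str, top_n: int = 2) -> list[str]:
    """Append each token's top-n most similar words to the snippet."""
    tokens = snippet.lower().split()
    extra = [w for t in tokens for w in NEIGHBOURS.get(t, [])[:top_n]]
    return tokens + extra

def tf_idf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Plain TF-IDF over the (expanded) token lists."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()} for doc in docs]

snippets = ["python tips", "java tips", "soccer news"]
vectors = tf_idf([expand(s) for s in snippets])
# "programming" now links the two programming snippets even though
# the raw snippets share no such term.
print("programming" in vectors[0] and "programming" in vectors[1])  # True
```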

7.
Clustering-based browsing in search engines   Cited 1 time (0 self-citations, 1 by others)
Most search engines present their results to users as a list of documents; as the number of Web documents grows rapidly, finding relevant information becomes increasingly difficult. One solution is to cluster the search results to improve their browsability. Clustering-based browsing lets users view search results at a higher topical level and conveniently find the information they are interested in. This paper introduces the basic requirements that clustering-based browsing imposes on clustering algorithms and how such algorithms are classified, analyzes the characteristics of the main clustering algorithms and their improved variants, discusses the evaluation of clustering quality, and finally points out development trends in clustering-based browsing.

8.
As one of the main Internet applications, search engines retrieve and return relevant information from Web resources according to user needs. However, the returned lists often contain noise such as advertisements and dead pages, which interferes with users' retrieval and queries. Targeting complex page structure features and rich semantic information, a Web page blacklist discrimination method based on attention mechanisms and ensemble learning is proposed, and an Ensemble Attention-based Convolutional Neural Network (EACNN) model is built with it to filter useless pages. First, multiple attention-based convolutional neural network (CNN) base learners are constructed from the different kinds of HTML tag data on a page; then, an ensemble learning method based on page structure features assigns different weights to the outputs of the base learners, yielding EACNN; finally, EACNN's output serves as the page content analysis result for blacklist discrimination. The method attends to page semantics through the attention mechanism and incorporates page structure features through ensemble learning. Experimental results show that, compared with baseline models such as support vector machine (SVM), k-nearest neighbors (KNN), CNN, long short-term memory (LSTM) networks, GRU, and attention-based CNN (ACNN), the proposed model achieves the highest accuracy (0.97), recall (0.95), and F1 score (0.96) on a discrimination dataset built for the geographic information domain, verifying EACNN's advantage in Web page blacklist discrimination.

9.
A metasearch engine based on semantic understanding   Cited 5 times (0 self-citations, 5 by others)
Structural analysis of query phrases reveals that they typically consist of keywords and feature words. A feature word summarizes page content and indicates that the page contains a specific group of feature terms. Based on this idea, a feature base oriented to Web page content is built. Taking a metasearch engine as the research object, the paper studies how to use this feature base to semantically interpret query phrases, proposes a relevance-level algorithm, and runs query tests on the feature words already included in the base, achieving a precision of 86.7%. Experiments show that the model largely realizes the understanding of query phrases and has a marked effect on improving search engine precision.

10.
Keyword-based Web search is a widely used approach for locating information on the Web. However, Web users usually have difficulty organizing and formulating appropriate input queries due to a lack of sufficient domain knowledge, which greatly affects search performance. An effective tool to meet the information needs of a search engine user is to suggest Web queries that are topically related to their initial inquiry. Accurately computing query-to-query similarity scores is key to improving the quality of these suggestions. Because of the short lengths of queries, traditional pseudo-relevance or implicit-relevance based approaches expand the expression of the queries for the similarity computation: they explicitly use a search engine as a complementary source and directly extract additional features (such as terms or URLs) from the top-listed or clicked search results. In this paper, we propose a novel approach that utilizes the hidden topic as an expandable feature. The approach has two steps. In the offline model-learning step, a hidden topic model is trained, and for each candidate query its posterior distribution over the hidden topic space is determined to re-express the query instead of the lexical expression. In the online query suggestion step, after inferring the topic distribution for an input query in the same way, we calculate the similarity between candidate queries and the input query in terms of their corresponding topic distributions, and produce a suggestion list of candidate queries based on the similarity scores. Our experimental results on two real data sets show that the hidden-topic-based suggestion is much more efficient than the traditional term- or URL-based approach, and is effective in finding topically related queries for suggestion.
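The online suggestion step — ranking candidate queries by the similarity of their topic distributions — might look like the sketch below. The distributions are invented, and cosine similarity stands in for whatever distributional similarity the authors use:

```python
import math

def cosine(p: list[float], q: list[float]) -> float:
    """Cosine similarity between two topic distributions."""
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm

# Hypothetical posterior topic distributions over 3 hidden topics
input_query = [0.7, 0.2, 0.1]                      # e.g. mostly "sports"
candidates = {"football scores": [0.8, 0.1, 0.1],
              "stock prices":    [0.1, 0.8, 0.1]}

ranked = sorted(candidates,
                key=lambda c: cosine(input_query, candidates[c]),
                reverse=True)
print(ranked[0])  # football scores
```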

11.
When classifying search queries into a set of target categories, conventional machine-learning-based approaches usually make use of external sources of information to obtain additional features for search queries and training data for target categories. Unfortunately, these approaches rely on large amounts of training data for high classification precision. Moreover, they are known to suffer from an inability to adapt to different target categories, a problem caused by the dynamic changes observed in both the Web topic taxonomy and Web content. In this paper, we propose a feature-free classification approach using semantic distance. We analyze queries and categories themselves and utilize the number of Web pages containing both a query and a category as a semantic distance to determine their similarity. The most attractive feature of our approach is that it only utilizes the Web page counts estimated by a search engine to provide search query classification with respectable accuracy. In addition, it can easily adapt to changes in the target categories, whereas machine-learning-based approaches require extensive updating processes, e.g., re-labeling outdated training data and re-training classifiers, which is time-consuming and costly. We conduct an experimental study on the effectiveness of our approach using a set of rank measures and show that it performs competitively with some popular state-of-the-art solutions, which, however, frequently use external sources and are inherently limited in flexibility.
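A well-known instance of such a count-based semantic distance is the Normalized Google Distance of Cilibrasi and Vitányi; the paper's exact measure may differ, but the sketch below shows the idea (the counts are hypothetical):

```python
import math

def ngd(fx: int, fy: int, fxy: int, n: int) -> float:
    """Normalized Google Distance: a feature-free semantic distance
    computed purely from search-engine page counts.
    fx, fy: pages containing x (resp. y); fxy: pages containing both;
    n: total number of indexed pages."""
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# A query that always co-occurs with its category is at distance 0:
print(ngd(1000, 1000, 1000, 10**9))  # 0.0
# Rarer co-occurrence -> larger distance:
print(ngd(1000, 1000, 10, 10**9) > ngd(1000, 1000, 500, 10**9))  # True
```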

12.
In today's software development environment, vast amounts of low-quality, meaningless code knowledge hinder developers' code reuse and greatly reduce software development efficiency. To recommend high-quality code knowledge to developers quickly and accurately, a code snippet recommendation method based on the SBERT (Sentence-BERT) model, CSRSB (Code Snippets Recommendation based on Sentence-BERT), is proposed. The method first gathers large amounts of high-quality data to build a code corpus, then uses the deep learning model SBERT to generate semantically rich sentence vectors for the natural language descriptions of code snippets and for users' natural language queries, and recommends code snippets by comparing dot-product similarity. Its effectiveness is verified by comparing it against methods from existing related work using three common recommendation metrics: hit rate, mean reciprocal rank, and mean average precision. Experimental results show that CSRSB effectively improves the accuracy of code snippet recommendation while keeping recommendation fast.
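CSRSB's ranking step reduces to dot products over sentence vectors. The toy 3-dimensional vectors below stand in for real SBERT embeddings (which would come from, e.g., the sentence-transformers library):

```python
# Toy sentence vectors standing in for SBERT embeddings of the
# natural-language descriptions of corpus code snippets.
corpus = {
    "reverse a linked list": [0.9, 0.1, 0.0],
    "parse a JSON file":     [0.0, 0.8, 0.3],
    "sort a list in place":  [0.7, 0.0, 0.2],
}
query_vec = [0.8, 0.1, 0.1]  # embedding of "how to reverse a list"

def dot(u: list[float], v: list[float]) -> float:
    return sum(a * b for a, b in zip(u, v))

# CSRSB-style recommendation: rank snippets by dot-product similarity
ranked = sorted(corpus, key=lambda d: dot(query_vec, corpus[d]),
                reverse=True)
print(ranked[0])  # reverse a linked list
```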

13.
Text categorization is widely characterized as a multi-label classification problem. Robust modeling of the semantic similarity between a query text and training texts is essential to constructing an effective and accurate classifier. In this paper, we systematically investigate the Web page/text classification problem by integrating sparse representation with random measurements. In particular, we first adopt a very sparse, data-independent random measurement matrix to map the original high-dimensional text feature space to a lower-dimensional space without loss of key information. We then propose a generic sparse representation method that obtains the sparse solution by decoding the semantic correlations between the query text and the entire set of training samples. On top of this method, we also design and examine a series of rules that exploit the sparse coefficients to propagate multiple labels to the given query texts. We have conducted extensive experiments on real-world datasets, and the results show the effectiveness of the proposed approach.
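The data-independent "very sparse" random measurement matrix can be sketched in the style of Achlioptas' construction; the paper may use a different sparsity `s` or scaling:

```python
import random

def sparse_random_matrix(rows: int, cols: int, s: int = 3, seed: int = 0):
    """Achlioptas-style very sparse random measurement matrix:
    entries are +sqrt(s), 0, -sqrt(s) with probabilities
    1/(2s), 1 - 1/s, 1/(2s)."""
    rng = random.Random(seed)
    root = s ** 0.5
    return [[rng.choices([root, 0.0, -root],
                         weights=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)])[0]
             for _ in range(cols)] for _ in range(rows)]

def project(matrix, x):
    """Map a high-dimensional text feature vector to a lower dimension."""
    return [sum(r_i * x_i for r_i, x_i in zip(row, x)) for row in matrix]

R = sparse_random_matrix(rows=4, cols=10)  # 10-d features -> 4-d
x = [1.0] * 10                             # toy TF-IDF feature vector
print(len(project(R, x)))  # 4
```

With `s = 3`, two thirds of the entries are zero, so the projection is cheap to store and apply while approximately preserving distances (Johnson-Lindenstrauss-style guarantees).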

14.
Traditional searchable encryption schemes for cloud computing do not semantically expand query keywords, leaving a semantic gap between the user's query intent and the returned results, and their relevance ranking of retrieval results is not reasonable enough to meet users' needs for intelligent search. This paper therefore proposes a semantics-aware searchable encryption method. The method uses an ontology knowledge base to semantically expand user queries and controls the number of expansion terms via semantic similarity, preventing an excess of expansion terms from hurting retrieval precision. Meanwhile, it applies a blocking technique to document and query vectors to construct corresponding tag vectors that filter out irrelevant documents, and it introduces semantic similarity, position-weighted keyword scores, and keyword-document relevance as factors in the query-document similarity score, achieving effective ranking of retrieval results. Experimental results show that the method markedly improves the ranking of retrieval results while also improving retrieval efficiency, raising user satisfaction.

15.
Research on clustering algorithms for search engine results   Cited 6 times (1 self-citation, 5 by others)
As the number of Web documents grows rapidly, search engines expose many problems: users are forced to sift through the long lists of document summaries they return. Clustering search engine results lets users view them at a higher topical level. This paper proposes several important criteria for clustering search engine results and presents a new PAT-tree-based algorithm for clustering them.

16.
General-purpose search engines lack accurate extraction of temporal expressions from page content and support for semantic temporal queries, so a Temporal Semantic Relevance Ranking (TSRR) algorithm is proposed. Temporal information extraction and temporal ranking functions are added on top of a general search engine: by introducing temporal regular-expression rules, temporal expressions such as time points and time intervals are extracted from query keywords and page documents, and a page's final ranking score combines the textual relevance and the temporal semantic relevance of its content. Experiments show that TSRR accurately and effectively matches keyword queries involving temporal expressions.
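A minimal version of the temporal regular-expression extraction, preferring intervals over isolated time points; this rule set is a drastic simplification of TSRR's (which also handles relative and natural-language dates):

```python
import re

# One date format only; TSRR's rules are far richer.
DATE = r"\d{4}-\d{2}-\d{2}"
PATTERNS = [
    re.compile(rf"{DATE}\s*(?:to|-)\s*{DATE}"),  # time interval
    re.compile(DATE),                            # time point
]

def extract_temporal(text: str) -> list[str]:
    """Extract temporal expressions; a point inside an already-matched
    interval is not reported again."""
    found, taken = [], []
    for pat in PATTERNS:
        for m in pat.finditer(text):
            if not any(m.start() < e and m.end() > s for s, e in taken):
                found.append(m.group())
                taken.append((m.start(), m.end()))
    return found

text = "released 2015-03-01, supported 2016-01-01 to 2018-06-30"
print(extract_temporal(text))
# ['2016-01-01 to 2018-06-30', '2015-03-01']
```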

17.
张书波  张引  张斌  孙达明 《计算机科学》2016,43(Z6):485-488, 496
Hybrid query expansion based on semantic resources and local analysis can supply expansion results that are both semantically related and timely, but how to effectively mix different similarity measures remains an open problem. A hybrid query expansion method based on the copula framework is proposed, merging different types of similarity measures within a unified framework. Based on semantic analysis and term co-occurrence analysis, the method computes the semantic and statistical similarity probabilities between expansion terms and the user's query terms, fuses the expansion term sets under the copula framework, and selects the highest-quality expansion terms to form the query expansion. Experimental results show that the method exploits the strengths of both semantic and co-occurrence-based query expansion, effectively compensates for the weaknesses of each, improves the precision of search results, and achieves better search performance.
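Fusing the two similarity probabilities under a copula might look like the sketch below; the Clayton copula and θ = 1 are illustrative choices, not necessarily the ones used in the paper, and the probabilities are invented:

```python
def clayton_copula(u: float, v: float, theta: float = 1.0) -> float:
    """Clayton copula C(u,v) = (u^-t + v^-t - 1)^(-1/t), t > 0 —
    one way to fuse a semantic and a statistical similarity probability."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

# Hypothetical (semantic, co-occurrence) probabilities per expansion term
candidates = {"automobile": (0.9, 0.6),
              "vehicle":    (0.7, 0.8),
              "engine":     (0.4, 0.5)}
ranked = sorted(candidates,
                key=lambda w: clayton_copula(*candidates[w]), reverse=True)
print(ranked)  # ['vehicle', 'automobile', 'engine']
```

A term strong on both measures outranks one strong on only one, which is the point of fusing the two sources rather than using either alone.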

18.
PCCS partial clustering and classification: a fast Web document clustering method   Cited 15 times (1 self-citation, 15 by others)
PCCS is a partial clustering-then-classification method for fast Web document clustering, designed to help Web users sift the documents they need out of the large set of snippets returned by a search engine. It first clusters a portion of the documents, then builds a class model from the clustering result to classify the remaining documents: an interactive clustering procedure that improves one cluster sample at a time quickly creates a set of cluster exemplars, and the remaining documents are then assigned by a Naive Bayes classifier. To improve the efficiency of clustering and classification, a hybrid feature selection method is proposed to reduce the dimensionality of the document representation: the entropy of each feature in the documents is recomputed and the features with the largest entropy values are selected, or features are selected based on the feature set of a persistent classification model. Experiments show that the partial clustering method quickly and accurately organizes Web documents by topical content, letting users view search engine results at a higher topical level and pick relevant documents from clusters of topically similar documents.

19.
Search engines often return a list containing a large number of document snippets, from which users must sift out the documents they need. This paper proposes a prefetching-agent approach: the results returned by the search engine are clustered so that users can view them by topic, providing a personalized service for the user's search request; at the same time, the clusters are evaluated to infer which documents the user is likely to be interested in, and those documents are prefetched, thereby reducing network latency.

20.
As the Internet grows, it becomes essential to find efficient tools to deal with all the available information. The question answering (QA) and text summarization (TS) research fields focus on presenting the information requested by users in a more concise way. In this paper, the appropriateness and benefits of using summaries in semantic QA are analyzed. For this purpose, a combined approach is developed in which a TS component is integrated into a Web-based semantic QA system. The main goal is to determine to what extent TS can help semantic QA approaches when summaries, rather than search engine snippets, are used as the corpus for answering questions. In particular, three issues are analyzed: (i) the appropriateness of query-focused (QF) rather than generic summarization for the QA task, (ii) the suitable summary length, comparing short and long summaries, and (iii) the benefit of using TS instead of snippets for finding the answers, tested within two semantic QA approaches (named entities and semantic roles). The results obtained show that QF summarization is better than generic (58% improvement), short summaries are better than long (6.3% improvement), and the use of TS within semantic QA improves performance for both named-entity-based (10%) and, especially, semantic-role-based QA (47.5%). © 2011 Wiley Periodicals, Inc.

