Similar Documents
Found 20 similar documents (search time: 390 ms)
1.
Text mining is an effective way to extract potentially useful knowledge from massive collections of text. Traditional text mining methods struggle to reach high accuracy because they cannot exploit semantic information effectively. Ontology provides theoretical support and technical means for the sound representation and organization of semantic information. This paper introduces ontologies into text retrieval for commercial enterprises and proposes an information retrieval model that takes the paragraph as the smallest unit of retrieval. The model extracts information from texts to build ontology identifiers, represents each paragraph by its ontology identifiers, semantically matches queries against paragraphs, and returns the retrieval results.
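As a rough illustration of the retrieval step described above, the sketch below represents each paragraph by a set of ontology identifiers and ranks paragraphs by concept overlap with the query; the term-to-concept table, the sample paragraphs, and the Jaccard scoring are all illustrative assumptions, not the paper's actual ontology or matching function.

```python
# Hypothetical term -> ontology identifier table (toy stand-in for a real ontology).
CONCEPTS = {
    "invoice": "C_BILLING", "bill": "C_BILLING",
    "shipment": "C_LOGISTICS", "delivery": "C_LOGISTICS",
    "contract": "C_LEGAL",
}

def to_concepts(text):
    """Represent a paragraph as the set of ontology identifiers it mentions."""
    return {CONCEPTS[w] for w in text.lower().split() if w in CONCEPTS}

def retrieve(query, paragraphs):
    """Rank paragraphs by concept overlap (Jaccard) with the query."""
    q = to_concepts(query)
    scored = []
    for p in paragraphs:
        c = to_concepts(p)
        score = len(q & c) / len(q | c) if q | c else 0.0
        scored.append((score, p))
    return sorted(scored, reverse=True)

paras = ["the bill was paid late", "delivery of the shipment was on time"]
ranked = retrieve("invoice processing", paras)
```

Because matching happens on concept identifiers rather than surface words, "invoice" in the query still matches a paragraph that only says "bill".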

2.
A key problem in semantic image retrieval is finding the association between low-level image features and semantics. Since text is an effective means of expressing semantics, this paper proposes to model the latent semantic association between the two modalities by studying the relationship between text and images. With this model, retrieval intent can be expressed in natural language (text sentences) and the relevant images retrieved. The model is based on sparse canonical correlation analysis (sparse CCA) and is trained as follows: first, a text semantic space is constructed with latent semantic analysis; then the images paired with the texts are represented as bags of visual words; finally, the sparse CCA algorithm finds a semantically correlated space that maps between text semantics and image visual words. Using sparse correlation analysis improves the interpretability of the model and stabilizes retrieval results. Experimental results verify the effectiveness of sparse CCA and the feasibility of the proposed semantic image retrieval method.

3.
An Image Retrieval Method Based on Sparse Canonical Correlation Analysis   Total citations: 1 (self: 0, others: 1)
庄凌, 庄越挺, 吴江琴, 叶振超, 吴飞. 《软件学报》 2012, 23(5): 1295-1304
A key problem in semantic image retrieval is finding the association between low-level image features and semantics. Since text is an effective means of expressing semantics, this paper proposes to model the latent semantic association between the two modalities by studying the relationship between text and images. With this model, retrieval intent can be expressed in natural language (text sentences) and the relevant images retrieved. The model is based on sparse canonical correlation analysis (sparse CCA) and is trained as follows: first, a text semantic space is constructed with latent semantic analysis; then the images paired with the texts are represented as bags of visual words; finally, the sparse CCA algorithm finds a semantically correlated space that maps between text semantics and image visual words. Using sparse correlation analysis improves the interpretability of the model and stabilizes retrieval results. Experimental results verify the effectiveness of sparse CCA and the feasibility of the proposed semantic image retrieval method.
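The training pipeline above can be approximated with classical (non-sparse) CCA in NumPy, shown here as a stand-in for sparse CCA; the synthetic paired data simulate text-space and image-space vectors that share one latent variable. This is a toy sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)                         # shared latent variable
# "Text" and "image" feature vectors: first dimension carries the latent.
X = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
Y = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])

def cca(X, Y):
    """Classical CCA via whitened cross-covariance SVD."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X); Cyy = Y.T @ Y / len(Y); Cxy = X.T @ Y / len(X)
    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1 / np.sqrt(w)) @ V.T
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U, Wy @ Vt.T, s                # projections + canonical corrs

A, B, corrs = cca(X, Y)
u = (X - X.mean(0)) @ A[:, 0]                  # text side in shared space
v = (Y - Y.mean(0)) @ B[:, 0]                  # image side in shared space
```

Once both modalities live in the shared space, a text query can be matched against image projections with ordinary vector similarity; sparsity (the paper's contribution) would additionally zero out uninformative feature weights.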

4.
Microblog texts are short, their features sparse, and a semantic gap separates them from user queries, all of which lowers the efficiency of semantic retrieval. To address this, a semantic retrieval model based on latent semantics and graph structure is built by combining text features with knowledge-base semantics. Hashtag-based feature relevance is computed with the Tversky algorithm; a topic model is trained on the Wikipedia corpus with Latent Dirichlet Allocation, and the topical relevance of texts mapped into the model is computed with the Jensen-Shannon divergence (JSD); entities and their relation graph are extracted from DBpedia, and entity relevance in the graph is computed with SimRank. The three scores are combined into a final relevance score. Experiments on a Twitter subset with both short-text and long-text retrieval show that, compared with a method based on linked open data and graph theory, the model improves MAP, P@30, and R-Prec by 2.98%, 6.40%, and 5.16% respectively, demonstrating good retrieval performance.
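The topic-relevance component above can be sketched as follows; the JSD computation is standard, while the equal-weight combination of the three scores is an assumption (the paper does not give its weights here).

```python
import math

def jsd(p, q):
    """Jensen-Shannon divergence (log base 2, so values lie in [0, 1])."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(a, b):
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    return (kl(p, m) + kl(q, m)) / 2

def combined_relevance(hashtag_rel, topic_p, topic_q, entity_rel,
                       w=(1/3, 1/3, 1/3)):
    """Combine hashtag, topic, and entity relevance; weights are assumed."""
    topic_rel = 1 - jsd(topic_p, topic_q)   # similar topics -> low divergence
    return w[0] * hashtag_rel + w[1] * topic_rel + w[2] * entity_rel

score = combined_relevance(0.8, [0.7, 0.2, 0.1], [0.6, 0.3, 0.1], 0.5)
```
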

5.
Pseudo-relevance feedback (PRF) is an automatic query expansion (QE) technique that builds a more accurate query from the original query and the information contained in the top-N documents of an initial retrieval, thereby further improving the performance of an information retrieval system. However, existing PRF methods for dense retrieval tend to lose semantic information because they truncate texts, and they have high space complexity at retrieval time. To address these problems, Dense-PRF, a passage-level PRF method suitable for dense retrieval over long documents, is proposed. First, vectors of relevant passages are obtained from the top-N documents of the initial retrieval by computing semantic distances; second, the relevant passage vectors are average-pooled into a QE term vector; then a new query vector is built as a weighted combination of the original query vector and the QE term vector; finally, the final retrieval results are obtained with the new query vector. Comparative experiments against baseline models on the two classic long-text test sets Robust04 and WT2G show that, relative to RepBERT+BM25, Dense-PRF improves precision at 20 documents by 1.66 and 1.32 percentage points and normalized discounted cumulative gain (NDCG) by 2.30 and 1.91 percentage points. The results indicate that Dense-PRF effectively alleviates the vocabulary mismatch between queries and documents and improves retrieval accuracy.
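The expansion step of Dense-PRF as described above can be sketched like this; the passage vectors, the mixing weight alpha, and the dot-product ranking are illustrative assumptions.

```python
import numpy as np

def dense_prf(query_vec, passage_vecs, alpha=0.5):
    """Average-pool relevant passage vectors into a QE term vector and
    mix it with the original query (alpha is an assumed weight)."""
    qe_vec = np.mean(passage_vecs, axis=0)           # average pooling
    return alpha * query_vec + (1 - alpha) * qe_vec  # weighted combination

def rank(query_vec, doc_vecs):
    """Rank documents by dot-product similarity to the (new) query vector."""
    scores = doc_vecs @ query_vec
    return np.argsort(-scores)

q = np.array([1.0, 0.0])
passages = np.array([[0.8, 0.6], [0.6, 0.8]])        # top passages' vectors
new_q = dense_prf(q, passages)
```
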

6.
The probabilistic latent semantic retrieval model uses statistical methods to estimate the probability distributions among documents, latent semantics, and words, and exploits these relations for retrieval. This paper compares how different Chinese indexing techniques affect retrieval effectiveness in the probabilistic latent semantic retrieval model, examining three indexing techniques based on word segmentation, bigrams, and keyword extraction, and contrasts them with the vector space model. Experimental results show that, in the probabilistic latent semantic retrieval model, correct word segmentation improves average retrieval precision.
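The indexing techniques compared above can be illustrated minimally: bigram indexing versus pre-segmented word indexing. A real system would use a trained segmenter and a keyword extractor, which this sketch omits.

```python
def bigram_index(text):
    """Index Chinese text as overlapping character bigrams."""
    chars = [c for c in text if not c.isspace()]
    return [a + b for a, b in zip(chars, chars[1:])]

def segmented_index(segmented_text):
    """Index text that a word segmenter has already split on spaces."""
    return segmented_text.split()

raw = "概率潜在语义检索"
bigrams = bigram_index(raw)
words = segmented_index("概率 潜在 语义 检索")   # assumed correct segmentation
```

The bigram index contains spurious cross-word terms such as "率潜", which is exactly the noise that correct segmentation removes.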

7.
To improve the retrieval performance of 3D models, and to address the fact that the semantic retrieval facilities of current 3D model retrieval systems cannot handle users' subjective textual descriptions, a 3D model semantic retrieval method combining content with descriptive text is proposed. The method first builds a semantic tree for the 3D models; then, using corpus statistics, it computes the relevance between the input descriptive text and the expanded information of the semantic tree nodes, and extracts the 3D model instances of the most relevant nodes to obtain a smaller, semantically constrained set of 3D models; finally, shape similarity matching with a user-supplied 3D model instance is carried out within this constrained set, and retrieval results are returned to the user ranked by matching score. In the experiments, WordNet glosses of several nouns serve as the descriptive text input. Results on the Princeton PSB 3D model dataset show that for most categories the method's precision-recall performance exceeds that of traditional content-based 3D model retrieval.

8.
Cross-Language Retrieval Based on Improved Latent Semantic Analysis   Total citations: 1 (self: 0, others: 1)
This paper models bilingual abstracts of biomedical literature with an improved latent semantic analysis method that combines SVD and NMF matrix factorization. The model maps English and Chinese abstracts into the same semantic space and establishes correspondences between the languages without external dictionaries or knowledge bases, making retrieval in the bilingual space straightforward. It makes full use of the anchor information in the bilingual abstract corpus and builds multiple retrieval models with different values of k, computing a confidence for each model so that all models contribute to the query-text similarity. Term-term, text-text, and term-text similarities are computed in the semantic space, achieving cross-language retrieval of bilingual abstracts with good experimental results.
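The latent-space retrieval idea above can be sketched with a plain SVD-based LSI model over a toy bilingual term-document matrix; the NMF combination, anchor information, and multi-k model fusion of the paper are omitted, so this is only a minimal stand-in.

```python
import numpy as np

# Toy term-document matrix: rows are terms (English and Chinese mixed,
# since bilingual abstracts share one space), columns are documents.
terms = ["gene", "protein", "基因", "蛋白"]
A = np.array([[2., 0.],
              [1., 0.],
              [2., 0.],
              [0., 2.]])          # doc 0 about genes, doc 1 about proteins

k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k]

def fold_in(query_terms):
    """Project a query into the k-dimensional latent space (LSI fold-in)."""
    q = np.array([1.0 if t in query_terms else 0.0 for t in terms])
    return (q @ Uk) / sk

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

qv = fold_in({"gene"})
doc_vecs = Vk.T                   # document coordinates in latent space
sims = [cos(qv, d) for d in doc_vecs]
```

The English query "gene" retrieves the document whose only evidence is the Chinese term 基因, which is the cross-language effect the shared space provides.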

9.
An effective approach to semantic image retrieval is to find the association between low-level image features and textual semantics. Building on kernel methods and the graph Laplacian, this paper proposes a correlated-space embedding algorithm that uses latent semantic indexing of text and visual words of image features to model the correlation between two heterogeneous spaces, the text semantic space and the image feature space, thereby uncovering latent associations between textual semantics and visual words and enabling semantic image retrieval. The algorithm takes consistency of the data's manifold structure as a prior constraint and embeds the data points of both spaces into a single correlated space. Compared with canonical correlation analysis, this correlated embedding not only reveals correlations between different data spaces but also preserves the original data distribution in the correlated space, improving the algorithm's reliability. Experiments verify the effectiveness of the algorithm, providing a feasible method for semantic image retrieval.

10.
廖祥文, 刘德元, 桂林, 程学旗, 陈国龙. 《软件学报》 2018, 29(10): 2899-2914
Opinion retrieval is an active research topic in natural language processing. Existing opinion retrieval models often cannot abstract words to the level of knowledge and concepts according to context, ignore semantic connections between words at the semantic level, and lack the ability to generalize opinions at the opinion level. This paper therefore proposes an opinion retrieval method that fuses text conceptualization with network representation. The method first uses a knowledge graph to conceptualize the user query and the text into the correct concept space, represents the word nodes of the knowledge graph as low-dimensional vectors with network representation learning, derives query and text vectors from the word vectors, and computes query-text relevance with cosine similarity; a classification method based on statistical machine learning then mines the opinions in the text. Finally, features are built from the concept space, the network representation space, and the opinion analysis results to serve the opinion retrieval model. Experiments show that the proposed model effectively improves the opinion retrieval performance of several retrieval models: the opinion retrieval method based on the unified relevance model improves MAP by 6.1% and 9.3% over the baselines on two datasets, and the learning-to-rank method improves MAP by 2.3% and 14.6%.
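A minimal sketch of the relevance-plus-opinion scoring described above; the node embeddings are hypothetical stand-ins for the paper's network representations, and combining relevance and opinion probability by a product is an assumption, not the paper's unified relevance model.

```python
import numpy as np

# Hypothetical pretrained node embeddings for concepts (stand-ins for
# knowledge-graph network representations).
emb = {
    "phone":  np.array([0.9, 0.1]),
    "camera": np.array([0.8, 0.3]),
    "pasta":  np.array([0.1, 0.9]),
}

def text_vec(tokens):
    """Average the embeddings of a text's concept tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def opinion_score(relevance, opinion_prob):
    """Fuse topical relevance with a classifier's opinion probability."""
    return relevance * opinion_prob

rel = cosine(text_vec(["phone"]), text_vec(["camera", "phone"]))
score = opinion_score(rel, 0.7)   # 0.7: assumed opinion-classifier output
```
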

11.
Application of Full-Text Indexing in an Office Automation System   Total citations: 1 (self: 0, others: 1)
Content-based full-text retrieval is widely used in full-text databases. To solve the problem of quickly searching the large number of documents in an office automation (OA) system, SQL Server full-text indexing is applied in OA system development. The paper first introduces the SQL Server full-text retrieval workflow and then applies it to implement document search in the document management module of an OA system; the full-text retrieval user interface layer is developed with ASP.NET and the business layer in C#.

12.
WEBSOM is a recently developed neural method for exploring full-text document collections, for information retrieval, and for information filtering. In WEBSOM the full-text documents are encoded as vectors in a document space somewhat like in earlier information retrieval methods, but in WEBSOM the document space is formed in an unsupervised manner using the Self-Organizing Map algorithm. In this article the document representations the WEBSOM creates are shown to be computationally efficient approximations of the results of a certain probabilistic model. The probabilistic model incorporates information about the similarity of use of different words to take into account their semantic relations.
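A minimal one-dimensional Self-Organizing Map sketch in the spirit of WEBSOM's unsupervised document-space formation; real WEBSOM maps are two-dimensional and trained on word-category histograms, both of which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic "document clusters" in a 2-D feature space.
docs = np.vstack([rng.normal(0.0, 0.05, (20, 2)) + [0, 0],
                  rng.normal(0.0, 0.05, (20, 2)) + [1, 1]])
nodes = rng.uniform(0, 1, (5, 2))                    # 5 map nodes

for t in range(200):
    x = docs[rng.integers(len(docs))]
    bmu = np.argmin(((nodes - x) ** 2).sum(1))       # best-matching unit
    for j in range(len(nodes)):
        h = np.exp(-((j - bmu) ** 2) / 2.0)          # neighborhood kernel
        nodes[j] += 0.1 * h * (x - nodes[j])         # move toward input

# After training, documents from the two clusters map to different units.
bmu_a = np.argmin(((nodes - docs[0]) ** 2).sum(1))
bmu_b = np.argmin(((nodes - docs[-1]) ** 2).sum(1))
```
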

13.
Research in text categorization has been confined to whole-document-level classification, probably due to a lack of full-text test collections. However, the full-length documents available today in large quantities pose renewed interest in text classification. A document is usually written in an organized structure to present its main topic(s). This structure can be expressed as a sequence of subtopic text blocks, or passages. In order to reflect the subtopic structure of a document, we propose a new passage-level or passage-based text categorization model, which segments a test document into several passages, assigns categories to each passage, and merges the passage categories into the document categories. Compared with traditional document-level categorization, two additional steps, passage splitting and category merging, are required in this model. Using four subsets of the Reuters text categorization test collection and a full-text test collection whose documents vary from tens to hundreds of kilobytes, we evaluate the proposed model, especially the effectiveness of various passage types and the importance of passage location in category merging. Our results show simple windows are best for all test collections tested in these experiments. We also found that passages contribute to the main topic(s) to different degrees, depending on their location in the test document.
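The three steps above (passage splitting, passage classification, category merging) can be sketched with simple fixed-size windows, the passage type the paper found best; the keyword "classifier" and the categories are toy stand-ins for a trained model.

```python
def split_windows(words, size=4):
    """Split a document into non-overlapping word windows ("passages")."""
    return [words[i:i + size] for i in range(0, len(words), size)]

def classify_passage(passage):
    """Assign categories to one passage (toy keyword rules)."""
    cats = set()
    if any(w in {"stock", "market"} for w in passage):
        cats.add("finance")
    if any(w in {"match", "league"} for w in passage):
        cats.add("sports")
    return cats

def categorize(words, min_votes=1):
    """Merge passage categories into document categories by vote count."""
    votes = {}
    for p in split_windows(words):
        for c in classify_passage(p):
            votes[c] = votes.get(c, 0) + 1
    return {c for c, n in votes.items() if n >= min_votes}

doc = "the stock market fell while the league match was postponed".split()
cats = categorize(doc)
```

Raising `min_votes` is one simple way to weight passages differently in the merging step.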

14.
As semi-structured documents such as XML become ever more widely used, research on the similarity of semi-structured XML data has become especially important in the database and information retrieval fields. Given an XML document collection D and a user query q, XML retrieval finds the XML documents in D that match q. For effective XML information retrieval, a new algorithm for computing the similarity between a user query and an XML document is proposed. The algorithm has three steps: expand the user query q with WordNet synonyms to obtain q'; compute a digital signature for q' and for every XML document in D, and filter D by matching signatures, discarding the many documents that cannot match the query to obtain a subset D' ⊆ D; exactly match q' against the documents in D' to obtain the retrieval results.
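The signature-filtering step above can be sketched with fixed-width bitmasks; the synonym table stands in for WordNet expansion, and CRC32 bit positions are an implementation assumption. Note the filter may admit false positives, which the exact-matching third step then removes.

```python
import zlib

SYNONYMS = {"car": {"car", "auto", "automobile"}}   # toy WordNet stand-in

def signature(words, bits=64):
    """Fixed-width bitmask: one (deterministic) bit per word."""
    sig = 0
    for w in words:
        sig |= 1 << (zlib.crc32(w.encode()) % bits)
    return sig

def expand(query_words):
    """Expand the query with synonyms (stand-in for WordNet expansion)."""
    out = set()
    for w in query_words:
        out |= SYNONYMS.get(w, {w})
    return out

def filter_docs(query_words, docs):
    """Keep documents whose signature covers some expanded query term."""
    q_terms = expand(query_words)
    kept = []
    for doc in docs:
        d_sig = signature(doc)
        if any(signature([t]) & d_sig == signature([t]) for t in q_terms):
            kept.append(doc)
    return kept

docs = [["the", "auto", "engine"], ["pasta", "recipe"]]
kept = filter_docs(["car"], docs)
```
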

15.
HIRMA provides an integrated environment for querying any full-text document base with natural-language sentences, returning a set of documents relevant to the query. It also supports hypertextual navigation of the document base. The system uses content-based document representation and retrieval methods.

In this paper the representation framework as well as the retrieval and navigation algorithms used by HIRMA are described. Coverage and portability across application domains are supported by the lexical acquisition system ARIOSTO, which provides the lexical knowledge and processing methods needed to extract the semantic representation of document content from raw text.


16.
This paper reports a document retrieval technique that retrieves machine-printed Latin-based document images through word shape coding. Adopting the idea of image annotation, a word shape coding scheme is proposed, which converts each word image into a word shape code by using a few shape features. The text contents of imaged documents are thus captured by a document vector constructed with the converted word shape code and word frequency information. Similarities between different document images are then gauged based on the constructed document vectors. We divide the retrieval process into two stages. Based on the observation that documents of the same language share a large number of high-frequency language-specific stop words, the first stage retrieves documents with the same underlying language as that of the query document. The second stage then re-ranks the documents retrieved in the first stage based on the topic similarity. Experiments show that document images of different languages and topics can be retrieved properly by using the proposed word shape coding scheme.
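A sketch of word shape coding: each character is mapped to a coarse shape class (ascender / descender / x-height), so a word image's code can be approximated from its transcription. The three-class scheme is an illustrative assumption, not the paper's exact feature set.

```python
ASCENDERS = set("bdfhklt")    # letters rising above the x-height
DESCENDERS = set("gjpqy")     # letters dropping below the baseline

def shape_code(word):
    """Convert a word into its shape code string."""
    out = []
    for ch in word.lower():
        if ch in ASCENDERS:
            out.append("A")
        elif ch in DESCENDERS:
            out.append("D")
        else:
            out.append("x")
    return "".join(out)

def doc_vector(words):
    """Document vector: frequency of each word shape code."""
    vec = {}
    for w in words:
        c = shape_code(w)
        vec[c] = vec.get(c, 0) + 1
    return vec

code = shape_code("happy")
```

Different words can share a code (the scheme is lossy), which is why the paper combines shape codes with frequency information at the document level.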

17.
18.
Technical documents, which often have complicated structures, are often produced during Architecture/Engineering/Construction (A/E/C) projects and research. Applying information retrieval (IR) techniques directly to long or multi-topic documents often does not lead to satisfactory results. One way to address the problem is to partition each document into several “passages”, and treat each passage as an independent document. In this research, a novel passage partitioning approach is designed. It generates passages according to domain knowledge, which is represented by base domain ontology. Such a passage is herein defined as an OntoPassage. In order to demonstrate the advantage of the OntoPassage partitioning approach, this research implements a concept-based IR system to illustrate the application of such an approach. The research also compares the OntoPassage partitioning approach with several conventional passage partitioning approaches to verify its IR effectiveness. It is shown that, with the proposed OntoPassage approach, IR effectiveness on domain-specific technical reports is as good as conventional passage partitioning approaches. In addition, the OntoPassage approach provides the possibility to display the concepts in each passage, and concept-based IR may thus be implemented.
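An OntoPassage-style partitioning can be sketched by grouping consecutive sentences that mention the same top-level ontology concept; the concept lexicon below is a toy stand-in for the base domain ontology, and the grouping rule is an illustrative assumption.

```python
# Hypothetical A/E/C domain ontology: term -> top-level concept.
ONTOLOGY = {
    "concrete": "Materials", "steel": "Materials",
    "schedule": "Planning", "milestone": "Planning",
}

def concept_of(sentence):
    """Return the first ontology concept a sentence mentions, if any."""
    for word in sentence.lower().split():
        if word in ONTOLOGY:
            return ONTOLOGY[word]
    return None

def onto_passages(sentences):
    """Group consecutive same-concept sentences into passages."""
    passages, current, current_c = [], [], None
    for s in sentences:
        c = concept_of(s) or current_c     # inherit concept if none found
        if current and c != current_c:
            passages.append((current_c, current))
            current = []
        current.append(s)
        current_c = c
    if current:
        passages.append((current_c, current))
    return passages

sents = ["concrete strength was tested",
         "steel reinforcement was added",
         "the schedule slipped by a week"]
passages = onto_passages(sents)
```

Each passage carries its concept label, which is what enables the concept display and concept-based IR mentioned in the abstract.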

19.
This paper presents a knowledge-based approach to managing and retrieving personal documents. The dual document models consist of a document type hierarchy and a folder organization. The document type hierarchy is used to capture the layout, logical and conceptual structures of documents. The folder organization mimics the user's real-world document filing system for organizing and storing documents in an office environment. Predicate-based representation of documents is formalized for specifying knowledge about documents. Document filing and retrieval are predicate-driven. The filing criteria for the folders, which are specified in terms of predicates, govern the grouping of frame instances, regardless of their document types. We incorporated the notions of document type hierarchy and folder organization into the multilevel architecture of document storage. This architecture supports various text-based information retrieval techniques and content-based multimedia information retrieval techniques. The paper also proposes a knowledge-based query-preprocessing algorithm, which reduces the search space. For automating the document filing and retrieval, a predicate evaluation engine with a knowledge base is proposed. The learning agent is responsible for acquiring the knowledge needed by the evaluation engine.
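Predicate-driven filing as described above can be sketched like this: each folder declares a filing predicate over a document's attribute frame, and a document is routed to every folder whose predicate it satisfies. Folder names and attributes are illustrative assumptions.

```python
# Hypothetical folders with their filing predicates.
FOLDERS = {
    "invoices":  lambda d: d.get("type") == "invoice",
    "urgent":    lambda d: d.get("priority", 0) >= 2,
    "from_acme": lambda d: d.get("sender") == "acme",
}

def file_document(doc):
    """Return the folders whose filing predicates the document satisfies."""
    return {name for name, pred in FOLDERS.items() if pred(doc)}

doc = {"type": "invoice", "priority": 3, "sender": "acme"}
folders = file_document(doc)
```

Note that filing depends only on the predicates, not on the document's type, matching the abstract's point that grouping is independent of document types.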

20.
Digital libraries (DLs) in general and technical or cultural preservation applications in particular offer a rich set of multimedia objects like audio, music, images, videos, and also 3D models. However, instead of handling these objects consistently as regular documents - in the same way we treat textual documents - most applications handle them differently. Considering that textual documents are only one media type among many, it's clear that this type of document is handled quite specially. A full-text search engine lets users retrieve a specific document based on its content - that is, one or more words that appear in it. Content-based retrieval of other media types is an active research area, and in the case of 3D documents, only pilot applications exist.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号