Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Identifying the query intent of search-engine users is a research topic of considerable interest in information retrieval. This paper proposes a method for identifying Web query intent that fuses multiple classes of features. Web query intent identification is cast as a classification problem, and effective classification features are extracted from several types of resources, including the query text, the content returned by the search engine, and Web query logs. Query intent identification experiments using the proposed method on a manually annotated corpus of real Web queries show that each class of features helps improve identification performance; when all of these features are used together, 88.5% of the test queries receive correct intent labels.
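The multi-source feature-fusion idea above can be sketched as concatenating feature groups from the three resource types and classifying the fused vector. This is a minimal illustrative sketch, not the paper's method: the feature values, the two intent classes, and the nearest-centroid classifier are all stand-ins for the (unspecified) classifier used in the paper.

```python
import numpy as np

def fuse_features(query_feats, serp_feats, log_feats):
    """Concatenate feature groups extracted from the query text,
    search-engine result content, and query logs into one vector."""
    return np.concatenate([query_feats, serp_feats, log_feats])

def nearest_centroid_predict(x, centroids):
    """Assign the intent class whose centroid is closest to x
    (an illustrative stand-in for a trained classifier)."""
    dists = {label: np.linalg.norm(x - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

# Toy fused vectors for two hypothetical intent classes.
centroids = {
    "navigational": np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.1]),
    "informational": np.array([0.1, 0.9, 0.2, 0.8, 0.1, 0.9]),
}
x = fuse_features(np.array([0.8, 0.2]), np.array([0.7, 0.3]), np.array([0.6, 0.2]))
pred = nearest_centroid_predict(x, centroids)
```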

2.
A New Ontology-Based Information Extraction Method*    Cited by: 12 (self-citations: 0, citations by others: 12)
This method combines syntactic parsing with an ontology. First, annotation rules are generated automatically from the concepts, relations, and keywords in a domain ontology. The syntactic structure of articles and sentences is then analyzed, and the parsing results are used together with the previously generated rules to annotate and extract information from documents. Finally, the extraction results are output as records.
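The rule-generation step described above can be sketched as compiling one tagging rule per ontology concept from its keywords and applying the rules to text. This is a simplified sketch (keyword matching only, no syntactic parsing); the mini-ontology and its concepts are hypothetical.

```python
import re

# Hypothetical mini-ontology: concept -> trigger keywords.
ontology = {
    "Person": ["professor", "researcher"],
    "Organization": ["university", "institute"],
}

def build_rules(onto):
    """Auto-generate one tagging rule (a compiled regex) per concept
    from the ontology's keywords, mirroring the rule-generation step."""
    return {c: re.compile(r"\b(" + "|".join(map(re.escape, kws)) + r")\b", re.I)
            for c, kws in onto.items()}

def annotate(text, rules):
    """Apply the rules and return extracted (concept, mention) records."""
    return [(c, m.group(0)) for c, rx in rules.items() for m in rx.finditer(text)]

records = annotate("A professor at the university led the institute.",
                   build_rules(ontology))
```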

3.
4.
Active learning (AL) has been shown to be a useful approach to improving the efficiency of the classification process for remote-sensing imagery. Current AL methods are essentially based on pixel-wise classification. In this paper, a new patch-based active learning (PTAL) framework is proposed for spectral-spatial classification of hyperspectral remote-sensing data. The method consists of two major stages. In the initialization stage, the original hyperspectral images are partitioned into overlapping patches, and for each patch the spectral and spatial information as well as the label are extracted. A small set of patches is randomly selected from the data set for annotation, and a patch-based support vector machine (PTSVM) classifier is initially trained on these patches. In the second stage (a closed loop of querying and retraining), the trained PTSVM classifier is combined with one of three query methods, namely margin sampling (MS), entropy query-by-bagging (EQB), and multi-class level uncertainty (MCLU), and is employed to query the most informative samples from the candidate pool comprising the remaining patches in the data set. The query selection cycle enables the PTSVM model to select the most informative queries for human annotation; these informative queries are then added to the training set. This process runs iteratively until a stopping criterion is met. Finally, the trained PTSVM is employed for patch classification. For comparison with pixel-based active learning (PXAL) models, the patch-level prediction labels of the PTSVM are transformed into pixel-wise labels to obtain the classification maps. Experimental results on three different hyperspectral data sets show that the proposed PTAL methods outperform PXAL methods in both classification accuracy and computational time.
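Of the three query strategies named above, margin sampling (MS) is the simplest to sketch: the learner queries the unlabeled patches whose SVM decision values are closest to the boundary. The decision values below are illustrative numbers, not outputs of a trained PTSVM.

```python
import numpy as np

def margin_sampling(decision_values, k):
    """MS query strategy: pick the k candidate patches closest to the
    SVM decision boundary, i.e. with the smallest |decision value|."""
    margins = np.abs(decision_values)
    return np.argsort(margins)[:k]

# Illustrative decision values for six unlabeled patches.
scores = np.array([1.8, -0.1, 0.5, -2.2, 0.05, 1.1])
queries = margin_sampling(scores, 2)  # indices of the two most informative patches
```

These indices would then be sent for human annotation and the classifier retrained, closing the loop.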

5.
Attribute extraction can be divided into two stages: alignment and semantic labeling. In existing alignment methods, some attributes that share the same label but differ in meaning are wrongly assigned to the same group, and improving the precision of semantic labeling usually requires a large manually labeled training set. This paper therefore proposes a multi-record Web page attribute extraction method that incorporates active learning. To address the misgrouping problem, shallow attribute semantics are introduced to reduce the influence of semantically inconsistent attributes that share the same label. In the semantic labeling stage, an active-learning-based SVM classifier operating on textual, visual, and global page features produces structured data with semantic labels. For the active-learning selection strategy, an uncertainty-based criterion that incorporates global sample information is constructed to select, for labeling, the samples whose semantic classes are predicted least reliably. Experiments on several data sets, including forums and microblogs, show that the proposed method extracts attributes more effectively than existing methods.

6.
We present an information fusion approach for ground vehicle classification based on the emitted acoustic signal. Many acoustic factors can contribute to the classification accuracy of working ground vehicles. Classification relying on a single feature set may lose useful information if its underlying sound production model is not comprehensive. To improve classification accuracy, we consider an information fusion scheme in which various aspects of an acoustic signature are taken into account and emphasized separately by two different feature extraction methods. The first set of features aims to represent internal sound production: a number of harmonic components are extracted to characterize factors related to the vehicle's resonance. The second set of features is extracted by a computationally efficient discriminatory analysis: a group of key frequency components is selected by mutual information, accounting for sound production from the vehicle's exterior parts. In correspondence with this structure, we further put forward a modified Bayesian fusion algorithm that matches each specific feature set with its favored classifier. To assess the proposed approach, experiments are carried out on a data set containing acoustic signals from different types of vehicles. Results indicate that the fusion approach can effectively increase classification accuracy compared to that achieved using either feature set alone, and the Bayesian decision-level fusion is found to outperform a feature-level fusion approach.
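Decision-level Bayesian fusion of two classifiers can be sketched as combining their class posteriors under a conditional-independence assumption. This is a generic naive-Bayes-style combination, not the paper's modified algorithm; the posteriors and uniform priors are illustrative.

```python
import numpy as np

def bayesian_fusion(posteriors_a, posteriors_b, priors):
    """Naive-Bayes-style decision-level fusion: multiply each
    classifier's class posteriors, divide out the shared prior
    (counted twice in the product), and renormalize."""
    fused = posteriors_a * posteriors_b / priors
    return fused / fused.sum()

# Illustrative posteriors from two feature-set-specific classifiers
# over three vehicle classes, with uniform priors.
p_harmonic = np.array([0.6, 0.3, 0.1])   # classifier on harmonic features
p_keyfreq  = np.array([0.5, 0.2, 0.3])   # classifier on key-frequency features
fused = bayesian_fusion(p_harmonic, p_keyfreq, np.ones(3) / 3)
decision = int(np.argmax(fused))
```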

7.
To address the difficulty that existing information retrieval systems have in processing retrieved documents according to the query's needs, an information retrieval model based on relevance feedback is proposed. Query-term decomposition is analyzed, the relevance-feedback mechanism and normalization process are derived, and the document extraction method is described. Through relevance feedback and query-term expansion, the proposed model overcomes the inability of traditional methods to compute the similarity between documents and query terms, and it processes retrieved documents effectively. Simulation results demonstrate the model's effectiveness and feasibility.
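The abstract does not give its feedback formula, so as a point of reference, the classic Rocchio update is a standard way to realize relevance feedback with query expansion: the query vector is moved toward relevant documents and away from non-relevant ones. All vectors and weights below are illustrative.

```python
import numpy as np

def rocchio(query, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update (a standard scheme,
    shown for illustration; the paper's exact mechanism may differ)."""
    q = alpha * query
    q = q + beta * np.mean(rel_docs, axis=0)    # pull toward relevant docs
    q = q - gamma * np.mean(nonrel_docs, axis=0)  # push away from non-relevant
    return np.clip(q, 0, None)  # negative term weights are dropped

query = np.array([1.0, 0.0, 0.0])
rel = np.array([[0.8, 0.6, 0.0], [1.0, 0.4, 0.0]])
nonrel = np.array([[0.0, 0.0, 1.0]])
expanded = rocchio(query, rel, nonrel)
```

The second term of `expanded` becomes positive, i.e. the query now also weights a term it originally lacked.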

8.
Due to the steady increase in the number of heterogeneous types of location information on the internet, it is hard to obtain a complete overview of the geospatial information needed for knowledge-acquisition tasks related to specific geographic locations. Text- and photo-type geographical datasets contain numerous location data, such as location-based tourism information, and therefore define high-dimensional spaces of highly correlated attributes. In this work, we utilize text- and photo-type location information in a novel information-fusion approach that exploits effective image annotation and location-based text mining to enhance the identification of geographic locations and spatial cognition. We describe our feature extraction methods for annotating images, and we use text mining to analyze images and texts simultaneously in order to carry out geospatial text-mining and image-classification tasks. Photo images and textual documents are then projected into a unified feature space to generate a co-constructed semantic space for information fusion. We also employ text-mining approaches to classify documents into categories based upon their geospatial features, with the aim of discovering relationships between documents and geographical zones. The experimental results show that the proposed method can effectively enhance location-based knowledge discovery.

9.
This paper introduces Copula theory into the mining of association patterns among text feature terms and proposes a query expansion algorithm that combines Copula theory with association rule mining. The top n documents of the initial retrieval are used to build a pseudo-relevance-feedback (or user-relevance-feedback) document set; support and confidence measures based on Copula theory are then used to mine, from this set, frequent itemsets and association-rule patterns of feature terms that contain the original query terms, and expansion terms are extracted from these patterns to perform query expansion. Experiments on the NTCIR-5 CLIR Chinese and English corpora show that the algorithm effectively curbs query-topic drift and term mismatch, improves retrieval performance, raises the quality of expansion terms, and reduces the number of ineffective ones.
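The rule-mining step can be sketched with the classical support and confidence definitions (the paper replaces these with Copulas-based measures; this sketch keeps the plain versions). Feedback documents are modeled as term sets, and a term is kept as an expansion term if the rule query_term → term is frequent and confident. Thresholds and documents are illustrative.

```python
def expansion_terms(docs, query_term, min_support=0.4, min_confidence=0.6):
    """Mine expansion terms t such that the rule query_term -> t meets
    classical support and confidence thresholds over the feedback docs."""
    n = len(docs)
    with_q = [d for d in docs if query_term in d]
    candidates = set().union(*with_q) - {query_term}
    expansions = []
    for t in candidates:
        both = sum(1 for d in docs if query_term in d and t in d)
        support = both / n
        confidence = both / len(with_q)
        if support >= min_support and confidence >= min_confidence:
            expansions.append(t)
    return sorted(expansions)

feedback = [{"java", "programming", "coffee"},
            {"java", "programming", "code"},
            {"java", "code", "programming"},
            {"python", "code"}]
terms = expansion_terms(feedback, "java")
```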

10.
Event extraction aims to extract event information from unstructured text and present it in structured form. As a basic approach to event extraction, supervised learning is often constrained by training corpora that are small, class-imbalanced, and of uneven quality. Moreover, traditional feature-engineering-based extraction methods suffer from error propagation, and the feature engineering itself is complex. This paper therefore proposes an event extraction method that combines deep learning with active learning. The confidence of an RNN model's trigger-word classification is incorporated into the active-learning query function, which improves annotation efficiency during active learning and thus the final performance. Experimental results show that this joint approach helps improve event extraction performance, but also that the joint scheme still leaves considerable room for improvement and calls for further study.

11.
Effective recognition of bird species is of great significance for ecological and environmental protection. To address the subtle differences between bird species and the resulting recognition difficulty, a fine-grained bird recognition model based on cross-layer fusion of semantic features is proposed. The model consists of a region localization network, a feature extraction network, and a Cross-layer Feature Fusion Network (CFF-Net). The region localization network automatically locates informative local regions without part-level semantic annotations; the feature extraction network extracts local-region and global image features; and CFF-Net fuses the multiple local and global features to improve the final classification performance. On the public Caltech-UCSD Birds-200-2011 (CUB200-2011) data set, the method achieves a classification accuracy of 87.8%, higher than current mainstream fine-grained bird recognition algorithms, demonstrating excellent classification performance.

12.
13.
While several automatic keyphrase extraction (AKE) techniques have been developed and analyzed, there is little consensus on the definition of the task and a lack of overview of the effectiveness of different techniques. Proper evaluation of keyphrase extraction requires large test collections with multiple opinions, currently not available for research. In this paper, we (i) present a set of test collections derived from various sources with multiple annotations (which we also refer to as opinions in the remainder of the paper) for each document, (ii) systematically evaluate keyphrase extraction using several supervised and unsupervised AKE techniques, and (iii) experimentally analyze the effects of disagreement on AKE evaluation. Our newly created set of test collections spans different types of topical content from general news and magazines, and is annotated with multiple annotations per article by a large annotator panel. Our annotator study shows that for a given document there is large disagreement on the preferred keyphrases, suggesting the need for multiple opinions per document. A first systematic evaluation of ranking and classification of keyphrases using both unsupervised and supervised AKE techniques on the test collections shows the superior effectiveness of supervised models, even with low annotation effort and basic positional and frequency features, and highlights the importance of a suitable keyphrase candidate generation approach. We also study the influence of multiple opinions, training data, and document length on the evaluation of keyphrase extraction. Our new test collection for keyphrase extraction is one of the largest of its kind and will be made available to stimulate future work on reliable evaluation of new keyphrase extractors.
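Evaluation against multiple opinions per document can be sketched as averaging a set-based F1 over each annotator's keyphrase set. This is one simple aggregation, shown for illustration; the paper's exact metric may differ, and the keyphrases below are made up.

```python
def f1(predicted, gold):
    """Set-based F1 between predicted keyphrases and one annotator's set."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    p, r = tp / len(predicted), tp / len(gold)
    return 2 * p * r / (p + r)

def multi_opinion_f1(predicted, opinions):
    """Average F1 over all annotators' keyphrase sets, so that
    disagreement between opinions is reflected in the score."""
    return sum(f1(predicted, g) for g in opinions) / len(opinions)

pred = {"neural networks", "keyphrase extraction"}
opinions = [{"keyphrase extraction", "evaluation"},
            {"neural networks", "keyphrase extraction", "test collections"}]
score = multi_opinion_f1(pred, opinions)
```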

14.
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach combining the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as "hidden" states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.

15.
袁柳  张龙波 《计算机应用》2010,30(12):3401-3406
To address the incompleteness of existing semantic annotation techniques for Web documents, the Latent Dirichlet Allocation (LDA) model is applied to adding semantic annotations to Web documents. Considering that Web documents have clear domain characteristics, domain information is embedded into the traditional LDA model, yielding the proposed Domain-enable LDA model, which improves the completeness of annotation results and avoids forcibly assigning topics to words. Associations are also established between a document's latent topics and the ontology concepts of the document's domain, so that the semantics expressed by the ontology concepts can interpret the latent topics accurately, clarifying the document's semantics and effectively supporting document retrieval. Based on LDA's property of assigning a latent topic to every word, the concept of multi-granularity semantic annotation is proposed. Experiments on the 20news-group and WebKB data sets demonstrate the effectiveness of the Domain-enable LDA model and indicate that multi-granularity annotation helps handle different types of queries effectively.
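The word-level property exploited above can be sketched directly: given a trained model's document-topic and topic-word distributions, each word is annotated with its most probable topic. The distributions below are illustrative stand-ins for a trained (Domain-enable) LDA model, not model output.

```python
import numpy as np

def annotate_words(words, doc_topic, topic_word, vocab):
    """Assign each word its most probable latent topic, using
    P(topic | word, doc) ∝ P(topic | doc) * P(word | topic) --
    the basis of word-level (multi-granularity) annotation."""
    labels = {}
    for w in words:
        j = vocab[w]
        posterior = doc_topic * topic_word[:, j]
        labels[w] = int(np.argmax(posterior))
    return labels

vocab = {"gene": 0, "protein": 1, "market": 2}
doc_topic = np.array([0.7, 0.3])               # topic mixture of one document
topic_word = np.array([[0.5, 0.4, 0.1],        # topic 0: biology-like
                       [0.1, 0.1, 0.8]])       # topic 1: finance-like
labels = annotate_words(["gene", "market"], doc_topic, topic_word, vocab)
```

Each topic would then be interpreted via its associated domain-ontology concept to yield a semantic label.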

16.
Ontology-Based Annotation of Deep Web Data    Cited by: 3 (self-citations: 0, citations by others: 3)
袁柳  李战怀  陈世亮 《软件学报》2008,19(2):237-245
Drawing on the idea of deep annotation from the Semantic Web field, a method for semantically annotating the query results of Web databases is proposed. To obtain complete and consistent annotation results, a domain ontology is introduced into the annotation process as the global schema that Web databases follow. The features of query interfaces and query results are analyzed in detail, and a query-condition resetting strategy is adopted to determine the semantic labels of the query-result data. Tests on Web databases from several domains show that, with the support of a domain ontology, the method assigns correct semantic labels to Web database query results, verifying its effectiveness.

17.
18.
We introduce the task of mapping search engine queries to DBpedia, a major linking hub in the Linking Open Data cloud. We propose and compare various methods for addressing this task, using a mixture of information retrieval and machine learning techniques. Specifically, we present a supervised machine-learning-based method to determine which concepts are intended by a user issuing a query. The concepts are obtained from an ontology and may be used to provide contextual information, related concepts, or navigational suggestions to the user submitting the query. Our approach first ranks candidate concepts using a language modeling for information retrieval framework. We then extract query, concept, and search-history feature vectors for these concepts. Using manual annotations, we train a machine learning algorithm that learns how to select concepts from the candidates given an input query. Simply performing a lexical match between the queries and concepts is found to perform poorly, and so does using retrieval alone, i.e., omitting the concept selection stage. Our proposed method significantly improves upon these baselines, and we find that support vector machines achieve the best performance of the machine learning algorithms evaluated.
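The first stage, ranking candidate concepts with a language-modeling framework, can be sketched as Dirichlet-smoothed query likelihood over each concept's text. The concept labels, texts, collection frequencies, and smoothing parameter below are illustrative, not DBpedia data.

```python
import math

def query_likelihood(query_terms, concept_text, collection_freq, mu=10.0):
    """Dirichlet-smoothed query-likelihood score of a candidate
    concept's text -- a standard LM-for-IR ranker."""
    tokens = concept_text.lower().split()
    n = len(tokens)
    score = 0.0
    for t in query_terms:
        tf = tokens.count(t)
        # Smooth the document estimate with the collection probability.
        p = (tf + mu * collection_freq.get(t, 1e-6)) / (n + mu)
        score += math.log(p)
    return score

cf = {"jaguar": 0.001, "car": 0.01}  # illustrative collection probabilities
candidates = {
    "Jaguar_Cars": "jaguar car manufacturer british car",
    "Jaguar_(animal)": "jaguar big cat americas",
}
ranked = sorted(candidates,
                key=lambda c: query_likelihood(["jaguar", "car"], candidates[c], cf),
                reverse=True)
```

The supervised selection stage would then decide which of the ranked candidates to keep for the query.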

19.
Taking Dongchangfu District as an example, and based on the distinct spectral signatures of the district's various land-cover types, remote-sensing spectral analysis was used on the ERDAS IMAGINE platform to extract land-cover information with both supervised and unsupervised classification. The two methods were compared, the better-performing one was selected for remote-sensing information extraction over the district, and the extraction results were analyzed. The study shows that, for Dongchangfu District, the supervised-classification extraction method based on spectral theory is the more accurate of the two.

20.
Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendation. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary across application scenarios. However, existing example-based chart retrieval methods are built upon non-decoupled, low-level visual features that are hard to interpret, while definition-based ones are constrained to pre-defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What-You-Think-Is-What-You-Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; second, the Retrieval stage embeds the user's intent, expressed as a customized text prompt, together with the bitmap query chart to recall the targeted retrieval results. We develop a prototype WYTIWYR system leveraging a contrastive language-image pre-training (CLIP) model to achieve zero-shot classification as well as multi-modal input encoding, and test the prototype on a large corpus of charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework.
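The intent-conditioned retrieval stage can be sketched as blending a query-chart embedding with a text-prompt (intent) embedding and ranking the corpus by cosine similarity. This is a schematic stand-in for the CLIP-based multimodal encoding in WYTIWYR: the embeddings below are toy vectors, not CLIP outputs, and the blend weight is an assumption.

```python
import numpy as np

def retrieve(query_emb, intent_emb, corpus_embs, w=0.5):
    """Blend chart and intent embeddings, then rank the corpus by
    cosine similarity to the blended query (highest first)."""
    q = w * query_emb + (1 - w) * intent_emb
    q = q / np.linalg.norm(q)
    sims = corpus_embs @ q / np.linalg.norm(corpus_embs, axis=1)
    return np.argsort(-sims)

query_emb = np.array([1.0, 0.0, 0.2])     # e.g., "bar chart" visual features
intent_emb = np.array([0.0, 1.0, 0.1])    # e.g., "minimal style" text prompt
corpus = np.array([[0.9, 0.8, 0.2],       # bar chart, minimal style
                   [1.0, 0.0, 0.0],       # bar chart, busy style
                   [0.0, 0.1, 1.0]])      # unrelated chart
order = retrieve(query_emb, intent_emb, corpus)
```

The chart matching both the visual query and the stated intent ranks first.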

