Similar Articles
 20 similar articles found (search time: 31 ms)
1.
A Method for Querying Document Databases by Content and Structure   (total citations: 4; self-citations: 0; citations by others: 4)
Documents have an inherent logical structure: concepts such as titles, sections, and paragraphs form a document's internal logic. Different users have different retrieval needs, and how a retrieval system can provide meaningful information has long been a central research task. Combining document structure and content, this paper proposes a new similarity computation method for retrieving structured documents. The method supports retrieval of document content at multiple granularities, from words and phrases up to paragraphs or sections. A question answering system was implemented based on this method and tested on Microsoft's Encarta encyclopedia; experimental comparison with traditional methods shows that the document fragments retrieved by this method are more reasonable and more effective.

2.
In this paper, we advance a technique to develop a user profile for information retrieval through knowledge acquisition techniques. The profile bridges the discrepancy between user-expressed keywords and system-recognizable index terms. The approach presented in this paper is based on the application of personal construct theory to determine a user's vocabulary and his/her view of different documents in a training set. The elicited knowledge is used to develop a model for each phrase/concept given by the user by employing machine learning techniques. Our model correlates the concepts in a user's vocabulary to the index terms present in the documents in the training set. Computation of dependence between the user phrases also contributes to the development of the user profile and to creating a classification of documents. The resulting system is capable of automatically identifying user concepts and translating queries into the index terms computed by the conventional indexing process. The system is evaluated using the standard measures of precision and recall, comparing its performance against that of the SMART system for different queries. This research is supported by NSF grant IRI-8805875.

3.
To find a document in the sea of information, you must embark on a search process, usually computer-aided. In the traditional information retrieval model, the final goal is to identify and collect a small number of documents to read in detail. In this case, a single query yielding a scalar indication of relevance usually suffices. In contrast, document corpus management seeks to understand what is happening in the collection of documents as a whole (i.e. to find relationships among documents). You may indeed read or skim individual documents, but only to better understand the rest of the document set. Document corpus management seeks to identify trends, discover common links and find clusters of similar documents. The results of many single queries must be combined in various ways so that you can discover trends. We describe a new system called the Stereoscopic Field Analyzer (SFA) that aids in document corpus management by employing 3D volumetric visualization techniques in a minimally immersive real-time interaction style. This interactive information visualization system combines two-handed interaction and stereoscopic viewing with glyph-based rendering of the corpus contents. SFA is paired with a dynamic hypertext environment for text corpora, called Telltale, that provides text indexing, management and retrieval based on n-grams (n-character sequences of text). Telltale is a document management and information retrieval engine that provides document similarity measures (n-gram-based m-dimensional vector inner products), which SFA visualizes for analyzing patterns and trends within the corpus.
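The n-gram similarity measure described for Telltale can be sketched as a normalized inner product of character n-gram frequency vectors. This is a minimal illustration, not the actual Telltale implementation; the choice of n = 3 and cosine normalization are assumptions:

```python
from collections import Counter
import math

def ngrams(text, n=3):
    """Character n-gram frequency vector of a text (as a Counter)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(doc_a, doc_b, n=3):
    """Cosine-normalized inner product of two n-gram frequency vectors."""
    a, b = ngrams(doc_a, n), ngrams(doc_b, n)
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0
```

Identical texts score 1.0 and texts sharing no n-grams score 0.0, giving the document-to-document distances that a visualization front end can render as glyph positions.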

4.
《Knowledge》2000,13(5):285-296
Machine-learning techniques play an important role in information filtering, where the main objective is to obtain user profiles. To decrease the burden of on-line learning, it is important to seek suitable structures to represent user information needs. This paper proposes a model for information filtering on the Web. The user information need is described at two levels in this model: profiles at the category level, and Boolean queries at the document level. To efficiently estimate the relevance between the user information need and documents, the user information need is treated as a rough set on the space of documents. Rough set decision theory is used to classify new documents according to the user information need. As a result, the new documents are divided into three parts: a positive region, a boundary region, and a negative region. An experimental system, JobAgent, is also presented to verify this model, and it shows that the rough-set-based model provides an efficient approach to the information overload problem.
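The three-region split above can be illustrated with a decision-theoretic three-way classification. This is only a sketch: the `relevance_prob` estimator and the thresholds `alpha`/`beta` are hypothetical stand-ins for the paper's rough-set decision rules:

```python
def three_way_classify(docs, relevance_prob, alpha=0.7, beta=0.3):
    """Three-way decision in the spirit of rough set theory: documents whose
    estimated relevance probability is at least alpha go to the positive
    region, those at or below beta go to the negative region, and the rest
    fall into the boundary region (deferred for closer inspection).
    Thresholds are illustrative, not taken from the paper."""
    regions = {"positive": [], "boundary": [], "negative": []}
    for d in docs:
        p = relevance_prob(d)
        if p >= alpha:
            regions["positive"].append(d)
        elif p <= beta:
            regions["negative"].append(d)
        else:
            regions["boundary"].append(d)
    return regions
```

Only boundary-region documents need further (expensive) judgment, which is what makes the three-way split useful for filtering at scale.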

5.
TEXPROS (TEXt PROcessing System) is an automatic document processing system which supports text-based information representation and manipulation, conveying meanings from stored information within office document texts. A dual modeling approach is employed to describe office documents and support document search and retrieval. The frame templates for representing document classes are organized to form a document type hierarchy. Based on its document type, the synopsis of a document is extracted to form its corresponding frame instance. According to user-predefined criteria, these frame instances are stored in different folders, which are organized as a folder organization (i.e., a repository of frame instances associated with their documents). The concept of linking folders establishes filing paths for automatically filing documents in the folder organization. By integrating the document type hierarchy and folder organization, the dual modeling approach provides efficient frame instance access by limiting searches to those frame instances of a document type within the folders which appear most similar to the corresponding queries. This paper presents an agent-based document filing system using folder organization. A storage architecture is presented that incorporates the document type hierarchy, folder organization and original document storage into a three-level storage system. This folder organization supports an effective filing strategy and allows rapid frame instance searches by confining the search to a predicate-driven retrieval method. A predicate specification is proposed for specifying criteria on filing paths in terms of user-predefined predicates for governing document filing. A method for evaluating whether a given frame instance satisfies the criteria of a filing path is presented. The basic operations for constructing and reorganizing a folder organization are proposed.

6.
More people than ever before have access to information through the World Wide Web; information volume and the number of users both continue to expand. Traditional search methods based on keywords are not effective, resulting in large lists of documents, many of which are unrelated to users' needs. One way to improve information retrieval is to associate meaning with users' queries by using ontologies: knowledge bases that encode a set of concepts about one domain and their relationships. It is usual to encode a knowledge base using a single ontology, but a document collection can deal with different domains, each organized into its own ontology. This work presents a novel way to represent and organize knowledge from distinct domains using multiple ontologies that can be related. The model allows the ontologies, as well as the relationships between concepts from distinct ontologies, to be represented independently. Additionally, fuzzy set theory techniques are employed to deal with knowledge subjectivity and uncertainty. This approach to organizing knowledge and an associated query expansion method are integrated into a fuzzy model for information retrieval based on multi-related ontologies. The performance of a search engine using this model is compared with another fuzzy-based approach for information retrieval, and with the Apache Lucene search engine. Experimental results show that this model improves precision and recall measures.

7.
A Knowledge-Based Approach to Effective Document Retrieval   (total citations: 3; self-citations: 0; citations by others: 3)
This paper presents a knowledge-based approach to effective document retrieval. This approach is based on a dual document model that consists of a document type hierarchy and a folder organization. A predicate-based document query language is proposed to enable users to precisely and accurately specify the search criteria and their knowledge about the documents to be retrieved. A guided search tool is developed as an intelligent natural-language-oriented user interface to assist users in formulating queries. Supported by an intelligent question generator, an inference engine, a question base, and a predicate-based query composer, the guided search collects the most important information known to the user to retrieve the documents that satisfy the user's particular interests. A knowledge-based query processing and search engine is devised as the core component in this approach. Algorithms are developed for the search engine to effectively and efficiently retrieve the documents that match the query.

8.
In an empirical user study, we assessed two approaches to ranking the results from a keyword search using semantic contextual match based on Latent Semantic Analysis. These techniques involved searches initiated from words found in a seed document within a corpus. The first approach used the sentence around the search query in the document as context while the second used the entire document. With a corpus of 20,000 documents and a small proportion of relevant documents (<0.1%), both techniques outperformed a conventional keyword search on a recall-based information retrieval (IR) task. These context-based techniques were associated with a reduction in the number of searches conducted, an increase in users’ precision and, to a lesser extent, an increase in recall. This improvement was strongest when the ranking was based on document, rather than sentence, context. Individuals were more effective on the IR task when the lists returned by the techniques were ranked better. User performance on the task also correlated with achievement on a generalized IQ test but not on a linguistic ability test.

9.
We seek to leverage an expert user's knowledge about how information is organized in a domain and how information is presented in typical documents within a particular domain-specific collection, to effectively and efficiently meet the expert's targeted information needs. We have developed the semantic components model to describe important semantic content within documents. The semantic components model for a given collection (based on a general understanding of the type of information needs expected) consists of a set of document classes, where each class has an associated set of semantic components. Each semantic component instance consists of segments of text about a particular aspect of the main topic of the document and may not correspond to structural elements in the document. The semantic components model represents document content in a manner that is complementary to full text and keyword indexing. This paper describes how the semantic components model can be used to improve an information retrieval system. We present experimental evidence from a large interactive searching study that compared the use of semantic components in a system with full text and keyword indexing, where we extended the query language to allow users to search using semantic components, to a base system that did not have semantic components. We evaluate the systems from a system perspective, where semantic components were shown to improve document ranking for precision-oriented searches, and from a user perspective. We also evaluate the systems from a session-based perspective, evaluating not only the results of individual queries but also the results of multiple queries during a single interactive query session.

10.
Kwong, Linus W.; Ng, Yiu-Kai. World Wide Web, 2003, 6(3): 281-303
To retrieve Web documents of interest, most Web users rely on Web search engines. All existing search engines provide a query facility for users to search for the desired documents using search-engine keywords. However, when a search engine retrieves a long list of Web documents, the user might need to browse through each retrieved document in order to determine which document is of interest. We observe that there are two kinds of problems involved in the retrieval of Web documents: (1) an inappropriate selection of keywords specified by the user; and (2) poor precision in the retrieved Web documents. To solve these problems, we propose an automatic binary-categorization method that is applicable for recognizing multiple-record Web documents of interest, which appear often in advertisement Web pages. Our categorization method uses application ontologies and is based on two information retrieval models, the Vector Space Model (VSM) and the Clustering Model (CM). We analyze and cull Web documents to just those applicable to a particular application ontology. The culling analysis (i) uses CM to find a virtual centroid for the records in a Web document, (ii) computes a vector in a multi-dimensional space for this centroid, and (iii) compares the vector with the predefined ontology vector of the same multi-dimensional space using VSM, where we consider the magnitudes of the vectors as well as the angle between them. Our experimental results show that we have achieved an average of 90% recall and 97% precision in recognizing Web documents belonging to the same category (i.e., domain of interest). Thus our categorization discards very few documents it should have kept and keeps very few it should have discarded.
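The culling analysis (centroid, then a VSM comparison using both magnitude and angle) might look roughly like this. The thresholds `min_cos` and `max_len_ratio` are illustrative assumptions, not values from the paper:

```python
import math

def centroid(vectors):
    """Virtual centroid of the record vectors in a multi-record document (the CM step)."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def matches_ontology(doc_vectors, ontology_vec, min_cos=0.8, max_len_ratio=2.0):
    """Compare the document centroid with the ontology vector using both the
    angle (cosine) and the magnitudes (length ratio). Thresholds are hypothetical."""
    c = centroid(doc_vectors)
    lc = math.sqrt(sum(x * x for x in c))
    lo = math.sqrt(sum(x * x for x in ontology_vec))
    ratio = max(lc, lo) / min(lc, lo) if min(lc, lo) else float("inf")
    return cosine(c, ontology_vec) >= min_cos and ratio <= max_len_ratio
```

Requiring comparable magnitudes in addition to a small angle rejects documents that merely mention the ontology's terms in passing.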

11.
Analyzing retrieval model performance using retrievability (maximizing the findability of documents) has recently evolved as an important measure for recall-oriented retrieval applications. Most of the work in this domain is focused either on analyzing retrieval model bias or on proposing different retrieval strategies for increasing document retrievability. However, little is known about the relationship between retrievability and other information retrieval effectiveness measures such as precision, recall, MAP and others. In this study, we analyze the relationship between retrievability and effectiveness measures. Our experiments on the TREC chemical retrieval track dataset reveal that these two independent goals of information retrieval, maximizing the retrievability of documents and maximizing the effectiveness of retrieval models, are quite related to each other. This correlation provides an attractive alternative for evaluating, ranking or optimizing retrieval models' effectiveness on a given corpus without requiring any ground truth (relevance judgments).
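Retrievability is commonly computed by issuing a large query set and crediting a document each time it appears within a rank cutoff. A minimal sketch, assuming the usual cumulative scoring function with cutoff `c` (the `retrieve` callable and the query set are placeholders):

```python
def retrievability(corpus_ids, queries, retrieve, c=100):
    """Retrievability r(d): for each query q, every document ranked within
    the top c of retrieve(q) earns one credit. `retrieve(q)` must return a
    ranked list of document ids; c=100 is an illustrative cutoff."""
    r = {d: 0 for d in corpus_ids}
    for q in queries:
        for rank, d in enumerate(retrieve(q), start=1):
            if rank > c:
                break
            if d in r:
                r[d] += 1
    return r
```

The spread of the resulting scores (often summarized with a Gini coefficient) indicates how strongly the retrieval model favors some documents over others, with no relevance judgments required.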

12.
时雷, 席磊, 段其国. 《计算机科学》 (Computer Science), 2007, 34(10): 228-229
This paper proposes a personalized Web search system based on rough set theory. Keywords in the user preference file are grouped to represent the user's interest categories. Rough set theory is used to handle the inherent vagueness of natural language, and the query is expanded according to the user preference file. The search component then uses the expanded query to retrieve relevant information. To further exclude irrelevant information, the ranking component computes the degree of similarity between the query and each search result and orders the results accordingly. Compared with a traditional search engine, experimental results show that the system effectively improves the precision of search results and satisfies users' personalized needs.

13.
To address the classical rough set model's difficulty in classifying the index-term space and in capturing inter-class associations, this paper introduces conditional probability relations, combined with rough set theory, into information retrieval, and proposes an information retrieval model based on probabilistic rough sets. A conditional probability relation is defined over the index-term space to automatically mine concept-similarity classes and form a concept space. Methods are defined for computing the semantic closeness between a document and a query, and between documents, and retrieval results are ranked by this closeness. A simulated example demonstrates the feasibility and effectiveness of the method.

14.
Query-result similarity is usually computed by comparing retrieved results against the query expression. This paper proposes an algorithm based on Bayesian classification for computing the similarity of XML query results. After computing the similarity between each retrieved document and the query, a Bayesian classifier partitions the retrieved XML documents into relevant and irrelevant sets, and the final similarity value is determined by computing the similarity between the relevant and irrelevant documents. Experimental analysis shows that, without affecting recall, the resulting similarity is about 15% more accurate than that of traditional methods, effectively improving retrieval performance.
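The relevant/irrelevant split can be illustrated with a minimal multinomial Naive Bayes classifier. This is a generic sketch with hypothetical token features and labels, not the paper's actual classifier or training data:

```python
import math
from collections import Counter

def train_nb(labeled_docs):
    """Train a tiny multinomial Naive Bayes on (tokens, label) pairs,
    with label in {"rel", "irr"}. Returns (per-class token counts,
    per-class document totals, vocabulary)."""
    counts = {"rel": Counter(), "irr": Counter()}
    totals = Counter()
    for tokens, label in labeled_docs:
        counts[label].update(tokens)
        totals[label] += 1
    vocab = set(counts["rel"]) | set(counts["irr"])
    return counts, totals, vocab

def classify(tokens, model):
    """Assign the label with the highest log-posterior, using Laplace smoothing."""
    counts, totals, vocab = model
    n = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label in ("rel", "irr"):
        lp = math.log(totals[label] / n)
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)  # add-one smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Once each retrieved document is assigned to one of the two sets, a final score can be derived from its similarity to the two partitions, as the abstract describes.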

15.
Most Web search engines use the content of Web documents and their link structures to assess the relevance of a document to the user's query. With the growth of the information available on the Web, it becomes difficult for such search engines to satisfy a user information need expressed by a few keywords. Personalized information retrieval is a promising way to resolve this problem by modeling the user profile from the user's general interests and then integrating it into a personalized document ranking model. In this paper, we present a personalized search approach that involves a graph-based representation of the user profile. The user profile refers to the user interest in a specific search session, defined as a sequence of related queries. It is built by means of score propagation that activates a set of semantically related concepts of a reference ontology, namely the ODP. The user profile is maintained across related search activities using a graph-based merging strategy. For the purpose of detecting related search activities, we define a session boundary recognition mechanism based on the Kendall rank correlation measure that tracks changes in the dominant concepts held by the user profile relative to a newly submitted query. Personalization is performed by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out using the HARD 2003 TREC collection and shows that our session boundary recognition mechanism based on the Kendall measure provides significantly better precision than non-rank-based measures such as the cosine and WebJaccard similarity measures. Moreover, the results show that graph-based search personalization is effective for improving search accuracy.
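A Kendall-based boundary check of the kind described above can be sketched as follows. The item-to-rank dictionaries and the `threshold` value are assumptions for illustration, not details from the paper:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same items,
    each given as an item -> rank dict:
    tau = (concordant - discordant) / (n * (n - 1) / 2)."""
    items = list(rank_a)
    pairs = list(combinations(items, 2))
    conc = disc = 0
    for x, y in pairs:
        s = (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y])
        if s > 0:
            conc += 1
        elif s < 0:
            disc += 1
    return (conc - disc) / len(pairs)

def new_session(profile_ranks, query_ranks, threshold=0.0):
    """Flag a session boundary when the dominant-concept rankings held by the
    profile and those induced by the new query diverge; the threshold is hypothetical."""
    return kendall_tau(profile_ranks, query_ranks) < threshold
```

When the correlation stays high, the new query is treated as a continuation of the session and the profile keeps accumulating; when it drops below the threshold, a fresh session profile is started.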

16.
A new customized document categorization scheme using rough membership   (total citations: 1; self-citations: 0; citations by others: 1)
One of the problems that plague document ranking is the inherent ambiguity, which arises due to the nuances of natural language. Though two documents may contain the same set of words, their relevance may be very different to a single user, since the context of the words usually determines the relevance of a document. Context of a document is very difficult to model mathematically other than through user preferences. Since it is difficult to perceive all possible user interests a priori and install filters for the same at the server side, we propose a rough-set-based document filtering scheme which can be used to build customized filters at the user end. The documents retrieved by a traditional search engine can then be filtered automatically by this agent and the user is not flooded with a lot of irrelevant material. A rough-set-based classificatory analysis is used to learn the user's bias for a category of documents. This is then used to filter out irrelevant documents for the user. To do this we have proposed the use of novel rough membership functions for computing the membership of a document to various categories.

17.
Semantic search has been one of the motivations of the semantic Web since it was envisioned. We propose a model for the exploitation of ontology-based knowledge bases to improve search over large document repositories. In our view of information retrieval on the semantic Web, a search engine returns documents rather than, or in addition to, exact values in response to user queries. For this purpose, our approach includes an ontology-based scheme for the semiautomatic annotation of documents and a retrieval system. The retrieval model is based on an adaptation of the classic vector-space model, including an annotation weighting algorithm and a ranking algorithm. Semantic search is combined with conventional keyword-based retrieval to achieve tolerance to knowledge base incompleteness. Experiments are shown where our approach is tested on corpora of significant scale, showing clear improvements with respect to keyword-based search.

18.
As semi-structured documents such as XML become ever more widely used, research on similarity for semi-structured XML data has become particularly important in both the database and information retrieval fields. Given an XML document collection D and a user query q, XML retrieval finds the documents in D that match q. For effective XML retrieval, a new algorithm for computing the similarity between a user query and XML documents is proposed. The algorithm has three steps: expand the user query q with synonyms from WordNet to obtain q'; compute digital signatures for q' and for every XML document in D, and filter D by matching signatures, removing the many documents that cannot match the query to obtain a subset D' ⊆ D; perform exact matching between q' and the documents in D' to produce the retrieval results.
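Signature-based filtering of this kind is often implemented as a superimposed bit signature: a query can only match a document whose signature contains all of the query's bits. A minimal sketch, where the bit width and hash function are illustrative choices, not the paper's scheme:

```python
def signature(tokens, bits=64):
    """Superimposed signature: hash each token to one bit and OR the bits.
    If a document contains all query tokens, its signature necessarily
    contains all query bits, so signature mismatch safely rules it out
    (false positives are possible, false negatives are not)."""
    sig = 0
    for t in tokens:
        sig |= 1 << (hash(t) % bits)
    return sig

def filter_candidates(query_tokens, doc_tokens_list, bits=64):
    """Keep only documents whose signature is a superset of the query's bits."""
    q = signature(query_tokens, bits)
    return [i for i, toks in enumerate(doc_tokens_list)
            if signature(toks, bits) & q == q]
```

The surviving candidates D' then go through the exact matching step; the filter only has to guarantee that no true match is discarded.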

19.
Large-scale information retrieval with latent semantic indexing   (total citations: 9; self-citations: 0; citations by others: 9)
As the amount of electronic information increases, traditional lexical (or Boolean) information retrieval techniques will become less useful. Large, heterogeneous collections will be difficult to search since the sheer volume of unranked documents returned in response to a query will overwhelm the user. Vector-space approaches to information retrieval, on the other hand, allow the user to search for concepts rather than specific words, and rank the results of the search according to their relative similarity to the query. One vector-space approach, Latent Semantic Indexing (LSI), has achieved up to 30% better retrieval performance than lexical searching techniques by employing a reduced-rank model of the term-document space. However, the original implementation of LSI lacked the execution efficiency required to make LSI useful for large data sets. A new implementation of LSI, LSI++, seeks to make LSI efficient, extensible, portable, and maintainable. The LSI++ Application Programming Interface (API) allows applications to immediately use LSI without knowing the implementation details of the underlying system. LSI++ supports both serial and distributed searching of large data sets, providing the same programming interface regardless of the implementation actually executing. In addition, a World Wide Web interface was created to allow simple, intuitive searching of document collections using LSI++. Timing results indicate that the serial implementation of LSI++ searches up to six times faster than the original implementation of LSI, while the parallel implementation searches nearly 180 times faster on large document collections.
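The reduced-rank model behind LSI can be sketched with a truncated SVD. This is a generic NumPy illustration of the technique, not the LSI++ implementation; k = 2 is only for the toy example, while real systems use a rank of a few hundred:

```python
import numpy as np

def lsi_rank(term_doc, query_vec, k=2):
    """Rank documents with reduced-rank LSI: truncate the SVD of the
    term-by-document matrix to rank k, project the query into the latent
    space, and sort documents by cosine similarity there."""
    A = np.asarray(term_doc, dtype=float)   # rows = terms, columns = documents
    q = np.asarray(query_vec, dtype=float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
    docs_k = (np.diag(sk) @ Vtk).T          # document vectors in latent space
    q_k = Uk.T @ q                          # query projected into latent space
    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1)
                           * np.linalg.norm(q_k) + 1e-12)
    return list(np.argsort(-sims))          # document indices, best first
```

Because similar documents collapse onto nearby directions in the reduced space, a query can match documents that share concepts with it even when they share no literal terms.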

20.
Ontology-based information retrieval is an effective route to knowledge retrieval. Because the formalized concepts currently supported by ontologies are not sufficient to represent incomplete knowledge, an information retrieval method based on rough ontologies is proposed, in which the ontology is represented as an ontology information system. After the user submits a keyword query, the method first combines keyword-based retrieval to search for documents in a predefined semantic document space, then uses association search to find the sets of individuals and properties associated with the keywords; the property sets serve as equivalence classes to construct the approximation space of the rough ontology. Finally, the approximation space is used to compute the similarity between the individual sets and the document sets, and documents are ranked by similarity. Experiments show that this method achieves higher precision than keyword-based and classical-ontology-based methods.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号