Similar Documents
20 similar documents retrieved.
1.
Since the beginning of the 21st century, large volumes of knowledge have been stored in documents, but the granularity and structure of these documents make the knowledge difficult to process, integrate, and manage. To extract semantics from such disordered, unstructured data (knowledge) sources, the first task is to extract the knowledge embedded in the data and information and to build a semantic web of textual resources, using RDF to represent the semantic data; next, the TF-IDF algorithm is used to compute confidence scores for the feature words of each text; finally, the textual information is stored in a database to achieve automatic classification of text resources, with the ultimate goal of sharing the knowledge contained in them.
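As a minimal illustration of the TF-IDF step mentioned in this abstract, the sketch below computes standard TF-IDF weights for a toy tokenized corpus; the tokenization and the corpus are assumptions for demonstration only, not the paper's implementation.

```python
import math
from collections import Counter

def tfidf(corpus):
    """Compute TF-IDF weights for each term of each document in a tokenized corpus."""
    n_docs = len(corpus)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in corpus for term in set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy example: three tokenized "documents".
docs = [["knowledge", "sharing", "rdf"],
        ["rdf", "semantic", "web"],
        ["knowledge", "classification", "database"]]
for w in tfidf(docs):
    print(sorted(w.items(), key=lambda kv: -kv[1]))
```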

2.
Knowledge Discovery Based on Search Engines
Data mining is generally applied to large, highly structured databases to discover the knowledge they contain. As online text grows, the knowledge embedded in it becomes increasingly rich, yet it remains difficult to analyze and exploit. Developing an effective approach for discovering the knowledge hidden in text is therefore both important and a pressing research topic. This paper uses the Google search engine to obtain relevant Web pages, which are filtered and cleaned to produce relevant texts; it then performs text clustering, Episode-based event recognition and information extraction, data integration, and data mining to achieve knowledge discovery. A prototype system is presented, and practical tests of the knowledge discovery process yielded good results.

3.
TTFS: Design and Implementation of a Tendency Text Filtering System
Previous research on text filtering has focused mainly on topic-based filtering. With the growth of the Internet, however, tendency-oriented text filtering plays an increasingly important role in network information security. This paper describes TTFS (Tendency Text Filtering System), a system that filters texts expressing a particular tendency toward a given topic. The system makes full use of domain knowledge and employs techniques such as semantic pattern analysis. Experiments show that it achieves high recall and precision at considerable speed.

4.
This article provides an overview of, and thematic justification for, the special issue of the journal of Artificial Intelligence and Law entitled “E-Discovery”. In attempting to define a characteristic “AI & Law” approach to e-discovery, and since a central theme of AI & Law involves computationally modeling legal knowledge, reasoning and decision making, we focus on the theme of representing and reasoning with litigators’ theories or hypotheses about document relevance through a variety of techniques including machine learning. We also identify two emerging techniques for enabling users’ document queries to better express the theories of relevance and connect them to documents: social network analysis and a hypothesis ontology.

5.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS that is based on computing the average term occurrences of terms in documents and that also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, as achieving that is expensive and may be infeasible for large collections. A document collection being fully judged means that every document in the collection acts as a relevant document to a specific query or a group of queries. The discriminative approach used in our proposal is a heuristic method for improving IR effectiveness and performance, and it has the advantage of not requiring previous knowledge about relevance judgements. We compare the performance of the proposed TF-ATO to the well-known TF-IDF approach and show that using TF-ATO results in better effectiveness in both static and dynamic document collections. In addition, this paper investigates the impact that stop-word removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-word removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, it is shown that using the proposed discriminative approach is beneficial for improving IR effectiveness and performance with no information on the relevance judgements for the collection.
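The abstract does not give the TF-ATO formula, so the sketch below is only an interpretation: it assumes the weight of a term is its frequency normalised by its average occurrence over the documents that contain it, with a centroid-based cut acting as the discriminative step. All names and thresholds are illustrative assumptions, not the paper's definition.

```python
from collections import Counter, defaultdict

def tf_ato(corpus, discriminative=True):
    """Hypothetical TF-ATO-style weighting: term frequency normalised by the term's
    average occurrence over the documents containing it, optionally dropping weights
    that fall below the collection centroid (the assumed discriminative step)."""
    totals, containing = Counter(), Counter()
    for doc in corpus:
        for term, c in Counter(doc).items():
            totals[term] += c
            containing[term] += 1
    avg_occ = {t: totals[t] / containing[t] for t in totals}

    weights = [{t: c / avg_occ[t] for t, c in Counter(doc).items()} for doc in corpus]

    if discriminative:
        # Centroid weight of each term across the collection.
        centroid = defaultdict(float)
        for w in weights:
            for t, v in w.items():
                centroid[t] += v / len(corpus)
        # Remove weights below the centroid value for that term.
        weights = [{t: v for t, v in w.items() if v >= centroid[t]} for w in weights]
    return weights

docs = [["ir", "model", "model", "term"], ["term", "weight", "ir"], ["centroid", "term"]]
print(tf_ato(docs))
```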

6.
Documenting software architecture rationale is essential to reuse and evaluate architectures, and several modeling and documentation guidelines have been proposed in the literature. In practice, however, creating and updating these documents is rarely a primary activity in most software projects, and rationale remains hidden in casual and semi-structured records, such as e-mails, meeting notes, wikis, and specialized documents. This paper describes the TREx (Toeska Rationale Extraction) approach to recover, represent and explore rationale information from text documents, combining: (1) pattern-based information extraction to recover rationale; (2) ontology-based representation of rationale and architectural concepts; and (3) facet-based interactive exploration of rationale. Initial results from TREx’s application suggest that some kinds of architecture rationale can be semi-automatically extracted from a project’s unstructured text documents, namely decisions, alternatives and requirements. The approach and some tools are illustrated with a case study of rationale recovery for a financial securities settlement system.

7.
8.
Sharing sustainable and valuable knowledge among knowledge workers is a fundamental aspect of knowledge management. In organizations, knowledge workers usually have personal folders in which they organize and store needed codified knowledge (textual documents) in categories. In such personal folder environments, providing knowledge workers with needed knowledge from other workers’ folders is important because it increases the workers’ productivity and the possibility of reusing and sharing knowledge. Conventional recommendation methods can be used to recommend relevant documents to workers; however, those methods recommend knowledge items without considering whether the items are assigned to the appropriate category in the target user’s personal folders. In this paper, we propose novel document recommendation methods, including content-based filtering and categorization, collaborative filtering and categorization, and hybrid methods, which integrate text categorization techniques to recommend documents to a target worker’s personalized categories. Our experimental results show that the hybrid methods outperform the pure content-based and collaborative filtering and categorization methods. The proposed methods not only proactively notify knowledge workers about relevant documents held by their peers, but also facilitate push-mode knowledge sharing.
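A minimal sketch of how content-based and collaborative evidence might be combined and a recommended document routed to a personal folder. The linear blend, the nearest-centroid categorization and all data structures are assumptions for illustration, not the paper's exact methods.

```python
import numpy as np

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def hybrid_recommend(doc_vecs, user_profile, ratings, similar_users, alpha=0.5):
    """Score each candidate document by a weighted mix of content-based similarity
    to the user's profile and collaborative evidence from similar users' ratings."""
    scores = {}
    for d, vec in doc_vecs.items():
        content = cosine(vec, user_profile)
        collab = np.mean([ratings[u].get(d, 0.0) for u in similar_users]) if similar_users else 0.0
        scores[d] = alpha * content + (1 - alpha) * collab
    return sorted(scores, key=scores.get, reverse=True)

def categorize(doc_vec, category_centroids):
    """Assign a recommended document to the target user's closest personal folder."""
    return max(category_centroids, key=lambda c: cosine(doc_vec, category_centroids[c]))

# Toy example with 3-dimensional term vectors.
docs = {"d1": np.array([1.0, 0.0, 1.0]), "d2": np.array([0.0, 1.0, 1.0])}
profile = np.array([1.0, 0.2, 0.8])
ratings = {"u2": {"d2": 1.0}}
ranked = hybrid_recommend(docs, profile, ratings, ["u2"])
folders = {"projects": np.array([1.0, 0.0, 0.9]), "reading": np.array([0.0, 1.0, 0.3])}
print(ranked, categorize(docs[ranked[0]], folders))
```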

9.
A key task for students learning about a complex topic from multiple documents on the web is to establish the existing rhetorical relations between the documents. Traditional search engines such as Google® display the search results in a listed format, without signalling any relationship between the documents retrieved. New search engines such as Kartoo® go a step further, displaying the results as a constellation of documents, in which the existing relations between pages are made explicit. This presentation format is based on previous studies of single-text comprehension, which demonstrate that providing a graphical overview of the text contents and their relation boosts readers’ comprehension of the topic. We investigated the assumption that graphical overviews can also facilitate multiple-documents comprehension. The present study revealed that undergraduate students reading a set of web pages on climate change comprehended them better when using a search engine that makes explicit the relationships between documents (i.e. Kartoo-like) than when working with a list-like presentation of the same documents (i.e. Google-like). The facilitative effect of a graphical-overview interface was reflected in inter-textual inferential tasks, which required students to integrate key information between documents, even after controlling for readers’ topic interest and background knowledge.

10.
As the foundation of all Web text analysis, the way Web text is represented has a profound influence on analysis results. This paper proposes a multi-dimensional Web text representation method. Traditional representation methods generally extract features only from the text content, yet deeper features and external features of a document can also be used to represent it. We focus on surface features, latent features, and social features: the surface and latent features can be extracted and learned from the text content, while the social features can be obtained by analyzing the interaction between documents and users. The proposed multi-dimensional representation is easy to use and can be applied in a variety of text analysis models. In the experiments, we extend two widely used text clustering algorithms, K-means and hierarchical agglomerative clustering, into multi-dimensional versions named MDKM and MDHAC. Extensive experiments demonstrate the efficiency of the method, and the results of combining different features also reveal some deeper findings.
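A rough sketch of clustering over a multi-dimensional representation, assuming the surface, latent and social features are simply weight-concatenated before a plain K-means pass; this is an illustration of the idea, not the MDKM algorithm itself, and the feature weights are arbitrary.

```python
import numpy as np

def multidim_vectors(surface, latent, social, weights=(1.0, 1.0, 1.0)):
    """Concatenate per-dimension feature matrices into one representation
    (an illustrative stand-in for the multi-dimensional representation)."""
    parts = [w * m for w, m in zip(weights, (surface, latent, social))]
    return np.hstack(parts)

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means on the concatenated vectors (stand-in for MDKM)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy data: 4 documents, each with surface, latent, and social feature blocks.
surface = np.array([[1, 0], [1, 0.1], [0, 1], [0, 0.9]])
latent = np.array([[0.2], [0.3], [0.8], [0.7]])
social = np.array([[5, 0], [4, 1], [0, 6], [1, 5]], dtype=float)
X = multidim_vectors(surface, latent, social, weights=(1.0, 1.0, 0.2))
print(kmeans(X, k=2))
```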

11.
Due to the steady increase in the number of heterogeneous types of location information on the internet, it is hard to organize a complete overview of the geospatial information for the tasks of knowledge acquisition related to specific geographic locations. Text- and photo-type geographical datasets contain numerous location data, such as location-based tourism information, and therefore define high-dimensional spaces of attributes that are highly correlated. In this work, we utilized text- and photo-type location information with a novel information fusion approach that exploits effective image annotation and location-based text mining to enhance identification of geographic location and spatial cognition. In this paper, we describe our feature extraction methods for annotating images and our use of text mining to analyze images and texts simultaneously, in order to carry out geospatial text mining and image classification tasks. Subsequently, photo-images and textual documents are projected into a unified feature space, in order to generate a co-constructed semantic space for information fusion. We also employed text mining approaches to classify documents into various categories based upon their geospatial features, with the aim of discovering relationships between documents and geographical zones. The experimental results show that the proposed method can effectively enhance the tasks of location-based knowledge discovery.

12.
13.
In a world with vast information overload, well-optimized retrieval of relevant information has become increasingly important. Dividing large documents that span multiple topics into sets of coherent subdocuments facilitates the information retrieval process. This paper presents a novel technique to automatically subdivide a textual document into consistent components based on a coherence quantification function. This function is based on stem or term chains linking document entities, such as sentences or paragraphs, based on the reoccurrences of stems or terms. Applying this function to a document results in a coherence graph of the document linking its entities. Spectral graph partitioning techniques are used to divide this coherence graph into a number of subdocuments. A novel technique is introduced to obtain the most suitable number of subdocuments. These subdocuments are an aggregation of (not necessarily adjacent) entities. Performance tests are conducted in test environments based on standardized datasets to prove the algorithm’s capabilities. The relevance of these techniques for information retrieval and text mining is discussed.
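A small sketch of the general idea, assuming term chains are approximated by shared-token counts between sentences and that a single spectral bisection (the sign of the Fiedler vector of the graph Laplacian) suffices; the paper's coherence function and its method for choosing the number of subdocuments are not reproduced.

```python
import numpy as np

def coherence_graph(sentences):
    """Adjacency matrix linking sentences by the number of shared (lower-cased)
    tokens -- a rough stand-in for the stem/term chains described above."""
    toks = [set(s.lower().split()) for s in sentences]
    n = len(toks)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = len(toks[i] & toks[j])
    return A

def spectral_bisect(A):
    """Split the coherence graph in two using the sign of the Fiedler vector
    of the unnormalised graph Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, np.argsort(eigvals)[1]]  # eigenvector of the second-smallest eigenvalue
    return (fiedler >= 0).astype(int)

sentences = [
    "solar panels convert sunlight into electricity",
    "electricity from solar panels can be stored in batteries",
    "the novel follows a detective in victorian london",
    "the detective solves a mystery in london",
]
print(spectral_bisect(coherence_graph(sentences)))  # e.g. two coherent subdocuments
```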

14.
The rapid and extensive pervasion of information through the web has enhanced the diffusion of a huge amount of unstructured natural language textual resources. A great interest has arisen in the last decade in discovering, accessing and sharing such a vast source of knowledge. For this reason, processing very large data volumes in a reasonable time frame is becoming a major challenge and a crucial requirement for many commercial and research fields. Distributed systems, computer clusters and parallel computing paradigms have been increasingly applied in recent years, since they introduce significant improvements in computing performance in data-intensive contexts, such as Big Data mining and analysis. Natural Language Processing, and particularly the tasks of text annotation and key feature extraction, is an application area with high computational requirements; therefore, these tasks can significantly benefit from parallel architectures. This paper presents a distributed framework for crawling web documents and running Natural Language Processing tasks in a parallel fashion. The system is based on the Apache Hadoop ecosystem and its parallel programming paradigm, called MapReduce. Specifically, we implemented a MapReduce adaptation of a GATE application and framework (a widely used open source tool for text engineering and NLP). A validation is also offered by using the solution to extract keywords and keyphrases from web documents in a multi-node Hadoop cluster. An evaluation of performance scalability has been conducted against a real corpus of web pages and documents.
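A generic Hadoop Streaming-style mapper/reducer sketch for counting candidate keywords in web documents, not the GATE-based adaptation described in the paper; the stopword list and tokenization are placeholders. Under Hadoop Streaming such a script is typically wired in through the streaming jar's -mapper and -reducer options; here one file serves both roles depending on a command-line argument.

```python
#!/usr/bin/env python3
"""Minimal Hadoop Streaming-style mapper/reducer for keyword counting (illustrative only)."""
import re
import sys

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for"}

def mapper(lines):
    """Emit one tab-separated (token, 1) pair per non-stopword token."""
    for line in lines:
        for token in re.findall(r"[a-z]+", line.lower()):
            if token not in STOPWORDS and len(token) > 2:
                yield f"{token}\t1"

def reducer(lines):
    """Sum counts per token; assumes Hadoop has sorted the mapper output by key."""
    current, total = None, 0
    for line in lines:
        token, count = line.rstrip("\n").split("\t")
        if token != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = token, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "map"
    records = mapper(sys.stdin) if stage == "map" else reducer(sys.stdin)
    for record in records:
        print(record)
```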

15.
Secure broadcasting of web documents is becoming a crucial requirement for many web-based applications. Under the broadcast document dissemination strategy, a web document source periodically broadcasts (portions of) its documents to a potentially large community of users, without the need for explicit requests. By secure broadcasting, we mean that the delivery of information to users must obey the access control policies of the document source. Traditional access control mechanisms that have been adapted for XML documents, however, do not address the performance issues inherent in access control. In this paper, a labeling scheme is proposed to support rapid reconstruction of XML documents in the context of a well-known method called XML pool encryption. The proposed labeling scheme supports the speedy inference of structure information in all portions of the document. The binary representation of the proposed labeling scheme is also investigated. Experimental results show that the proposed labeling scheme is efficient in searching for the location of decrypted information.
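The paper's specific labeling scheme and its binary encoding are not reproduced here; the sketch below only illustrates the general idea, assumed to underlie such schemes, that path-style (Dewey-like) labels let structural relationships be inferred from the labels alone, without traversing the document itself.

```python
def assign_labels(tree, prefix=()):
    """Assign Dewey-style path labels to an XML-like tree given as (tag, [children]);
    illustrative of path labels in general, not the paper's scheme."""
    tag, children = tree
    labels = {prefix: tag}
    for i, child in enumerate(children, start=1):
        labels.update(assign_labels(child, prefix + (i,)))
    return labels

def is_ancestor(a, b):
    """Structural relationships are inferred from the labels alone."""
    return len(a) < len(b) and b[:len(a)] == a

def are_siblings(a, b):
    return len(a) == len(b) and a[:-1] == b[:-1] and a != b

doc = ("report", [("header", []),
                  ("body", [("section", []), ("section", [])])])
labels = assign_labels(doc)
for path, tag in labels.items():
    print(".".join(map(str, path)) or "root", tag)
print(is_ancestor((2,), (2, 1)), are_siblings((2, 1), (2, 2)))
```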

16.
With the number of documents describing real-world events and event-oriented information needs rapidly growing on a daily basis, the need for efficient retrieval and concise presentation of event-related information is becoming apparent. Nonetheless, the majority of information retrieval and text summarization methods rely on shallow document representations that do not account for the semantics of events. In this article, we present event graphs, a novel event-based document representation model that filters and structures the information about events described in text. To construct the event graphs, we combine machine learning and rule-based models to extract sentence-level event mentions and determine the temporal relations between them. Building on event graphs, we present novel models for information retrieval and multi-document summarization. The information retrieval model measures the similarity between queries and documents by computing graph kernels over event graphs. The extractive multi-document summarization model selects sentences based on the relevance of the individual event mentions and the temporal structure of events. Experimental evaluation shows that our retrieval model significantly outperforms well-established retrieval models on event-oriented test collections, while the summarization model outperforms competitive models from shared multi-document summarization tasks.
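A toy sketch of comparing documents through event graphs, assuming each graph is reduced to a multiset of (event, temporal relation, event) edges and similarity is the count of shared labelled edges; the paper's graph kernels and extraction models are not reproduced.

```python
from collections import Counter

def event_graph(mentions, temporal_relations):
    """Represent a document's events as labelled edges (event_a, relation, event_b);
    a simplified stand-in for the event graphs described above."""
    return Counter((a, rel, b) for a, rel, b in temporal_relations
                   if a in mentions and b in mentions)

def edge_kernel(g1, g2):
    """A simple common-edge kernel between two event graphs (not the paper's exact
    graph kernel): sum of matched labelled-edge multiplicities."""
    return sum(min(c, g2[edge]) for edge, c in g1.items())

doc1 = event_graph({"earthquake", "evacuation", "aid"},
                   [("earthquake", "before", "evacuation"),
                    ("evacuation", "before", "aid")])
doc2 = event_graph({"earthquake", "aid"},
                   [("earthquake", "before", "aid")])
query = event_graph({"earthquake", "evacuation"},
                    [("earthquake", "before", "evacuation")])
print(edge_kernel(query, doc1), edge_kernel(query, doc2))  # doc1 matches the query better
```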

17.

Given an information need and the corresponding set of documents retrieved, it is known that user assessments for such documents differ from one user to another. One frequent reason put forward is the discordance between text complexity and user reading fluency. We explore this relationship from three different dimensions: quantitative features, subjectively assessed difficulty, and reader/text factors. In order to evaluate quantitative features, we wondered whether it is possible to find differences between documents that are evaluated by the user and those that are ignored, according to the complexity of the document. Secondly, a task related to the evaluation of the relevance of short texts is proposed. To this end, users evaluated the relevance of these short texts by answering 20 queries. Document complexity and relevance assessments had previously been made by human experts. Then, the relationship between participants’ assessments, experts’ assessments and document complexity is studied. Finally, a third experiment was performed under the prism of neuro-Information Retrieval: while the participants were monitored with an electroencephalogram (EEG) headset, we tried to find a correlation among the EEG signal, text difficulty and the level of comprehension of texts being read during the EEG recording. In light of the results obtained, we found some weak evidence that users responded to queries according to text complexity and their reading fluency. For the second and third groups of experiments, we administered a sub-test from the Woodcock Reading Mastery Test to ensure that participants had a roughly average reading fluency. Nevertheless, we think that additional variables should be studied in the future in order to achieve a sound explanation of the interaction between text complexity and user profile.


18.
The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is a process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization is to produce a summary delivering the majority of the information content from a set of documents about an explicit or implicit main topic. In our study we focus on sentence-based extractive document summarization. We propose a generic document summarization method based on sentence clustering. The proposed approach continues the sentence-clustering based extractive summarization methods proposed in Alguliev, R. M., Aliguliyev, R. M., & Bagirov, A. M. (2005), Global optimization in the summarization of text documents, Automatic Control and Computer Sciences 39, 42–47; Aliguliyev, R. M. (2006), A novel partitioning-based clustering method and generic document summarization, in Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI–IAT 2006 Workshops) (WI–IATW’06), 18–22 December, pp. 626–629, Hong Kong, China; Alguliev, R. M., & Alyguliev, R. M. (2007), Summarization of text-based documents with a determination of latent topical sections and information-rich sentences, Automatic Control and Computer Sciences 41, 132–140; and Aliguliyev, R. M. (2007), Automatic document summarization by sentence extraction, Journal of Computational Technologies 12, 5–15. The purpose of the present paper is to show that the summarization result depends not only on the optimized function but also on the similarity measure. The experimental results on open benchmark datasets from DUC01 and DUC02 show that our proposed approach can improve performance compared to state-of-the-art summarization approaches.
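A simplified sketch of sentence-clustering extractive summarization: sentences are clustered with K-means on cosine-normalised term vectors and the sentence nearest each centroid is extracted. The clustering and selection steps here are assumptions for illustration, not the optimisation-based method of the cited papers.

```python
import numpy as np
from collections import Counter

def sentence_vectors(sentences):
    """Bag-of-words term-frequency vectors over a shared vocabulary."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(sentences), len(vocab)))
    for r, s in enumerate(sentences):
        for w, c in Counter(s.lower().split()).items():
            X[r, index[w]] = c
    return X

def cluster_summarize(sentences, k=2, iters=20, seed=0):
    """Cluster sentences on cosine-normalised vectors and extract the sentence
    nearest to each centroid (a simplified sketch)."""
    X = sentence_vectors(sentences)
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)  # cosine similarity to each centre
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    summary = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if len(members):
            summary.append(sentences[members[np.argmax(X[members] @ centers[j])]])
    return summary

sents = ["The storm flooded the coastal town.",
         "Flood waters damaged homes in the town.",
         "Officials promised financial aid.",
         "Aid packages will reach residents next week."]
print(cluster_summarize(sents, k=2))
```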

19.
Cluster analysis is a primary tool for detecting anomalous behavior in real-world data such as web documents, medical records of patients or other personal data. Most existing methods for document clustering are based on the classical vector-space model, which represents each document by a fixed-size vector of weighted key terms often referred to as key phrases. Since vector representations of documents are frequently very sparse, inverted files are used to prevent a tremendous computational overload which may be caused in large and diverse document collections such as pages downloaded from the World Wide Web. In order to reduce computation costs and space complexity, many popular methods for clustering web documents, including those using inverted files, usually assume a relatively small prefixed number of clusters. We propose several new crisp and fuzzy approaches based on the cosine similarity principle for clustering documents that are represented by variable-size vectors of key phrases, without limiting the final number of clusters. Each entry in a vector consists of two fields. The first field refers to a key phrase in the document and the second denotes an importance weight associated with this key phrase within the particular document. Removing the restriction on the total number of clusters may moderately increase computing costs, but on the other hand it improves the method’s performance in classifying incoming vectors as normal or abnormal, based on their similarity to the existing clusters. All the procedures presented in this work are characterized by two features: (a) the number of clusters is not restricted to some relatively small prefixed number, i.e., an arbitrary new incoming vector which is not similar to any of the existing cluster centers necessarily starts a new cluster, and (b) a vector with multiple appearances n in the training set is counted as n distinct vectors rather than a single vector. These features are the main reasons for the high-quality performance of the proposed algorithms. We later describe them in detail and show their implementation in a real-world application from the area of web activity monitoring, in particular, detecting anomalous documents downloaded from the internet by users with abnormal information interests.
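A minimal sketch of cosine-based clustering of variable-size key-phrase vectors without a prefixed number of clusters: a vector that is not similar enough to any existing centre starts a new cluster. The similarity threshold and the centre-update rule are illustrative assumptions, not the paper's crisp or fuzzy procedures.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse key-phrase vectors (dict: phrase -> weight)."""
    dot = sum(w * b.get(k, 0.0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def incremental_cluster(vectors, threshold=0.3):
    """Assign each incoming sparse vector to the most similar cluster centre, or
    start a new cluster if no centre is similar enough (no fixed number of clusters)."""
    centers, assignments = [], []
    for v in vectors:
        sims = [cosine(v, c) for c in centers]
        if sims and max(sims) >= threshold:
            j = sims.index(max(sims))
            # Merge the vector into the centre by summing key-phrase weights.
            for k, w in v.items():
                centers[j][k] = centers[j].get(k, 0.0) + w
        else:
            centers.append(dict(v))
            j = len(centers) - 1
        assignments.append(j)
    return assignments

docs = [{"stock": 2.0, "market": 1.0},
        {"market": 1.0, "shares": 1.0},
        {"explosives": 1.0, "detonator": 2.0},  # dissimilar: starts its own cluster
        {"stock": 1.0, "shares": 2.0}]
print(incremental_cluster(docs))  # [0, 0, 1, 0] -- the third vector is anomalous
```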

20.
Text mining is the process of discovering the content and meaning contained in text. The vector space model is a mature text representation model in text mining, and the choice of feature terms has a major influence on its performance. Previous research, however, has focused only on the feature terms that appear in the text itself, ignoring the correlations between documents, a limitation that prevents these feature terms from providing rich semantic information. The Web 2.0 wave that began in 2005 has swept across the Internet, and social tagging, which emerged against this background, has become a semantic bridge between related documents, bringing new vitality to text mining. On this basis, this paper uses the IRF (Iterative Reinforcement Framework) model to generate rich feature terms for documents, greatly improving document retrieval rates.
