Similar Documents
20 similar documents found (search time: 15 ms)
1.
Design and Implementation of an Automatic Chinese Document Classification System (total citations: 30; self-citations: 4; citations by others: 30)
Automatic document classification is an important research topic in the field of information processing. This paper describes the design and implementation of an automatic classification system for Chinese documents, focusing on how the main technical problems in the implementation were handled, such as the text classification model, feature extraction, and dictionary construction.

2.
Content-based e-mail filtering is essentially a binary text classification problem. Feature selection reduces the feature space before classification, lowering the classifier's computational and storage costs while filtering out some noise to improve accuracy; it is therefore a key factor in both the accuracy and the timeliness of e-mail filtering. However, feature selection algorithms perform differently under the same evaluation setting, and they depend on the classifier and on the distribution of the dataset. Taking the characteristics of e-mail filtering into account, this work evaluates and analyzes feature selection algorithms for e-mail filtering along three dimensions: classifier adaptability, dataset dependency, and time complexity. Experimental results show that odds ratio and document frequency achieve higher spam-recognition accuracy and shorter running time when used for e-mail filtering.
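The document-frequency criterion evaluated in this abstract can be sketched in a few lines (a minimal illustration, not the paper's implementation; the function name and toy data are ours):

```python
from collections import Counter

def select_by_document_frequency(docs, k):
    """Keep the k terms that occur in the most documents.

    docs: list of token lists; returns the selected vocabulary.
    """
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    return [t for t, _ in df.most_common(k)]

docs = [["cheap", "pills", "now"],
        ["meeting", "notes", "now"],
        ["cheap", "offer", "now"]]
print(select_by_document_frequency(docs, 2))  # ['now', 'cheap']
```

Document frequency is attractive for filtering precisely because of the time-complexity dimension discussed above: it needs only one pass over the corpus and no class labels.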

3.
Textual databases are useful sources of information and knowledge, and if these are well utilised then issues related to future project management and product or service quality improvement may be resolved. A large part of corporate information, approximately 80%, is available in textual data formats. Text classification techniques are well known for managing on-line sources of digital documents. The identification of key issues discussed within textual data and their classification into two different classes could help decision makers or knowledge workers to manage their future activities better. This research is relevant for most text-based documents and is demonstrated on Post Project Reviews (PPRs), which are a valuable source of information and knowledge. The application of textual data mining techniques for discovering useful knowledge and classifying textual data into different classes is a relatively new area of research. The research work presented in this paper is focused on the use of hybrid applications of text mining or textual data mining techniques to classify textual data into two different classes. The research applies clustering techniques at the first stage and Apriori Association Rule Mining at the second stage. Apriori Association Rule Mining is applied to generate Multiple Key Term Phrasal Knowledge Sequences (MKTPKS), which are later used for classification. Additionally, studies were made to improve the classification accuracies of the classifiers, i.e. C4.5, k-NN, naïve Bayes and Support Vector Machines (SVMs). The classification accuracies were measured and the results compared with those of a single-term based classification model. The methodology proposed could be used to analyse any free-formatted textual data, and in the current research it has been demonstrated on an industrial dataset consisting of Post Project Reviews (PPRs) collected from the construction industry. The data or information available in these reviews is codified in multiple different formats, but in the current research scenario only free-formatted text documents are examined. Experiments showed that the performance of classifiers improved through adopting the proposed methodology.

4.
Selecting Chinese feature terms is part of Chinese-text preprocessing and has a strong influence on document classification. After Chinese word segmentation, representing documents with a vector model built from feature terms leads to sparse, high-dimensional feature spaces, which degrades the performance and precision of document classification. Building on an analysis and summary of several classical text feature selection methods, this paper implements a feature selection method (DC) based primarily on document frequency and corrected by the frequency and distribution of feature terms across the document collection. Using macro-F and micro-F scores as evaluation metrics, comparative experiments show that this method selects features better than the classical methods.
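The DC idea above, document frequency corrected by collection-wide term frequency, can be sketched as follows; the abstract does not give the exact correction formula, so the log-frequency adjustment here is an assumption, and all names are ours:

```python
import math
from collections import Counter

def dc_scores(docs):
    """Score each term by document frequency, adjusted by its total
    frequency across the collection. The exact correction used in the
    paper is not given in the abstract; this weighting is an assumption.
    """
    df, tf = Counter(), Counter()
    for doc in docs:
        tf.update(doc)
        df.update(set(doc))          # each term counted once per document
    n_docs = len(docs)
    return {t: (df[t] / n_docs) * math.log(1 + tf[t]) for t in df}

docs = [["cat", "dog"], ["cat", "fish"], ["cat", "dog"]]
scores = dc_scores(docs)
```

Terms that are both widespread (high DF) and frequent rank highest, which is the intent of correcting raw document frequency with the frequency distribution.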

5.
This paper presents an innovative solution for modelling distributed adaptive systems in biomedical environments. We present an original TCBR-HMM (Text Case-Based Reasoning-Hidden Markov Model) for biomedical text classification based on document content. The main goal is to propose a classifier that is more effective than current methods in this environment, where the model needs to be adapted to new documents in an iterative learning framework. To demonstrate its effectiveness, we include a set of experiments performed on the OHSUMED corpus. Our classifier is compared with naive Bayes and SVM techniques, commonly used in text classification tasks. The results suggest that the TCBR-HMM model is indeed more suitable for document classification. The model is empirically and statistically comparable to the SVM classifier and outperforms it in terms of time efficiency.

6.
Text mining techniques include categorization of text, summarization, topic detection, concept extraction, search and retrieval, document clustering, etc. Each of these techniques can be used to find non-trivial information in a collection of documents. Text mining can also be employed to detect a document's main topic or theme, which is useful in creating a taxonomy from the document collection. Areas of application for text mining include publishing, media, telecommunications, marketing, research, healthcare, medicine, etc. Text mining has also been applied in many applications on the World Wide Web for developing recommendation systems. We propose here a set of criteria to evaluate the effectiveness of text mining techniques in an attempt to facilitate the selection of an appropriate technique.

7.
Computing Relative Weights of Text Index Terms and Its Application (total citations: 4; self-citations: 0; citations by others: 4)
The method used to weight text index terms determines the accuracy of text classification. This paper proposes a relative weighting method for text index terms: the weight of an index term is computed from the ratio between its frequency in the given document and its average frequency over the whole document space. The method effectively improves the accuracy with which index terms identify document content.
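The relative weighting scheme described above can be sketched directly: weight(t, d) = tf(t, d) / mean tf of t over all documents. Averaging over the whole collection (rather than only over documents containing the term) is our assumption; the abstract does not pin this down.

```python
from collections import Counter

def relative_weights(docs):
    """For each document, weight each term by its in-document frequency
    divided by its average frequency across the whole collection."""
    tfs = [Counter(doc) for doc in docs]
    total = Counter()
    for c in tfs:
        total.update(c)
    n = len(docs)
    return [{t: c[t] / (total[t] / n) for t in c} for c in tfs]

docs = [["a", "a", "b"], ["a", "b", "b", "b"]]
weights = relative_weights(docs)
```

A term weighted above 1 is over-represented in that document relative to the corpus, which is what makes it a good content discriminator.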

8.
Most of the research on text categorization has focused on classifying text documents into a set of categories with no structural relationships among them (flat classification). However, in many information repositories documents are organized in a hierarchy of categories to support a thematic search by browsing topics of interests. The consideration of the hierarchical relationship among categories opens several additional issues in the development of methods for automated document classification. Questions concern the representation of documents, the learning process, the classification process and the evaluation criteria of experimental results. They are systematically investigated in this paper, whose main contribution is a general hierarchical text categorization framework where the hierarchy of categories is involved in all phases of automated document classification, namely feature selection, learning and classification of a new document. An automated threshold determination method for classification scores is embedded in the proposed framework. It can be applied to any classifier that returns a degree of membership of a document to a category. In this work three learning methods are considered for the construction of document classifiers, namely centroid-based, naïve Bayes and SVM. The proposed framework has been implemented in the system WebClassIII and has been tested on three datasets (Yahoo, DMOZ, RCV1) which present a variety of situations in terms of hierarchical structure. Experimental results are reported and several conclusions are drawn on the comparison of the flat vs. the hierarchical approach as well as on the comparison of different hierarchical classifiers. The paper concludes with a review of related work and a discussion of previous findings vs. our findings.
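One simple way to automate threshold determination for classification scores, as discussed above, is to pick the threshold that maximizes F1 on held-out validation scores. This is only an illustrative strategy under our own assumptions; the paper's actual method may differ:

```python
def best_threshold(scores, labels):
    """Return the score threshold maximizing F1 on validation data.

    scores: per-document membership scores for one category;
    labels: 1 if the document truly belongs to the category, else 0.
    """
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

print(best_threshold([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0]))  # 0.8
```

As the abstract notes, such a procedure applies to any classifier that returns a degree of membership, since it only consumes scores.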

9.
This paper presents a novel over-sampling method based on document content to handle the class imbalance problem in text classification. The new technique, COS-HMM (Content-based Over-Sampling HMM), includes an HMM that is trained with a corpus in order to create new samples according to current documents. The HMM is treated as a document generator which can produce synthetic instances based on what it was trained with. To demonstrate its effectiveness, COS-HMM is tested with a Support Vector Machine (SVM) on two medical document corpora (OHSUMED and TREC Genomics) and compared with the Random Over-Sampling (ROS) and SMOTE techniques. Results suggest that the application of over-sampling strategies increases the global performance of the SVM in classifying documents. Based on the empirical and statistical studies, the new method clearly outperforms the baseline method (ROS) and offers greater performance than SMOTE in the majority of tested cases.
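For reference, the ROS baseline that COS-HMM is compared against simply duplicates randomly chosen minority-class documents until the classes balance. A minimal sketch (our own naming; binary labels assumed):

```python
import random

def random_over_sample(docs, labels, seed=0):
    """Duplicate random minority-class documents until all classes
    reach the size of the largest class (Random Over-Sampling)."""
    rng = random.Random(seed)
    by_label = {}
    for d, y in zip(docs, labels):
        by_label.setdefault(y, []).append(d)
    target = max(len(group) for group in by_label.values())
    out = []
    for y, group in by_label.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for d in group + extra:
            out.append((d, y))
    return out
```

Unlike ROS, which only repeats existing documents, the HMM generator described above produces new synthetic documents, which is the paper's motivation for the content-based approach.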

10.
Research on Web Text Data Mining (total citations: 8; self-citations: 0; citations by others: 8)
The World Wide Web is a vast, widely distributed, global information service center covering news, advertising, consumer information, financial management, education, government, e-commerce, and many other information services. Web text mining systems are an important application of mining technology; they refer to the process of automatically assigning Web pages to content categories under a given classification scheme.

11.
In this paper, we present an effective approach for grouping text lines in online handwritten Japanese documents by combining temporal and spatial information. With decision functions optimized by supervised learning, the approach has few artificial parameters and utilizes little prior knowledge. First, the strokes in the document are grouped into text line strings according to off-stroke distances. Each text line string, which may contain multiple lines, is segmented by optimizing a cost function trained by the minimum classification error (MCE) method. At the temporal merge stage, over-segmented text lines (caused by stroke classification errors) are merged with a support vector machine (SVM) classifier for making merge/non-merge decisions. Last, a spatial merge module corrects the segmentation errors caused by delayed strokes. Misclassified text/non-text strokes (stroke type classification precedes text line grouping) can be corrected at the temporal merge stage. To evaluate the performance of text line grouping, we provide a set of performance metrics for evaluating from multiple aspects. In experiments on a large number of free-form documents in the Tokyo University of Agriculture and Technology (TUAT) Kondate database, the proposed approach achieves an entity detection metric (EDM) rate of 0.8992 and an edit-distance rate (EDR) of 0.1114. For grouping of pure text strokes, the performance reaches an EDM of 0.9591 and an EDR of 0.0669.

12.
A Multi-Level Text Classification Method Based on the Vector Space Model (total citations: 37; self-citations: 2; citations by others: 37)
This paper studies and improves the term-weighting scheme of the classic vector space model (VSM) and, on that basis, proposes a multi-level text classification method. The categories are organized into a tree according to their hierarchical relationships, and all training documents of a category are merged into a single class document; when building the model for each category, comparisons are made only among class documents under the same node at the same level. To classify a document automatically, the method first finds the matching top-level category starting from the root, then recurses downward until it reaches the matching leaf category. Experiments and a production system show that the method achieves high precision and recall.
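The top-down descent described above can be sketched with cosine similarity between the input document and the merged class documents at each level. The tree layout and all names here are our own illustration, not the paper's code:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(v * b.get(t, 0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(doc, tree):
    """Walk the category tree top-down; at each level pick the child
    whose merged class document is most similar to the input document."""
    vec, node, path = Counter(doc), tree, []
    while node.get("children"):
        name, node = max(node["children"].items(),
                         key=lambda kv: cosine(vec, kv[1]["vector"]))
        path.append(name)
    return path

tree = {"children": {
    "sports": {"vector": Counter(["ball", "game", "team"]),
               "children": {
                   "football": {"vector": Counter(["ball", "goal"])},
                   "tennis":   {"vector": Counter(["ball", "racket"])}}},
    "science": {"vector": Counter(["theory", "experiment"])}}}
print(classify(["ball", "goal", "team"], tree))  # ['sports', 'football']
```

Comparing only siblings under one node, as the abstract specifies, keeps each decision cheap even when the full taxonomy is large.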

13.
Searching for similar documents plays an important role in text mining and document management. In similar-document search, as in other text mining applications, the focus is generally on document classification: determining the class or category to which documents belong. The aim of the present study is to investigate the case of documents that belong to more than one category. The system used in the present study is a similar-document search system based on fuzzy clustering, extended to handle documents belonging to more than one category. The proposed approach solves the multi-category problem in two stages. The first stage finds the documents that belong to more than one category; the second determines the categories to which those documents belong. For these two aims, the -threshold Fuzzy Similarity Classification Method (-FSCM) and the Multiple Categories Vector Method (MCVM) are proposed, respectively. Experimental results showed that the proposed system can efficiently distinguish documents that belong to more than one category, and that in determining which documents belong to which classes it performs better than the traditional approach.

14.
Through a study of online text classification for filtering spam text streams, this paper proposes a new conditional-probability ensemble method. Texts are represented as token sequences, classification knowledge is stored in an index structure, and online training and online classification algorithms for the model are designed and implemented. Several text features are extracted from e-mails and mobile-phone short messages, and spam filtering experiments are carried out on the TREC07P e-mail corpus and on a corpus of real Chinese short messages. The experimental results show that the proposed method achieves very good spam filtering performance.
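The abstract does not specify the conditional-probability formula, so as a stand-in, an incrementally trainable multinomial naive Bayes filter illustrates the online train/classify loop; the class name and toy tokens are ours:

```python
import math
from collections import Counter

class OnlineSpamFilter:
    """Incrementally trainable multinomial naive Bayes spam filter.
    (A stand-in sketch: the paper's conditional-probability ensemble
    is not specified in the abstract.)"""

    def __init__(self):
        self.words = {0: Counter(), 1: Counter()}   # 0 = ham, 1 = spam
        self.docs = Counter()

    def train(self, tokens, label):
        """Online training: update counts from one labeled message."""
        self.words[label].update(tokens)
        self.docs[label] += 1

    def classify(self, tokens):
        """Return the label with the higher log-posterior (Laplace smoothed)."""
        vocab = len(set(self.words[0]) | set(self.words[1])) or 1
        scores = {}
        for y in (0, 1):
            total = sum(self.words[y].values())
            prior = math.log((self.docs[y] + 1) / (sum(self.docs.values()) + 2))
            scores[y] = prior + sum(
                math.log((self.words[y][t] + 1) / (total + vocab))
                for t in tokens)
        return max(scores, key=scores.get)

f = OnlineSpamFilter()
f.train(["win", "cash", "now"], 1)
f.train(["meeting", "at", "noon"], 0)
print(f.classify(["win", "cash"]))  # 1
```

Because training only increments counters, the model can keep learning as new messages stream in, which is the point of the online setting above.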

15.
Increasingly, HTML documents are dynamically generated by interactive Web services. To ensure that the client is presented with the newest versions of such documents, it is customary to disable client caching, causing a seemingly inevitable performance penalty. In the system, dynamic HTML documents are composed of higher-order templates that are plugged together to construct complete documents. We show how to exploit this feature to provide automatic fine-grained caching of document templates, based on the service source code. A service transmits not the full HTML document but instead a compact JavaScript recipe for a client-side construction of the document based on a static collection of fragments that can be cached by the browser in the usual manner. We compare our approach with related techniques and demonstrate on a number of realistic benchmarks that the size of the transmitted data and the latency may be reduced significantly.

16.
Pang Chao, Yin Chuanhuan. Computer Science, 2018, 45(1): 144-147, 178
Automatic text summarization is an important research topic in natural language processing. By approach it divides into extractive and abstractive summarization; abstractive summarization re-expresses the central content and concepts of the original document in a different form, and the words in the generated summary need not appear in the original. This paper proposes a classification-based abstractive summarization model. The model combines a recurrent-neural-network encoder-decoder with a classification component and makes full use of supervision, thereby capturing more properties of the summary; with an attention mechanism in the encoder-decoder, the model pinpoints the central content of the original text more precisely. The two parts of the model can be trained and optimized jointly on large datasets, and training is simple and effective. The proposed model shows excellent automatic summarization performance.

17.
In a world with vast information overload, well-optimized retrieval of relevant information has become increasingly important. Dividing large documents that span multiple topics into sets of coherent subdocuments facilitates the information retrieval process. This paper presents a novel technique to automatically subdivide a textual document into consistent components based on a coherence quantification function. This function is based on stem or term chains linking document entities, such as sentences or paragraphs, based on the recurrence of stems or terms. Applying this function to a document results in a coherence graph of the document linking its entities. Spectral graph partitioning techniques are used to divide this coherence graph into a number of subdocuments. A novel technique is introduced to obtain the most suitable number of subdocuments. These subdocuments are an aggregation of (not necessarily adjacent) entities. Performance tests are conducted in test environments based on standardized datasets to prove the algorithm's capabilities. The relevance of these techniques for information retrieval and text mining is discussed.
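The coherence-graph construction described above can be sketched with plain term overlap standing in for the paper's stem chains (real use would apply stemming and stop-word removal first; names and data are ours):

```python
from itertools import combinations

def coherence_graph(sentences):
    """Link sentences i and j with weight = number of shared terms.

    Returns {(i, j): weight} for pairs that share at least one term.
    """
    term_sets = [set(s.lower().split()) for s in sentences]
    edges = {}
    for i, j in combinations(range(len(term_sets)), 2):
        shared = term_sets[i] & term_sets[j]
        if shared:
            edges[(i, j)] = len(shared)
    return edges

sents = ["the cat sat", "the cat ran", "dogs bark"]
print(coherence_graph(sents))  # {(0, 1): 2}
```

Partitioning this weighted graph, e.g. with the spectral methods the abstract mentions, then yields the coherent subdocuments.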

18.
The use of declarative languages in digital TV systems, as well as IPTV systems, facilitates the creation of interactive applications. However, when an application becomes more complex, with many user interactions, for example, the hypermedia document that describes that application becomes bigger, having many lines of XML code. Thus, specification reuse is crucial for an efficient application development process. This paper proposes the XTemplate 3.0 language, which allows the creation of NCL hypermedia composite templates. Templates define generic structures of nodes and links to be added to a document composition, providing spatio-temporal synchronization semantics to it. The use of hypermedia composite templates aims at facilitating the authoring work, allowing the reuse of hypermedia document common specifications. Using composite templates, hypermedia documents become simpler and easier to be created. The 3.0 version of XTemplate adds new facilities to the XTemplate language, such as the possibility of specifying presentation information, the attribution of values to variables and connector parameters during template processing time and the template ability to extend other templates. As an application of XTemplate, this work extends the NCL 3.0 declarative language with XTemplate, adding semantics to NCL contexts and providing document structure reuse. In addition, this paper also presents two authoring tools: the template processor and the wizard to create NCL documents using templates. The wizard tool allows the author to choose a template included in a template base and create an NCL document using that template. The template processor transforms an NCL document using templates into a standard NCL 3.0 document according to digital TV and IPTV standards.

19.
Producing health-status management documents for satellite constellations involves querying and computing many telemetry parameters, strict formatting requirements, a heavy editing workload, and long manual turnaround. To address these problems, a method for automatically generating satellite-constellation health-status management documents is proposed. The basic data types contained in the documents are categorized and analyzed, storage rules for configuration files are defined, document templates are customized, and an automatic document generation algorithm uses the templates and the relevant parameters to produce the data summary document. The method enables knowledge reuse and generation of common content during document preparation and establishes a standardized, effective document preparation workflow.

20.
Designers usually begin with a database when initiating a new design, looking through design documents for historical design solutions and available experience and techniques. This database is a collection of labeled design documents under a few predefined categories. However, little work has been done on labeling a relatively small number of design documents for information organization so that most of the design documents in this database can be automatically categorized. This paper initiates a study on this topic and proposes a methodology in four steps: design document collection, document labeling, finalization of document labeling, and categorization of the design database. The discussion in this paper focuses on the first three steps. The key of this method is to collect a relatively small number of design documents for manual labeling and to unify the effective labeling results into final labels through labeling-agreement analysis and a text classification experiment. These labeled documents are then used as training samples to construct classifiers, which can automatically give appropriate labels to each design document. With this method, design documents are labeled according to the consensus of the operators' understanding, and design information can be organized in a comprehensive and universally accessible way. A case study of labeling robotic design documents is used to demonstrate the proposed methodology. Experimental results show that this method can significantly benefit efficient design information search.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号