Similar Documents
20 similar documents found.
1.
This correspondence presents a novel hierarchical clustering approach for knowledge document self-organization, particularly for patent analysis. Current keyword-based methodologies for document content management tend to be inconsistent and ineffective when partial meanings of the technical content are used for cluster analysis. Thus, a new methodology that automatically interprets and clusters knowledge documents using an ontology schema is presented. Moreover, a fuzzy logic control approach is used to match suitable document cluster(s) to given patents based on their derived ontological semantic webs. Finally, three case studies are used to test the approach. The first analyzed and clustered 100 patents on chemical mechanical polishing retrieved from the World Intellectual Property Organization (WIPO). The second analyzed and clustered 100 patent news articles retrieved from online Web sites. The third analyzed and clustered 100 patents on radio-frequency identification retrieved from WIPO. The results show that the fuzzy ontology-based document clustering approach outperforms the K-means approach in precision, recall, F-measure, and Shannon's entropy.
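Evaluation measures such as the Shannon entropy mentioned above can be computed once each cluster's documents are tagged with their true categories. A minimal sketch of size-weighted cluster entropy (lower means purer clusters); the label values are illustrative, not from the paper:

```python
from collections import Counter
from math import log2

def cluster_entropy(labels_by_cluster):
    """Size-weighted average Shannon entropy over clusters.

    labels_by_cluster: list of lists; each inner list holds the true
    class labels of the documents assigned to one cluster.
    """
    total = sum(len(c) for c in labels_by_cluster)
    entropy = 0.0
    for cluster in labels_by_cluster:
        counts = Counter(cluster)
        h = -sum((n / len(cluster)) * log2(n / len(cluster))
                 for n in counts.values())
        entropy += (len(cluster) / total) * h
    return entropy

# A perfectly pure clustering has entropy 0.
print(cluster_entropy([["a", "a"], ["b", "b"]]))   # 0.0
# A maximally mixed two-class clustering has entropy 1 bit.
print(cluster_entropy([["a", "b"], ["a", "b"]]))   # 1.0
```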

2.
Since engineering design is heavily informational, engineers want to retrieve existing engineering documents accurately during the product development process. However, engineers have difficulty searching for documents because of low retrieval accuracy. One reason for this is the limitation of existing document ranking approaches, in which relationships between terms in documents are not considered when assessing the relevance of retrieved documents. Therefore, we propose a new ranking approach that provides a more accurate evaluation of document relevance to a given query. Our approach exploits domain ontology to consider relationships among terms in the relevance scoring process. Based on the domain ontology, the semantics of a document are represented by a graph (called a Document Semantic Network), and the proposed relation-based weighting schemes are then used to evaluate the graph and calculate the document relevance score. Our ranking approach also considers user interests and search intent in order to provide personalized services. The experimental results show that the proposed approach outperforms existing ranking approaches. A precise graph representation of document semantics and multiple relation-based weighting schemes are the key factors underlying the notable improvement.

3.
于琦  周勇 《微机发展》2008,18(2):34-37
An ontology is an explicit specification of a conceptualization and can precisely describe a concept system and domain knowledge. To identify data in heterogeneous data sources and integrate them in a semantically coherent way, an ontology-based method for integrating heterogeneous data sources is proposed. First, the data in each source are described as XML documents; next, the Document Type Definition (DTD) of each XML document is converted into a DIM data model representation; finally, through semantic clustering, global schema generation, and related steps, ontology-based semantic integration of the XML documents is achieved. As its ontology library, the method uses an English lexical database grounded in cognitive linguistics and jointly designed by psychologists, linguists, and computer engineers at Princeton University. It can effectively identify data with equivalent or similar semantics across heterogeneous sources and thus integrate the data more accurately.

4.
Web document cluster analysis plays an important role in information retrieval by organizing large amounts of documents into a small number of meaningful clusters. Traditional web document clustering is based on the Vector Space Model (VSM), which takes into account only two levels of knowledge granularity (document and term) but ignores the bridging paragraph granularity. This two-level granularity may lead to unsatisfactory clustering results with "false correlation". To deal with this problem, a Hierarchical Representation Model with Multi-granularity (HRMM), which consists of a five-layer representation of data and a two-phase clustering process, is proposed based on granular computing and article structure theory. To handle the zero-valued similarity problem resulting from the sparse term-paragraph matrix, an ontology-based strategy and a tolerance-rough-set-based strategy are introduced into HRMM. By using granular computing, structural knowledge hidden in documents can be captured more efficiently and effectively in HRMM, and web document clusters of higher quality can thus be generated. Extensive experiments show that HRMM, HRMM with the tolerance-rough-set strategy, and HRMM with ontology all significantly outperform VSM and a representative non-VSM-based algorithm, WFP, in terms of F-Score.

5.
Ontology construction uses various data sources to build a new ontology semi-automatically, either from scratch or by extending and adapting existing ontologies. Most existing construction methods extract a large number of candidate concept terms from domain texts and a background corpus, and then select domain concepts from them to build an ontology. The Cluster-Merge algorithm first clusters the domain documents with the k-means algorithm, then constructs ontologies from the resulting document clusters, and finally merges these ontologies according to ontology similarity to obtain the final output ontology. Experiments show that the ontology produced by Cluster-Merge improves both recall and precision.
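The first step of Cluster-Merge, k-means clustering of the domain documents, can be sketched in plain numpy (a generic Lloyd-iteration k-means, not the paper's implementation; the toy vectors stand in for document term vectors):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: X is an (n_docs, n_terms) matrix of document vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each document to its nearest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each center as the mean of its documents.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two obvious document groups in a 2-term toy space.
X = np.array([[1.0, 0.0], [1.1, 0.0], [0.0, 1.0], [0.0, 1.2]])
labels, centers = kmeans(X, 2)
```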

6.
Most Chinese text clustering algorithms based on the vector space model suffer from high-dimensional sparse representations, ignore semantic relations between words, and provide no description of the resulting clusters. To address these problems, a Chinese text clustering algorithm using semantic lists (CTCAUSL) is proposed. The algorithm represents each text as a semantic list whose entries are the words occurring in that text, which reduces the dimensionality and avoids the sparsity problem; word-to-word similarity computation handles synonyms and near-synonyms; finally, the semantic lists are used to describe the clusters, making the clustering results more readable. Experimental results show that CTCAUSL performs well on large text collections and clearly improves the accuracy of Chinese text clustering.

7.
On ontology-driven document clustering using core semantic features
Incorporating semantic knowledge from an ontology into document clustering is an important but challenging problem. While numerous methods have been developed, the value of using such an ontology is still not clear. We show in this paper that an ontology can be used to greatly reduce the number of features needed to do document clustering. Our hypothesis is that polysemous and synonymous nouns are both relatively prevalent and fundamentally important for document cluster formation. We show that nouns can be efficiently identified in documents and that this alone provides improved clustering. We next show the importance of the polysemous and synonymous nouns in clustering and develop a unique approach that allows us to measure the information gain in disambiguating these nouns in an unsupervised learning setting. In so doing, we can identify a core subset of semantic features that represent a text corpus. Empirical results show that by using core semantic features for clustering, one can reduce the number of features by 90% or more and still produce clusters that capture the main themes in a text corpus.

8.
Text clustering based on a hybrid parallel genetic algorithm
Because the traditional K-Means algorithm is sensitive to the choice of initial cluster centers and easily falls into local optima, a text clustering method based on a hybrid parallel genetic algorithm is proposed. The method first represents the document collection in the vector space model and randomly selects initial cluster centers from the document vectors to form chromosomes. It then combines the efficiency of K-Means with the global optimization ability of a parallel genetic algorithm: crossover and mutation within each population, together with parallel evolution and intermarriage between populations, effectively avoid local optima. Experiments show that, compared with text clustering methods such as K-Means and a simple genetic algorithm, the proposed algorithm achieves higher accuracy and better global search ability.
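A minimal single-population sketch of the hybrid idea, combining mutation with one K-Means (Lloyd) refinement step per generation; this omits the paper's parallel multi-population evolution and intermarriage, and all parameters are illustrative:

```python
import numpy as np

def sse(X, centers):
    """Sum of squared distances from each point to its nearest center."""
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def ga_kmeans(X, k, pop_size=8, gens=30, seed=0):
    """Toy hybrid GA/K-Means: chromosomes are candidate center sets.

    Each generation mutates every chromosome and applies one Lloyd
    update, then keeps the fittest half of parents plus children.
    """
    rng = np.random.default_rng(seed)
    pop = [X[rng.choice(len(X), k, replace=False)] for _ in range(pop_size)]
    for _ in range(gens):
        children = []
        for c in pop:
            child = c + rng.normal(0, 0.05, c.shape)         # mutation
            labels = ((X[:, None] - child[None]) ** 2).sum(2).argmin(1)
            for j in range(k):                               # one Lloyd step
                if (labels == j).any():
                    child[j] = X[labels == j].mean(0)
            children.append(child)
        pop = sorted(pop + children, key=lambda c: sse(X, c))[:pop_size]
    return pop[0]

X = np.array([[1.0, 0.0], [1.1, 0.0], [0.0, 1.0], [0.0, 1.2]])
best = ga_kmeans(X, 2)
```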

9.
A structural clustering algorithm for GML documents based on frequent subtree patterns, GCFS (GML Clustering based on Frequent Subtree patterns), is proposed. Unlike other related algorithms, GCFS first mines the maximal and closed frequent induced subtrees of the GML document collection and uses them as clustering features, assigning each frequent subtree a weight according to its size. Similarity is defined with the cosine function, and the K-Means algorithm is used to cluster over these features. Experimental results show that GCFS is effective and efficient and outperforms comparable algorithms.

10.
With the rapid growth of text documents, document clustering techniques are emerging for efficient document retrieval and better document browsing. Recently, several methods have been proposed to resolve the problems of high dimensionality, scalability, accuracy, and meaningful cluster labels by using frequent itemsets derived from association rule mining for clustering documents. In order to improve the quality of document clustering results, we propose an effective Fuzzy Frequent Itemset-based Document Clustering (F2IDC) approach that combines fuzzy association rule mining with the background knowledge embedded in WordNet. A term hierarchy generated from WordNet is applied to discover generalized frequent itemsets as candidate cluster labels for grouping documents. We have conducted experiments to evaluate our approach on the Classic4, Re0, R8, and WebKB datasets. The experimental results show that our proposed approach provides more accurate clustering results than prior influential clustering methods presented in the recent literature.

11.
Ontology-based intelligent Web retrieval
尹焕亮  孙四明  张峰 《计算机工程》2009,35(23):44-46,4
To address the problems of traditional keyword-based information retrieval, a semantic retrieval model based on domain ontology is proposed. Building on associations between ontology concepts and document content, the model preprocesses the user's query input, uses the ontology to compute the similarity between the query and documents, and returns a ranked list of documents relevant to the query. A prototype ontology-based intelligent Web retrieval system was built to verify the effectiveness of the model.

12.
Instance-driven adaptive ontology learning
To address problems in ontology construction for knowledge management, an adaptive ontology learning method based on knowledge-resource metadata is presented, combining a clustering algorithm with the ODP (Open Directory Project) directory. Documents are clustered by their metadata to form ontology concepts; the generated concepts are then mapped into ODP to determine the hierarchical relations between them, yielding an initial ontology. Adaptive ontology learning driven by changes in cohesion and correlation subsequently updates the ontology and enriches its concepts, so that changes in knowledge are tracked in a timely way. The proposed method reflects the evolution and trends of a research field well and meets the needs of knowledge-based organizations for knowledge management and of researchers for knowledge sharing. Experimental results demonstrate the effectiveness of the method.

13.
The creation and deployment of knowledge repositories for managing, sharing, and reusing tacit knowledge within an organization has emerged as a prevalent approach in current knowledge management practices. A knowledge repository typically contains vast amounts of formal knowledge elements, which generally are available as documents. To facilitate users' navigation of documents within a knowledge repository, knowledge maps, often created by document clustering techniques, represent an appealing and promising approach. Various document clustering techniques have been proposed in the literature, but most deal with monolingual documents (i.e., written in the same language). However, as a result of increased globalization and advances in Internet technology, an organization often maintains documents in different languages in its knowledge repositories, which necessitates multilingual document clustering (MLDC) to create organizational knowledge maps. Motivated by the significance of this demand, this study designs a Latent Semantic Indexing (LSI)-based MLDC technique capable of generating knowledge maps (i.e., document clusters) from multilingual documents. The empirical evaluation results show that the proposed LSI-based MLDC technique achieves satisfactory clustering effectiveness, measured by both cluster recall and cluster precision, and is capable of maintaining a good balance between monolingual and cross-lingual clustering effectiveness when clustering a multilingual document corpus.
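The LSI step underlying such a technique can be illustrated with a truncated SVD of a toy term-document matrix (a generic LSI sketch, not the authors' system; documents 0-1 and 2-3 play the role of two vocabularies, e.g. two languages, sharing latent topics):

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents.
# Docs 0-1 share one vocabulary block, docs 2-3 a different one.
A = np.array([[2., 3., 0., 0.],
              [1., 2., 0., 0.],
              [0., 0., 3., 1.],
              [0., 0., 2., 2.]])

# LSI: truncated SVD projects documents into a low-rank latent space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cos(a, b):
    """Cosine similarity between two latent document vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(doc_vecs[0], doc_vecs[1]))  # near 1: same latent topic
print(cos(doc_vecs[0], doc_vecs[2]))  # near 0: different topics
```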

14.

Text clustering is an important topic in text mining. One of the most effective methods for text clustering is the approach based on frequent itemsets (FIs), and thus there are many related algorithms that aim to improve the accuracy of text clustering. However, these do not consider the weights of terms in documents, even though the frequency of each term in each document has a great impact on the results. In this work, we propose a new method for text clustering based on frequent weighted utility itemsets (FWUI). First, we calculate the term frequency (TF) of each term in the documents to create a weight matrix for all documents; the term weights are based on the inverse document frequency. Next, we use the Modification Weighted Itemset Tidset (MWIT)-FWUI algorithm to mine FWUI from the number matrix and the term weights. Finally, based on the frequent utility itemsets, we cluster the documents using the MC (Maximum Capturing) algorithm. The proposed method has been evaluated on three data sets consisting of 1,600 documents covering 16 topics. The experimental results show that our method, using FWUI, improves the accuracy of text clustering compared with methods using FIs.
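The first step, building a TF-IDF weight matrix, can be sketched as follows (a generic TF-IDF computation with relative-count TF and log(N/df) IDF, not the exact weighting used by the MWIT-FWUI algorithm; the toy documents are illustrative):

```python
from collections import Counter
from math import log

def tfidf_matrix(docs):
    """Build a TF-IDF weight matrix: one row per document, one column
    per vocabulary term. tf = count / doc length; idf = log(N / df)."""
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    df = {t: sum(1 for d in docs if t in d) for t in vocab}
    rows = []
    for d in docs:
        counts = Counter(d)
        rows.append([(counts[t] / len(d)) * log(n / df[t]) for t in vocab])
    return vocab, rows

docs = [["ontology", "cluster", "cluster"],
        ["ontology", "graph"],
        ["graph", "graph", "cluster"]]
vocab, W = tfidf_matrix(docs)
print(vocab)   # ['cluster', 'graph', 'ontology']
# Every term here appears in 2 of 3 docs, so each idf is log(3/2).
```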

15.
王梅  宋晓晖  刘勇  许传海 《计算机应用》2022,42(11):3330-3336
Because the K-Means clustering algorithm updates cluster centers with the mean, its results are affected by the sample distribution. To address this problem, a neural tangent kernel K-Means clustering algorithm (NTKKM) is proposed. Data in the input space are first mapped into a high-dimensional feature space through the neural tangent kernel (NTK); K-Means clustering is then performed in that feature space, with cluster centers updated by a method that accounts for both inter-cluster and intra-cluster distances, yielding the final clustering result. On the car and breast-tissue datasets, NTKKM was evaluated on three metrics: accuracy, the Adjusted Rand Index (ARI), and the FM index. The experimental results show that NTKKM outperforms both K-Means and Gaussian-kernel K-Means in clustering quality and stability: compared with these two baselines, accuracy improves by 14.9% and 9.4%, ARI by 9.7% and 18.0%, and the FM index by 12.0% and 12.0%, respectively, confirming the good clustering performance of NTKKM.
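Kernel K-Means of this kind can be sketched using only the Gram matrix; here an RBF kernel stands in for the neural tangent kernel (computing an NTK would require fixing a network architecture), and the center update is the standard implicit-mean rule, not NTKKM's distance-aware update:

```python
import numpy as np

def kernel_kmeans(K, k, iters=30, seed=0):
    """Kernel k-means: clusters points using only the Gram matrix K, so
    cluster centers live implicitly in feature space and are never formed."""
    rng = np.random.default_rng(seed)
    seeds = rng.choice(len(K), k, replace=False)
    diag = np.diag(K)
    # Initial assignment: nearest seed point, measured in feature space.
    labels = (diag[:, None] - 2 * K[:, seeds] + diag[seeds][None, :]).argmin(1)
    for _ in range(iters):
        dist = np.full((len(K), k), np.inf)
        for j in range(k):
            idx = np.where(labels == j)[0]
            if len(idx):
                # ||phi(x) - mean_j||^2 expanded via kernel evaluations.
                dist[:, j] = (diag - 2 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
        labels = dist.argmin(axis=1)
    return labels

# RBF kernel as a stand-in for the neural tangent kernel.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)
labels = kernel_kmeans(K, 2)
```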

16.
Statistical semantics for enhancing document clustering
Document clustering algorithms usually use vector space model (VSM) as their underlying model for document representation. VSM assumes that terms are independent and accordingly ignores any semantic relations between them. This results in mapping documents to a space where the proximity between document vectors does not reflect their true semantic similarity. This paper proposes new models for document representation that capture semantic similarity between documents based on measures of correlations between their terms. The paper uses the proposed models to enhance the effectiveness of different algorithms for document clustering. The proposed representation models define a corpus-specific semantic similarity by estimating measures of term–term correlations from the documents to be clustered. The corpus of documents accordingly defines a context in which semantic similarity is calculated. Experiments have been conducted on thirteen benchmark data sets to empirically evaluate the effectiveness of the proposed models and compare them to VSM and other well-known models for capturing semantic similarity.
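The idea of replacing the term-independence assumption with corpus-estimated term-term correlations can be illustrated on a toy corpus (a generic generalized-VSM sketch, not the paper's specific models; the correlation estimate here is cosine similarity between term occurrence columns):

```python
import numpy as np

# Toy corpus: rows = documents, columns = terms [car, automobile, engine].
D = np.array([[1., 0., 1.],    # "car engine"
              [0., 1., 1.],    # "automobile engine"
              [1., 1., 0.]])   # "car automobile"

# Term-term correlation: cosine similarity between term occurrence columns.
norms = np.linalg.norm(D, axis=0)
C = (D.T @ D) / np.outer(norms, norms)

def sim(d1, d2):
    """Correlation-aware similarity d1^T C d2 (unnormalized sketch)."""
    return float(d1 @ C @ d2)

a = np.array([1., 0., 0.])   # a document containing only "car"
b = np.array([0., 1., 0.])   # a document containing only "automobile"
print(float(a @ b))   # 0.0 — plain VSM sees no similarity at all
print(sim(a, b))      # positive (~0.5) — correlation via shared contexts
```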

17.
18.
In this paper, a visual similarity based document layout analysis (DLA) scheme is proposed which, by using a clustering strategy, can adaptively deal with documents in different languages, with different layout structures, and at different skew angles. Aiming at a robust and adaptive DLA approach, the authors first find a set of representative filters and statistics to characterize typical texture patterns in document images through a visual similarity testing process. Texture features are then extracted from these filters and passed into a dynamic clustering procedure called visual similarity clustering. Finally, text contents are located from the clustered results. Benefiting from this scheme, the algorithm demonstrates strong robustness and adaptability across a wide variety of documents, which previous traditional DLA approaches do not possess.

19.
OWL ontologies learned from relational database schemas are mostly lightweight, with concept hierarchies too flat to be used directly in practical ontology applications. A novel OWL ontology evolution method is therefore proposed. It applies Formal Concept Analysis (FCA) to cluster the concepts of an existing lightweight OWL ontology and, based on computed degrees of concept equivalence and concept inclusion, automatically suggests ways to enrich and revise the semantic relations between concepts, thereby helping the designer evolve the ontology. The method combines FCA with similarity computation, exploiting both the semantic strength of FCA and the efficiency and ease of implementation of similarity computation, while avoiding the weaknesses of each: the lower semantic strength of similarity computation alone, and the implementation difficulty and lower efficiency of FCA alone. An evaluation on a worked example confirms the effectiveness of the method.

20.
The quality of K-Means clustering results depends on the choice of initial cluster centers. This paper introduces the idea of local search into the K-Means algorithm and proposes an improved algorithm, KMLS. After K-Means converges, the algorithm applies local search to the result to escape the local optimum and then iterates the optimization again; K-Means is in turn applied to the local-search result so that it quickly reaches a new local optimum. Theoretical analysis establishes the feasibility and effectiveness of the algorithm, and text clustering experiments on standard text collections show that, compared with the traditional K-Means algorithm, it improves the quality of the clustering results.
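The alternation between K-Means convergence and local search can be sketched as follows (a simplified reading of the idea: kick one converged center to a random data point, re-run Lloyd iterations, and keep the move only if it lowers the SSE; not the paper's exact KMLS procedure, and all parameters are illustrative):

```python
import numpy as np

def lloyd(X, centers, iters=20):
    """Run k-means (Lloyd) iterations from the given centers, in place."""
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(2).argmin(1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def sse(X, centers):
    """Clustering objective: total squared distance to nearest center."""
    return ((X[:, None] - centers[None]) ** 2).sum(2).min(1).sum()

def kmls(X, k, restarts=10, seed=0):
    """K-Means plus local search: escape local optima via accepted kicks."""
    rng = np.random.default_rng(seed)
    centers = lloyd(X, X[rng.choice(len(X), k, replace=False)].copy())
    best = sse(X, centers)
    for _ in range(restarts):
        cand = centers.copy()
        cand[rng.integers(k)] = X[rng.integers(len(X))]   # local move
        cand = lloyd(X, cand)
        if sse(X, cand) < best:
            centers, best = cand, sse(X, cand)
    return centers, best

X = np.array([[0.0, 0.0], [0.0, 0.1], [5.0, 5.0], [5.0, 5.1]])
centers, best = kmls(X, 2)
```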
