Similar Documents
A total of 20 similar documents were retrieved.
1.
This paper designs an algorithm for clustering XML-described software components, namely a simulated-annealing-based component clustering algorithm, which globally optimizes the clustering of software components in a component repository by simulating the basic principle of metal annealing. For clustering, a similarity measure between XML-described components (called the XML edit distance) is proposed based on the general tree edit distance. The XML edit distance bounds the time complexity of component similarity computation at a polynomial level, while preserving the node semantics of the components' XML description documents and the ancestor-descendant nesting relations between nodes. Finally, experiments on a component repository test model confirm the feasibility and effectiveness of the simulated-annealing-based component clustering algorithm in component retrieval practice.
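A minimal sketch of the simulated-annealing clustering step, assuming a precomputed pairwise distance matrix (for example, distances produced by an XML edit distance between component descriptions); the cost function, cooling schedule, and parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def sa_cluster(dist, k, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    """Simulated-annealing clustering over a precomputed distance matrix.

    dist[i][j] is a pairwise distance (e.g. an XML edit distance between
    component descriptions); the cost of an assignment is taken here as the
    sum of within-cluster pairwise distances.
    """
    rng = random.Random(seed)
    n = len(dist)
    assign = [rng.randrange(k) for _ in range(n)]

    temp = t0
    for _ in range(steps):
        i = rng.randrange(n)
        old, new = assign[i], rng.randrange(k)
        if new == old:
            continue
        # incremental cost change when moving point i from cluster `old` to `new`
        delta = (sum(dist[i][j] for j in range(n) if j != i and assign[j] == new)
                 - sum(dist[i][j] for j in range(n) if j != i and assign[j] == old))
        # accept improving moves always, worsening moves with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            assign[i] = new
        temp *= cooling
    return assign
```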

2.
An Approximate XML Query Algorithm Based on XML Document Clustering
This paper proposes an approximate XML query algorithm based on XML document clustering. A semantics-based method for computing the distance between XML documents is given; combined with this semantic distance, a grid-based eight-neighborhood clustering algorithm is proposed to partition the XML database, and the cluster centers obtained during clustering are then used to optimize the approximate-query evaluation phase of the static ordered selection algorithm, so that query results satisfying the user can be returned promptly without a full traversal of the XML database. Finally, experiments on intelligent automobile body-shape design show that the algorithm effectively improves the query efficiency of the static ordered selection algorithm.
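A minimal sketch of grid-based clustering with 8-neighborhood cell merging. It assumes the documents have already been mapped to 2-D points (the paper works with a semantic distance between XML documents; that mapping, the cell size `cell`, and the density threshold `min_pts` here are illustrative assumptions):

```python
from collections import defaultdict, deque

def grid_cluster(points, cell=1.0, min_pts=3):
    """Cluster 2-D points by flood-filling dense grid cells over their
    8-neighborhood; returns a list of clusters, each a list of point indices.
    Points that fall only into sparse cells are left unclustered (noise)."""
    cells = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        cells[(int(x // cell), int(y // cell))].append(idx)

    dense = {c for c, members in cells.items() if len(members) >= min_pts}
    seen, clusters = set(), []
    for start in dense:
        if start in seen:
            continue
        # breadth-first search over the 8 neighboring cells
        queue, group = deque([start]), []
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            group.extend(cells[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in dense and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(group)
    return clusters
```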

3.
Fast and effective clustering of XML data using structural information
This paper presents the incremental clustering algorithm XML documents Clustering with Level Similarity (XCLS), which groups XML documents according to structural similarity. A level structure format is introduced to represent the structure of XML documents for efficient processing. A global criterion function that measures the similarity between the new document and existing clusters is developed. It avoids the need to compute the pair-wise similarity between two individual documents and hence saves a huge amount of computing effort. XCLS is further modified to incorporate the semantic meanings of XML tags for investigating the trade-offs between accuracy and efficiency. The empirical analysis shows that structural similarity outweighs semantic similarity in the clustering process of structured data such as XML. The experimental analysis shows that the XCLS method is fast and accurate in clustering heterogeneous documents by structure.
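A minimal sketch of the level-structure idea, assuming Python's standard `xml.etree.ElementTree`: each document is reduced to per-level tag counts, and a simple level-weighted tag overlap stands in for the paper's global criterion function (the exact XCLS criterion and the `decay` weight are not taken from the paper):

```python
import xml.etree.ElementTree as ET
from collections import Counter, defaultdict

def level_structure(xml_text):
    """Represent an XML document as tag multisets per depth level."""
    root = ET.fromstring(xml_text)
    levels = defaultdict(Counter)
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        levels[depth][node.tag] += 1
        stack.extend((child, depth + 1) for child in node)
    return levels

def level_similarity(doc_levels, cluster_levels, decay=0.5):
    """Weighted tag overlap per level; levels closer to the root count more."""
    score = 0.0
    for depth in set(doc_levels) | set(cluster_levels):
        common = doc_levels.get(depth, Counter()) & cluster_levels.get(depth, Counter())
        score += (decay ** depth) * sum(common.values())
    return score
```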

4.
Text clustering is an important research topic in natural language processing, applied mainly in fields such as information retrieval and Web mining; its key issues are text representation and the clustering algorithm. Building on hierarchical clustering, this paper proposes a new hierarchical clustering algorithm based on boundary distance: the distance between edge samples of two clusters is taken as the inter-cluster distance, which effectively exploits cluster boundary information and improves the accuracy of inter-cluster distance computation. Taking into account the contributions of features of different parts of speech, texts are represented with a multi-vector model. Experiments on different text collections show that the boundary-distance-based multi-vector text clustering algorithm achieves good performance.
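A minimal sketch of one possible reading of the boundary distance, assuming numeric feature vectors: each cluster contributes the sample lying closest to the other cluster's centroid as its "edge" sample, and the distance between the two edge samples is used as the inter-cluster distance (the paper's exact definition of edge samples may differ):

```python
import numpy as np

def boundary_distance(a, b):
    """Inter-cluster distance between clusters a and b (arrays of shape (n, d)):
    pick, in each cluster, the edge sample closest to the other cluster's
    centroid, and return the distance between the two edge samples.
    This linkage can replace single/complete linkage in an agglomerative loop."""
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    edge_a = a[np.argmin(np.linalg.norm(a - cb, axis=1))]
    edge_b = b[np.argmin(np.linalg.norm(b - ca, axis=1))]
    return float(np.linalg.norm(edge_a - edge_b))
```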

5.
Ensemble learning is applied to XML document clustering to remedy the shortcomings of traditional clustering algorithms. An XML document vector model combining tags and paths is proposed; based on this model, the original document collection is sampled repeatedly, K-means clustering is run on each sampled collection, and the resulting set of cluster centers is then clustered hierarchically. Experiments on synthetic and real data sets show that the algorithm outperforms K-means in recall and precision and is more robust.
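A minimal sketch of the sample-then-merge scheme using scikit-learn, assuming the tag-and-path vector model has already produced a numeric matrix `X`; the sample count, sample fraction, and other parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def ensemble_cluster(X, k, n_samples=10, sample_frac=0.6, seed=0):
    """Ensemble clustering sketch: repeatedly subsample X, run K-means on each
    sample, then hierarchically cluster the pooled K-means centers into k groups."""
    X = np.asarray(X)
    rng = np.random.default_rng(seed)
    centers = []
    for _ in range(n_samples):
        idx = rng.choice(len(X), size=int(sample_frac * len(X)), replace=False)
        km = KMeans(n_clusters=k, n_init=10,
                    random_state=int(rng.integers(1 << 16)))
        km.fit(X[idx])
        centers.append(km.cluster_centers_)
    centers = np.vstack(centers)
    meta = AgglomerativeClustering(n_clusters=k).fit(centers)
    # assign each original point to the meta-cluster of its nearest center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return meta.labels_[d.argmin(axis=1)]
```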

6.
Most hyper-ellipsoidal clustering (HEC) algorithms use the Mahalanobis distance as the distance metric; it has been shown that under this condition the cost function of partitional clustering is constant, so HEC cannot actually produce ellipsoidal clusters. This paper shows that an HEC algorithm using a modified Gaussian kernel can be interpreted as searching for ellipsoidal clusters that are compact in both volume and density, and proposes a practical HEC algorithm, K-HEC, which effectively handles ellipsoidal clusters of different sizes and densities. To cluster data sets of more complex shapes, K-HEC is extended with ellipsoids defined in a kernel feature space, yielding the EK-HEC algorithm. Simulation experiments show that the proposed algorithms outperform K-means, fuzzy C-means, GMM-EM, and the Mahalanobis HEC algorithm based on minimum-volume ellipsoids (MVE) in both clustering results and performance, confirming their feasibility and effectiveness.
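Not the paper's K-HEC/EK-HEC, but a standard stand-in that illustrates ellipsoidal clusters of different sizes and densities: a full-covariance Gaussian mixture fitted with scikit-learn, with the per-point Mahalanobis distance to its cluster mean describing the fitted ellipsoid:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ellipsoidal_clusters(X, k, seed=0):
    """Fit ellipsoidal clusters of differing size/density with a full-covariance
    Gaussian mixture (a common stand-in for hyper-ellipsoidal clustering)."""
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=seed)
    labels = gmm.fit_predict(X)
    # Mahalanobis distance of each point to its own cluster mean
    inv_covs = np.linalg.inv(gmm.covariances_)
    diffs = X - gmm.means_[labels]
    maha = np.sqrt(np.einsum("ij,ijk,ik->i", diffs, inv_covs[labels], diffs))
    return labels, maha
```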

7.
The distance metric has a decisive influence on the clustering results of the fuzzy C-means (FCM) algorithm. In practice there are scenarios where the data set to be clustered comes with auxiliary information in the form of labeled pairwise constraints. To make full use of this auxiliary information, a hybrid distance learning method is first proposed that learns a distance metric for the data set from such constraints. A robust fuzzy C-means algorithm based on this hybrid distance learning, HR-FCM, is then proposed; it is a semi-supervised clustering algorithm. HR-FCM retains the robustness and other properties of GIFP-FCM (Generalized FCM algorithm with improved fuzzy partitions) while achieving better clustering performance thanks to the more suitable distance metric. Experimental results demonstrate the effectiveness of the proposed algorithm.

8.
We describe a family of heuristics-based clustering strategies to support the merging of XML data from multiple sources. As part of this research, we have developed a comprehensive classification for schematic and semantic conflicts that can occur when reconciling related XML data from multiple sources. Given the fact that element clustering is compute-intensive, especially when comparing large numbers of data elements that exhibit great representational diversity, performance is a critical, yet so far neglected aspect of the merging process. We have developed five heuristics for clustering data in the multi-dimensional metric space. Equivalence of data elements within the individual clusters is determined using several distance functions that calculate the semantic distances among the elements.

The research described in this article is conducted within the context of the Integration Wizard (IWIZ) project at the University of Florida. IWIZ enables users to access and retrieve information from multiple XML-based sources through a consistent, integrated view. The results of our qualitative analysis of the clustering heuristics have validated the feasibility of our approach as well as its superior performance when compared to other similarity search techniques.


9.
Clustering XML Documents Based on Frequent Structures
This paper studies XML document clustering based on frequent structures, where the frequent structures include frequent paths and frequent subtrees. It first introduces SSTMiner, an algorithm for mining all embedded frequent subtrees in XML documents, and modifies it to obtain the FrePathMiner and FreTreeMiner algorithms, which mine maximal frequent paths and maximal frequent subtrees respectively. On this basis, an agglomerative hierarchical clustering algorithm, XMLCluster, is proposed that clusters documents using maximal frequent paths or maximal frequent subtrees as document features. Experimental results show that FrePathMiner and FreTreeMiner find more frequent structures than the traditional ASPMiner algorithm, providing more structural features for document clustering and thus higher clustering accuracy.
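A simplified stand-in for the frequent-structure features, assuming Python's standard `xml.etree.ElementTree` and scikit-learn: root-to-node tag paths play the role of frequent paths (maximal frequent subtrees and the SSTMiner/FrePathMiner/FreTreeMiner algorithms are not reproduced), and documents are clustered agglomeratively on binary path features; `min_support` is an illustrative parameter:

```python
import xml.etree.ElementTree as ET
from collections import Counter
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def tag_paths(xml_text):
    """All root-to-node tag paths of one document."""
    root = ET.fromstring(xml_text)
    paths, stack = set(), [(root, root.tag)]
    while stack:
        node, path = stack.pop()
        paths.add(path)
        stack.extend((c, path + "/" + c.tag) for c in node)
    return paths

def cluster_by_frequent_paths(docs, k, min_support=0.3):
    """Cluster XML strings on binary features of frequently occurring paths."""
    doc_paths = [tag_paths(d) for d in docs]
    support = Counter(p for ps in doc_paths for p in ps)
    frequent = [p for p, c in support.items() if c / len(docs) >= min_support]
    X = np.array([[1.0 if p in ps else 0.0 for p in frequent] for ps in doc_paths])
    return AgglomerativeClustering(n_clusters=k).fit_predict(X)
```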

10.
The processing and management of XML data are popular research issues. However, operations based on the structure of XML data have not received strong attention. These operations involve, among others, the grouping of structurally similar XML documents. Such grouping results from the application of clustering methods with distances that estimate the similarity between tree structures. This paper presents a framework for clustering XML documents by structure. Modeling the XML documents as rooted ordered labeled trees, we study the usage of structural distance metrics in hierarchical clustering algorithms to detect groups of structurally similar XML documents. We suggest the usage of structural summaries for trees to improve the performance of the distance calculation and at the same time to maintain or even improve its quality. Our approach is tested using a prototype testbed.
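A minimal sketch of a repetition-reduced structural summary, assuming Python's standard `xml.etree.ElementTree`; the paper's exact summary construction may differ, but the idea of shrinking each tree before the distance computation is the same:

```python
import xml.etree.ElementTree as ET

def structural_summary(elem):
    """Repetition-reduced structural summary: under every node keep only one
    child per distinct tag (recursively) and drop text content, so repeated
    sibling subtrees collapse into a single representative. Tree-distance
    computations on these much smaller summaries are cheaper while the nesting
    structure is preserved."""
    summary = ET.Element(elem.tag)
    kept = set()
    for child in elem:
        if child.tag not in kept:
            kept.add(child.tag)
            summary.append(structural_summary(child))
    return summary

# Usage idea: feed the summaries instead of the full trees into any hierarchical
# clustering over a tree distance, e.g. dist[i][j] = tree_distance(s_i, s_j)
# for a tree edit distance implementation of your choice (hypothetical helper).
```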

11.
Clustering XML documents is extensively used to organize large collections of XML documents in groups that are coherent according to structure and/or content features. The growing availability of distributed XML sources and the variety of high-demand environments raise the need for clustering approaches that can exploit distributed processing techniques. Nevertheless, existing methods for clustering XML documents are designed to work in a centralized way. In this paper, we address the problem of clustering XML documents in a collaborative distributed framework. XML documents are first decomposed based on semantically cohesive subtrees, then modeled as transactional data that embed both XML structure and content information. The proposed clustering framework employs a centroid-based partitional clustering method that has been developed for a peer-to-peer network. Each peer in the network is allowed to compute a local clustering solution over its own data, and to exchange its cluster representatives with other peers. The exchanged representatives are used to compute representatives for the global clustering solution in a collaborative way. We evaluated the effectiveness and efficiency of our approach on real XML document collections, varying the number of peers. Results have shown that major advantages over the corresponding centralized clustering setting are obtained in terms of runtime behavior, while clustering solutions remain accurate with a moderately low number of nodes in the network. Moreover, the collaborative nature of our approach has proved to be a convenient feature in distributed clustering, as found in a comparative evaluation with a distributed non-collaborative clustering method.

12.
Based on the entropy-based fuzzy clustering algorithm (EFC), an improved entropy-based center clustering algorithm is proposed: EFC is first used to obtain clearly distinct cluster centers of the original data set, and a second clustering pass is then performed around these centers, reassigning each point to the set represented by its nearest center according to point-to-center distances. The improved algorithm yields clustering results that are both compact and well separated, and it effectively improves accuracy. Experimental results show that the improved algorithm clusters data sets effectively and achieves higher accuracy than EFC.
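A minimal sketch of the two ingredients, assuming numeric data in a NumPy array: the per-point entropy score used by entropy-based fuzzy clustering to pick well-separated centers (low entropy marks a good center candidate), and the second pass that reassigns every point to its nearest center; `alpha` is an illustrative parameter:

```python
import numpy as np

def entropy_scores(X, alpha=0.5):
    """Entropy of each point with respect to all others, with similarity
    S_ij = exp(-alpha * d_ij); points with low total entropy are good center
    candidates in entropy-based fuzzy clustering."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    s = np.exp(-alpha * d)
    eps = 1e-12
    h = -(s * np.log2(s + eps) + (1 - s) * np.log2(1 - s + eps))
    np.fill_diagonal(h, 0.0)
    return h.sum(axis=1)

def reassign_to_centers(X, centers):
    """Second pass: assign every point to its nearest center (the centers would
    come from the EFC step in the paper)."""
    d = np.linalg.norm(X[:, None, :] - np.asarray(centers)[None, :, :], axis=2)
    return d.argmin(axis=1)
```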

13.
Document clustering is an effective way of organizing documents and is widely used in information processing for the automatic discovery of unknown topics, with good results. This paper proposes a lightweight clustering algorithm that reduces the number of index terms of the original documents in order to handle large numbers of small documents, grouping them into several thousand clusters or, by adjusting a specific parameter, reducing the number of clusters to a few dozen. Theoretical analysis and practical application show that the algorithm improves the efficiency of processing high-dimensional data and large numbers of small documents.

14.
In Extensible Markup Language (XML) wireless data broadcasting, data are broadcast with XML documents as the basic unit, but redundant information across XML documents lowers the utilization of bandwidth resources. To address this problem, an effective scheduling algorithm is proposed: the impact of document merging on broadcast performance is analyzed, a measure of affinity between documents is derived, and documents with high affinity are merged to reduce redundancy. Experimental results show that the algorithm improves wireless data broadcast performance and saves bandwidth.

15.
A CURE Clustering Algorithm Incorporating Information Entropy
To improve the quality of the traditional CURE (Clustering Using REpresentatives) algorithm, information entropy is introduced. The algorithm pre-clusters the sample data set with K-means; it adopts an entropy-based similarity measure, using the information provided by the elements in each cluster to measure the relationships between clusters and to describe the data distribution; and it uses different selection strategies to choose representative points in the high-level and low-level clustering stages. Experimental results on UCI and synthetic data sets show that the proposed algorithm improves clustering accuracy to a certain extent and is more efficient than traditional CURE on large data sets.

16.
An important approach for image classification is the clustering of pixels in the spectral domain. Fast detection of different land cover regions or clusters of arbitrarily varying shapes and sizes in satellite images presents a challenging task. In this article, an efficient scalable parallel clustering technique for multi-spectral remote sensing imagery using a recently developed point-symmetry-based distance norm is proposed. The proposed distributed, time-efficient point-symmetry-based K-Means technique is able to correctly identify the presence of overlapping clusters of any arbitrary shape and size, whether they are intra-symmetrical or inter-symmetrical in nature. A Kd-tree based approximate nearest neighbor searching technique is used as a speedup strategy for computing the point-symmetry-based distance. The superiority of this new parallel implementation with the novel two-phase speedup strategy over the existing parallel K-Means clustering algorithm is demonstrated both quantitatively and in computing time on two SPOT and Indian Remote Sensing satellite images, as even the K-Means algorithm fails to detect the symmetry in clusters. Different land cover regions, classified by the algorithms for both images, are also compared with the available ground truth information. Statistical analysis is also performed to establish the significance of the method for classifying both satellite images and numeric remote sensing data sets described in terms of feature vectors.
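A minimal sketch of the point-symmetry distance with a Kd-tree lookup, assuming NumPy and SciPy; `knear` and the way the distance would be embedded in a K-means-style loop are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_symmetry_distance(x, center, tree, knear=2):
    """Point-symmetry distance of x to a cluster center: reflect x through the
    center, find the knear nearest data points to the reflected point with a
    Kd-tree, and weight the Euclidean distance by how well that mirror image is
    covered by actual data (small when a near-symmetric counterpart exists)."""
    reflected = 2.0 * center - x
    d_near, _ = tree.query(reflected, k=knear)
    d_sym = float(np.mean(d_near))
    return d_sym * float(np.linalg.norm(x - center))

# Usage idea inside a K-means-style assignment step (tree built once per pass):
#   tree = cKDTree(X)
#   labels[i] = min(range(k), key=lambda j: point_symmetry_distance(X[i], centers[j], tree))
```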

17.
A Nearest-Neighbor Image Annotation Method Incorporating Semantic Distance
Traditional nearest-neighbor image annotation methods perform poorly, mainly because much valuable information is lost when extracting visual features from images. An improved nearest-neighbor classification model is proposed. First, a distance metric learning method is used, with the semantic class information of the images introduced into training, to generate a new semantic distance; then this distance is used to cluster the images of each class, producing multiple within-class cluster centers; finally, the nearest-neighbor classification model is built by computing the semantic distance from an image to each cluster center. The learned semantic distance is used throughout the construction of the model, which effectively reduces the semantic gap caused by within-class variation and between-class similarity. Experiments on the ImageCLEF 2012 image annotation database compare the method with traditional classification models and recent methods, confirming its effectiveness.

18.
An improved ant clustering algorithm based on the symmetry point distance is proposed. Instead of the Euclidean distance, a new symmetry point distance is used to compute the similarity of objects within a cluster; for data sets with symmetric structure, this makes it possible to effectively identify the number of clusters and a suitable partition. In the algorithm, artificial ants represent data objects and search for the most suitable clustering partition according to the given clustering rules. Finally, clustering experiments on different data sets compare the algorithm with the standard ant clustering algorithm, and the results confirm its effectiveness.

19.
Word sense induction is an important research topic for acquiring word sense knowledge, and clustering is currently the most widely used approach to it. By comparing the respective strengths of the K-Means and EM clustering algorithms in word sense induction models, a new clustering algorithm is proposed that combines a distance metric with a Gaussian mixture model, so as to exploit the respective advantages of the two algorithms in distance measurement and distribution modeling, and to bring out the roles of the data's geometric properties and normal-distribution information in word sense clustering, thereby improving the performance of the word sense induction model. Experimental results show that the proposed hybrid clustering algorithm is very effective in improving the performance of the word sense induction model.
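A minimal sketch of the hybrid idea using scikit-learn, assuming the word-instance context vectors have already been built into a matrix `X`: K-means supplies distance-based initial centers, and EM on a Gaussian mixture refines them with distribution information; the covariance type and other settings are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def kmeans_gmm_cluster(X, k, seed=0):
    """Hybrid sketch: K-means (distance-based) provides initial centers, then EM
    on a Gaussian mixture (distribution-based) refines the sense clusters."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          means_init=km.cluster_centers_, random_state=seed)
    return gmm.fit_predict(X)
```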

20.
Clustering is an important field for making data meaningful in various applications such as processing satellite images, extracting information from financial data, or even processing data in the social sciences. This paper presents a new clustering approach, the Gaussian Density Distance (GDD) clustering algorithm, based on the distance and density properties of the sample space. The novel part of the method is that it finds the best possible clusters without any prior information or parameters. Another novel aspect of the algorithm is that it forms clusters very close to human clustering perception when executed on two-dimensional data. GDD has some similarities with today's most popular clustering algorithms; however, it uses both a Gaussian kernel and distances to form clusters according to data density and shape. Since GDD does not require any special parameters prior to running, the resulting clusters do not change across runs. During the study, an experimental framework was designed for the analysis and evaluation of the proposed clustering algorithm, based on clustering performance for some characteristic data sets. The algorithm is extensively tested using several synthetic data sets, and some of the selected results are presented in the paper. Comparative results produced by other well-known clustering algorithms are also discussed.
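Not the paper's GDD, but a generic density-plus-distance sketch in the same spirit (density-peaks style): a Gaussian kernel gives each point a density, each point is linked to its nearest higher-density neighbor, and overly long links are cut to separate clusters; unlike GDD, this sketch does use a cut quantile as a parameter:

```python
import numpy as np

def density_distance_cluster(X, cut_quantile=0.95):
    """Generic density-plus-distance clustering sketch (density-peaks style):
    estimate a Gaussian-kernel density per point, link each point to its nearest
    higher-density neighbor, and cut links longer than a quantile of all link
    lengths; the connected pieces are the clusters."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = np.median(d[d > 0])                      # data-driven kernel width
    rho = np.exp(-(d / sigma) ** 2).sum(axis=1)      # Gaussian-kernel density
    n = len(X)
    parent = np.arange(n)
    link = np.full(n, np.inf)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if len(higher):
            j = higher[np.argmin(d[i, higher])]
            parent[i], link[i] = j, d[i, j]
    finite = link[np.isfinite(link)]
    cut = np.quantile(finite, cut_quantile) if len(finite) else 0.0
    # points with no higher-density neighbor, or a too-long link, start clusters
    roots = np.isinf(link) | (link > cut)
    labels = np.full(n, -1)
    next_label = 0
    for i in np.argsort(-rho):                       # high density processed first
        if roots[i]:
            labels[i] = next_label
            next_label += 1
        else:
            labels[i] = labels[parent[i]]
    return labels
```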
