Similar Documents
 20 similar documents found (search time: 93 ms)
1.
段瑞 (Duan Rui), 《计算机应用研究》 (Application Research of Computers), 2020, 37(4): 1049-1053
To improve the efficiency of querying and retrieving models from an enterprise model repository, a process similarity algorithm based on the edit distance of transition graphs is proposed. First, the concept of a transition graph and its construction method are given. Second, the notion of edge length is introduced, and the cost of deleting or inserting an edge is determined by its length; graph edit operations and their costs are defined on this basis, and a node matching algorithm is used to compute the minimum graph edit distance. Then, a similarity concept and computation method for two process models are given. Finally, experiments verify the correctness of the algorithm, show that it satisfies seven similarity properties, and confirm that the transition-graph edit distance satisfies four distance properties.
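The edge-length-based cost idea above can be sketched as follows, under the simplifying assumption that nodes are already matched by identical labels (the paper's node matching algorithm is omitted); `edge_edit_distance` and `similarity` are illustrative names, not the authors' implementation:

```python
# Minimal sketch (not the paper's algorithm): edit distance between two
# transition graphs where the insert/delete cost of an edge equals its
# "length", assuming nodes are matched by identical labels.
def edge_edit_distance(edges_a, edges_b):
    """edges_*: dict mapping (src_label, dst_label) -> edge length."""
    cost = 0.0
    for e, length in edges_a.items():
        if e not in edges_b:
            cost += length          # deletion, priced by edge length
    for e, length in edges_b.items():
        if e not in edges_a:
            cost += length          # insertion, priced by edge length
    return cost

def similarity(edges_a, edges_b):
    """Normalize the edit distance into [0, 1]; 1.0 = identical."""
    total = sum(edges_a.values()) + sum(edges_b.values())
    return 1.0 - edge_edit_distance(edges_a, edges_b) / total if total else 1.0
```

Normalizing by the total edge length makes the score comparable across models of different sizes.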

2.
Subgraph matching is an important research problem in graph data query processing. To address the low efficiency of existing subgraph matching algorithms and the lack of general optimization methods, a community structure based subgraph matching optimization method (CSO) is proposed. First, two optimization strategies are presented: parsing pattern-graph information to reduce the computation of the subgraph matching process, and using community structure information for pruning during matching. These two strategies are then combined into the CSO method, which is analyzed theoretically. Extensive experiments on real and synthetic datasets show that CSO effectively reduces the time cost of subgraph matching algorithms, and experiments on datasets of different scales confirm its good scalability.

3.
Matching of triangulated auricle meshes is implemented with the IsoRank algorithm, enabling identity recognition based on the 3D shape of the ear. Key points on the 3D auricle to be matched are extracted by principal component analysis, and a 3D mesh graph of the key points is constructed. The IsoRank algorithm then computes the correspondence between the nodes of two key-point mesh graphs, achieving graph matching of auricle key points. Because IsoRank globally aligns the key-point mesh graphs, the overall match between the two auricles is maximized. Experimental results show that the IsoRank-based auricle matching method has low time complexity together with high matching accuracy and efficiency.

4.
This paper addresses the retrieval-and-matching problem for graph data, a difficult problem in computer graph data processing. In contrast to traditional approaches based on statistical distributions or pattern recognition theory, it builds on the intelligent optimization process of genetic algorithms to solve the search for corresponding elements during graph matching. By combining genetic-algorithm ideas with graph matching methods, intelligent optimization provides a new way to attack content-based graph matching, treating the structural-correspondence search in graph matching from an optimization perspective and solving it quickly. New solutions to the graph matching problem are obtained by validating the parameters and objects.

5.
Image-based 3D reconstruction from large image collections requires time-consuming pairwise matching of all images in the collection. A method is proposed that builds global hash features for images; by filtering out invalid image pairs it reduces computation time and greatly improves the matching efficiency of large-scale 3D reconstruction. The proposed fast hash matching algorithm for large image collections comprises several steps: building image hash features, building an initial matching graph, selecting candidate matching pairs, and hash matching. Experimental results show that the method significantly speeds up image matching in 3D reconstruction.

6.
To address the low search efficiency of edit-distance-based fuzzy string matching, the fuzzy matching process is analyzed and parallelization is applied to optimize fuzzy matching over large volumes of strings. Because computing the edit distance between strings is itself expensive, an improved method is proposed that first applies string filtering rules to the candidate set before fuzzy matching. Experimental results show that the improved method achieves high execution efficiency with good recall and precision.
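The filter-then-match idea can be illustrated with a standard dynamic-programming Levenshtein distance plus a length-difference filtering rule (one common such rule; the paper's exact rules and its parallelization are not reproduced here):

```python
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance, O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def fuzzy_search(query, candidates, max_dist=2):
    """Filter rule: strings whose length differs from the query by more
    than max_dist can never match, so skip the expensive DP for them."""
    return [s for s in candidates
            if abs(len(s) - len(query)) <= max_dist
            and levenshtein(query, s) <= max_dist]
```

The length filter is cheap (O(1) per candidate) and prunes most of the set before any DP table is built, which is where the speed-up comes from.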

7.
Movie resources in current video-service scenarios contain massive relational data, and existing graph-based recommendation algorithms must map these relational data onto graph structures before processing. The large scale of the resulting graphs lowers the efficiency of the semantic matching algorithms used in traditional graph processing and increases communication overhead. This paper incorporates correlation analysis to propose an accelerated graph processing scheme based on semantic matching: a subgraph matching acceleration method for querying graph sequences in a single large graph. By exploiting temporal correlation, the method quickly narrows the search to the range of the massive data where the relevant information lies, alleviating inefficient lookup and high communication overhead. Experiments verify the effectiveness of the method.

8.
A point matching method based on the Mahalanobis distance is analyzed theoretically and verified experimentally. To overcome the shortcomings of both the Mahalanobis distance and the weighted graph transformation matching method, the former is incorporated into the latter, yielding a new robust image matching strategy: weighted graph transformation matching based on the Mahalanobis distance. The algorithm builds a weight matrix from the median Mahalanobis distance and the angular distance between each point and its K nearest neighbors, updates the graph with the continually updated weights, and removes outliers one by one to obtain more accurate matches. Comparative experiments on synthetic data and real images demonstrate the feasibility and robustness of the method.

9.
连玮 (Lian Wei), 《计算机应用》 (Journal of Computer Applications), 2012, 32(9): 2564-2567
For the rotation-invariant elastic point matching problem, a graph matching based algorithm is proposed. Edge sets are constructed for the two point sets, and the similarity between edges of the two sets is measured by the oriented shape context distance and the difference in edge lengths. Based on edge similarity, point correspondences are recovered by solving a graph matching problem. Experimental results show that the algorithm obtains good registration results and is robust and efficient.

10.
When the scale-invariant feature transform (SIFT) is used to recognize boxed dairy products, false matches arise and reduce recognition accuracy. To eliminate the influence of mismatched points and accurately identify product type and quantity, an improved method for removing SIFT mismatches is proposed. Exploiting the fact that images of boxed dairy products deform little and are mostly related by rigid transformations, the method first screens coarse matches by the difference of their dominant orientations, then computes the pairwise distance ratios between feature points in the template and test images and flags matches with abnormal ratios, and finally removes mismatches by voting. Compared with an improved RANSAC algorithm and a graph-based mismatch removal method on a self-built product image database, the proposed method clearly improves matching accuracy and reduces the false removal rate.
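The dominant-orientation screening step might look roughly like the following sketch, which histograms the orientation differences and keeps matches near the peak bin; the bin width and tolerance are assumed parameters, and the distance-ratio and voting stages are omitted:

```python
from collections import Counter

def filter_by_orientation(matches, bin_width=10.0, tol=1):
    """matches: list of (angle_template, angle_test) pairs in degrees.
    Coarse screen (not the paper's full method): histogram the orientation
    differences and keep matches in or next to the dominant bin -- for a
    near-rigid scene, correct matches share one global rotation."""
    diffs = [(a - b) % 360.0 for a, b in matches]
    bins = Counter(int(d // bin_width) for d in diffs)
    peak, _ = bins.most_common(1)[0]
    keep = []
    for m, d in zip(matches, diffs):
        if abs(int(d // bin_width) - peak) <= tol:
            keep.append(m)
    return keep
```

Matches whose orientation difference disagrees with the dominant rotation are discarded before any geometric verification is attempted.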

11.
A survey of graph edit distance
Inexact graph matching has been one of the significant research foci in the area of pattern analysis. As an important error-tolerant way to measure the similarity between pairs of graphs, graph edit distance (GED) is the basis of inexact graph matching. Research advances in GED are surveyed to provide a review of the existing literature and offer some insights into the study of GED. Since graphs may be attributed or non-attributed and the definition of costs for edit operations varies, the existing GED algorithms are categorized according to these two factors and described in detail. After these algorithms are analyzed and their limitations are identified, several promising directions for further research are proposed.
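To illustrate the definition the survey builds on, here is a brute-force unit-cost GED over all injective node mappings; it is exponential and only feasible for toy graphs, which is precisely why the heuristics the survey categorizes exist:

```python
from itertools import permutations

def exact_ged(nodes_a, edges_a, nodes_b, edges_b):
    """Brute-force unit-cost GED for tiny directed graphs (exponential!).
    nodes_*: list of labels; edges_*: set of (i, j) node-index pairs.
    Illustrates the definition only, not a practical algorithm."""
    if len(nodes_a) > len(nodes_b):              # ensure |A| <= |B|
        nodes_a, edges_a, nodes_b, edges_b = nodes_b, edges_b, nodes_a, edges_a
    na, nb = len(nodes_a), len(nodes_b)
    best = float("inf")
    for perm in permutations(range(nb), na):     # injective map A -> B
        cost = nb - na                           # unmatched B-nodes: insertions
        cost += sum(nodes_a[i] != nodes_b[perm[i]] for i in range(na))
        mapped = {(perm[i], perm[j]) for (i, j) in edges_a}
        cost += len(mapped ^ edges_b)            # edge deletions + insertions
        best = min(best, cost)
    return best
```

Every surveyed method is, in essence, a cheaper way of approximating or bounding this minimum.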

12.
The concept of graph edit distance constitutes one of the most flexible graph matching paradigms available. The major drawback of graph edit distance, viz. its exponential time complexity, has recently been overcome by means of a reformulation of the edit distance problem as a linear sum assignment problem. However, the substantial speed-up of the matching is accompanied by an approximation error on the distances. The major contribution of this paper is the introduction of a transformation process that converts the underlying cost model into a utility model. The benefit of this transformation is that it enables the integration of additional information into the assignment process. We empirically confirm the positive effects of this transformation on five benchmark graph sets with respect to the accuracy and run time of a distance-based classifier.
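The linear-sum-assignment reformulation can be sketched with a Riesen/Bunke-style square cost matrix over node labels (edge costs are omitted for brevity); the assignment is solved here by brute force over permutations, whereas practical implementations use a cubic-time Hungarian solver:

```python
from itertools import permutations

def lsap_ged_approx(labels_a, labels_b, c_sub=1, c_del=1, c_ins=1):
    """Square (n+m) x (n+m) cost matrix: top-left substitutions,
    top-right deletions, bottom-left insertions, bottom-right free
    dummy-to-dummy entries. Solved by brute force for tiny inputs."""
    n, m = len(labels_a), len(labels_b)
    size = n + m
    INF = float("inf")
    C = [[INF] * size for _ in range(size)]
    for i in range(n):
        for j in range(m):
            C[i][j] = 0 if labels_a[i] == labels_b[j] else c_sub
        C[i][m + i] = c_del              # node i can be deleted
    for j in range(m):
        C[n + j][j] = c_ins              # node j can be inserted
    for i in range(n, size):             # dummy-to-dummy: no cost
        for j in range(m, size):
            C[i][j] = 0
    return min(sum(C[i][p[i]] for i in range(size))
               for p in permutations(range(size)))
```

The approximation error the abstract mentions arises because this node-level assignment cannot see the global edge structure; the paper's utility transformation is one way to feed more information into exactly this matrix.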

13.
A special class of graphs is introduced in this paper. The graphs belonging to this class are characterised by the existence of unique node labels. A number of matching algorithms for graphs with unique node labels are developed. It is shown that problems such as graph isomorphism, subgraph isomorphism, maximum common subgraph (MCS) and graph edit distance (GED) have a computational complexity that is only quadratic in the number of nodes. Moreover, computing the median of a set of graphs is only linear in the cardinality of the set. In a series of experiments, it is demonstrated that the proposed algorithms run very fast in practice. The considered class makes the matching of large graphs, consisting of thousands of nodes, computationally tractable. We also discuss an application of the considered class of graphs and related matching algorithms to the classification and detection of abnormal events in computer networks.
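The key observation can be sketched directly: with unique node labels the optimal node mapping is forced (equal labels must match), so unit-cost GED collapses to set differences, computable in low polynomial time. This is a minimal sketch with directed edges keyed by label pairs, not the paper's full algorithm suite:

```python
def ged_unique_labels(nodes_a, edges_a, nodes_b, edges_b):
    """Unit-cost GED for graphs with unique node labels.
    nodes_*: set of labels; edges_*: set of (label, label) pairs.
    Symmetric difference counts deletions plus insertions; edges
    incident to deleted/inserted nodes are covered by the edge term."""
    node_cost = len(nodes_a ^ nodes_b)
    edge_cost = len(edges_a ^ edges_b)
    return node_cost + edge_cost
```

Because no search over mappings is needed, the cost is dominated by the set operations, consistent with the quadratic bounds stated above.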

14.
Graph edit distance is a powerful and flexible method for error-tolerant graph matching. Yet it can only be calculated for small graphs in practice due to its exponential time complexity when considering unconstrained graphs. In this paper we propose a quadratic time approximation of graph edit distance based on Hausdorff matching. In a series of experiments we analyze the performance of the proposed Hausdorff edit distance in the context of graph classification and compare it with a cubic time algorithm based on the assignment problem. Investigated applications include nearest neighbor classification of graphs representing letter drawings, fingerprints, and molecular compounds as well as hidden Markov model classification of vector space embedded graphs representing handwriting. In many cases, a substantial speedup is achieved with only a minor loss in accuracy or, in one case, even with a gain in accuracy. Overall, the proposed Hausdorff edit distance shows a promising potential in terms of flexibility, efficiency, and accuracy.
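A minimal node-label-only sketch of the Hausdorff-style bound: each node is matched to its cheapest counterpart independently from both sides, with substitution costs halved because they are counted once per side. The full method also incorporates edge structure, which is ignored here:

```python
def hausdorff_edit_distance(labels_a, labels_b,
                            c_sub=1.0, c_del=1.0, c_ins=1.0):
    """Quadratic-time Hausdorff-style lower bound on unit-cost GED,
    restricted to node labels for illustration."""
    def best(x, others, c_rm):
        # cheapest option for x: remove it, or substitute with the
        # closest node on the other side (free if labels are equal)
        return min([c_rm] + [0.0 if x == y else c_sub / 2 for y in others])
    da = sum(best(a, labels_b, c_del) for a in labels_a)
    db = sum(best(b, labels_a, c_ins) for b in labels_b)
    return da + db
```

Dropping the mutual-exclusion constraint of a proper assignment is what brings the complexity down from cubic to quadratic, at the price of a lower bound rather than an exact distance.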

15.
Graph edit distance from spectral seriation
This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.

16.
Detecting duplicate entities in biological data is an important research task. In this paper, we propose a novel and context-sensitive Markov random field-based edit distance (MRFED) for this task. We apply the Markov random field theory to the Needleman–Wunsch distance and combine MRFED with TFIDF, a token-based distance algorithm, resulting in SoftMRFED. We compare SoftMRFED with other distance algorithms such as Levenshtein, SoftTFIDF, and Monge–Elkan for two matching tasks: biological entity matching and synonym matching. The experimental results show that SoftMRFED significantly outperforms the other edit distance algorithms on several test data collections. In addition, the performance of SoftMRFED is superior to token-based distance algorithms in two matching tasks.
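For reference, the Needleman–Wunsch global alignment score that MRFED builds on can be computed with the classic dynamic program (the scoring parameters below are illustrative defaults, not the paper's):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Classic Needleman-Wunsch global alignment score via DP.
    Higher scores mean better alignment; gaps and mismatches are
    penalized by the given (negative) scores."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                           prev[j] + gap,       # gap in b
                           cur[j - 1] + gap))   # gap in a
        prev = cur
    return prev[-1]
```

Unlike Levenshtein distance, which minimizes a cost, this maximizes a similarity score, which is the form the Markov random field reweighting operates on.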

17.
Graph matching and graph edit distance have become important tools in structural pattern recognition. The graph edit distance concept allows us to measure the structural similarity of attributed graphs in an error-tolerant way. The key idea is to model graph variations by structural distortion operations. As one of its main constraints, however, the edit distance requires the adequate definition of edit cost functions, which eventually determine which graphs are considered similar. In the past, these cost functions were usually defined in a manual fashion, which is highly prone to errors. The present paper proposes a method to automatically learn cost functions from a labeled sample set of graphs. To this end, we formulate the graph edit process in a stochastic context and perform a maximum likelihood parameter estimation of the distribution of edit operations. The underlying distortion model is learned using an Expectation Maximization algorithm. From this model we finally derive the desired cost functions. In a series of experiments we demonstrate the learning effect of the proposed method and provide a performance comparison to other models.

18.
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.

19.
An adaptive information integration method
Detecting approximately duplicate records is one of the key tasks in information integration, and although various detection methods have been proposed, string matching algorithms lie at their core. In the proposed adaptive information integration algorithm, a hybrid similarity combining edit distance and token distance is used to measure the similarity between strings. To avoid mismatches caused by differences in expression, strings are split into individual words, which are then sorted by their first character. Word matching tolerates spelling errors and abbreviations to some degree. Experimental results show that the adaptive information integration method achieves higher accuracy than the Smith–Waterman and Jaro distances.
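The word-splitting-plus-tolerant-matching idea can be sketched as follows; this uses Python's `difflib` character similarity in place of the paper's hybrid edit/token distance, so it approximates the approach rather than reproducing the authors' measure:

```python
from difflib import SequenceMatcher

def hybrid_token_similarity(s1, s2, word_threshold=0.8):
    """Sketch (not the paper's exact measure): split into words, sort to
    neutralize word-order differences, then greedily pair words whose
    character-level similarity clears a threshold (tolerating typos);
    the score is the fraction of words successfully paired."""
    w1, w2 = sorted(s1.split()), sorted(s2.split())
    unused = list(w2)
    matched = 0
    for w in w1:
        best = max(unused,
                   key=lambda u: SequenceMatcher(None, w, u).ratio(),
                   default=None)
        if best is not None and SequenceMatcher(None, w, best).ratio() >= word_threshold:
            matched += 1
            unused.remove(best)
    total = max(len(w1), len(w2))
    return matched / total if total else 1.0
```

Sorting the words first is what makes "John Smith" and "Smith John" score as identical despite the different word order, mirroring the sorting step described above.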

20.
Graphs are widely used to model complicated data semantics in many applications in bioinformatics, chemistry, social networks, pattern recognition, etc. A recent trend is to tolerate noise arising from various sources, such as erroneous data entries, and to find similarity matches. In this paper, we study graph similarity queries with edit distance constraints. Inspired by the q-gram idea for string similarity problems, our solution extracts paths from graphs as features for indexing. We establish a lower bound on the number of common features required to generate candidates. Efficient algorithms are proposed to handle three types of graph similarity queries by exploiting both matching and mismatching features as well as degree information to improve the filtering and verification of candidates. We demonstrate through extensive experiments on real and synthetic datasets that the proposed algorithms significantly outperform existing approaches.
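The path-feature extraction and count filtering can be sketched as follows; the `length` parameter, the feature definition (label sequences of simple paths with a fixed number of edges), and the fixed filter threshold are simplified assumptions relative to the paper's q-gram-style features and its derived lower bound:

```python
from collections import Counter

def path_features(adj, labels, length=1):
    """Enumerate label sequences of all simple paths with `length` edges
    (length=1 gives labeled edges), used as q-gram-like features.
    adj: dict node -> list of neighbors; labels: dict node -> label."""
    feats = Counter()
    def walk(v, seq, visited):
        if len(seq) == length + 1:
            feats[tuple(seq)] += 1
            return
        for w in adj.get(v, ()):
            if w not in visited:
                walk(w, seq + [labels[w]], visited | {w})
    for v in labels:
        walk(v, [labels[v]], {v})
    return feats

def passes_count_filter(fa, fb, min_common):
    """A candidate pair survives only if enough features are shared;
    pairs failing the bound are pruned without any edit-distance check."""
    common = sum((fa & fb).values())
    return common >= min_common
```

Cheap feature counting rules out most candidate pairs, so the expensive edit-distance verification runs only on the survivors.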


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号