Similar Documents
20 similar documents found (search time: 31 ms)
1.
Extracting minutiae from a fingerprint is an important step in building an automatic fingerprint identification system. In this paper, the principal graph algorithm proposed by Kegl is used to obtain principal curves, which serve as the skeleton of a fingerprint. Based on the obtained principal curves, a minutiae extraction algorithm is proposed to extract the minutiae of the fingerprint. Experimental results indicate that the principal curves obtained from the principal graph algorithm are smoother than those obtained from a thinning algorithm, and that the proposed algorithm extracts minutiae more effectively.
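
As a point of reference for what reading minutiae off a skeleton involves, here is a minimal sketch of the classical crossing-number rule on a one-pixel-wide binary skeleton. The paper extracts minutiae from principal curves rather than a thinned raster image, so this is illustrative only, and the `skel` input array is an assumption.

```python
import numpy as np

def minutiae_from_skeleton(skel: np.ndarray):
    """Classical crossing-number minutiae detection on a binary,
    one-pixel-wide skeleton (1 = ridge, 0 = background).
    Returns lists of (row, col) ridge endings and bifurcations."""
    endings, bifurcations = [], []
    rows, cols = skel.shape
    # 8-neighbourhood visited in circular order.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if not skel[r, c]:
                continue
            nbrs = [int(skel[r + dr, c + dc]) for dr, dc in offsets]
            # Crossing number: half the number of 0/1 transitions around the pixel.
            cn = sum(abs(nbrs[i] - nbrs[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                endings.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return endings, bifurcations
```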

2.
Learning and design of principal curves
Principal curves have been defined as "self-consistent" smooth curves which pass through the "middle" of a d-dimensional probability distribution or data cloud. They give a summary of the data and also serve as an efficient feature extraction tool. We take a new approach by defining principal curves as continuous curves of a given length which minimize the expected squared distance between the curve and points of the space randomly chosen according to a given distribution. The new definition makes it possible to theoretically analyze principal curve learning from training data, and it also leads to a new practical construction. Our theoretical learning scheme chooses a curve from a class of polygonal lines with k segments and with a given total length to minimize the average squared distance over n training points drawn independently. Convergence properties of this learning scheme are analyzed, and a practical version of this theoretical algorithm is implemented. In each iteration of the algorithm, a new vertex is added to the polygonal line and the positions of the vertices are updated so that they minimize a penalized squared distance criterion. Simulation results demonstrate that the new algorithm compares favorably with previous methods, both in terms of performance and computational complexity, and is more robust to varying data models.
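
The grow-and-refit loop described above can be sketched roughly as follows: start from a segment along the first principal component, alternate vertex-position updates with the insertion of a new vertex on the worst-fitting segment. This is a simplified illustration, not the paper's algorithm: the damped mean update stands in for the penalized squared-distance optimization, and the curvature penalty is omitted.

```python
import numpy as np

def project_to_segments(X, V):
    """Squared distance from each point of X to the polyline with vertices V,
    plus the index of the nearest segment."""
    d2 = np.full(len(X), np.inf)
    seg = np.zeros(len(X), dtype=int)
    for i in range(len(V) - 1):
        a, b = V[i], V[i + 1]
        ab = b - a
        t = np.clip((X - a) @ ab / (ab @ ab), 0.0, 1.0)
        proj = a + t[:, None] * ab
        di = ((X - proj) ** 2).sum(axis=1)
        better = di < d2
        d2[better], seg[better] = di[better], i
    return d2, seg

def polygonal_line(X, k_max=8, inner_iters=10):
    """Grow a k-segment polyline approximating a principal curve of X."""
    # Initialise with a segment along the first principal component.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    t = (X - mu) @ Vt[0]
    V = np.array([mu + t.min() * Vt[0], mu + t.max() * Vt[0]])
    while len(V) - 1 < k_max:
        for _ in range(inner_iters):          # vertex-position update
            d2, seg = project_to_segments(X, V)
            newV = V.copy()
            for j in range(len(V)):
                # Points whose nearest segment touches vertex j pull it toward their mean.
                mask = (seg == j) | (seg == j - 1)
                if mask.any():
                    newV[j] = 0.5 * V[j] + 0.5 * X[mask].mean(axis=0)
            V = newV
        d2, seg = project_to_segments(X, V)
        # Insert a new vertex in the middle of the segment with the largest total error.
        worst = np.bincount(seg, weights=d2, minlength=len(V) - 1).argmax()
        V = np.insert(V, worst + 1, 0.5 * (V[worst] + V[worst + 1]), axis=0)
    return V
```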

3.
Research on approximate subgraph matching in large networks
To reduce the impact of noise on the accuracy of approximate subgraph matching, an improved approximate subgraph matching method is proposed. In the preprocessing stage, a label-weight vector index is built for every vertex of the data graph from its k-nearest-neighbor vertex set. During query processing, two-level filtering based on the weight distance of individual neighbor labels and on the overall matching degree of all neighbor labels generates a vertex candidate set, and spanning-tree matching followed by graph matching locates the query graph in the large network. Experimental results on real data sets show that the method achieves high execution efficiency and matching accuracy.
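
The index construction and the two-level filter could look roughly like the sketch below. The 1/hop-distance weighting, the per-label tolerance, and the coverage threshold are assumptions for illustration, not the paper's actual definitions.

```python
from collections import defaultdict
import networkx as nx

def label_weight_index(G: nx.Graph, labels: dict, k: int = 2):
    """For each vertex, accumulate a label -> weight vector over its k-hop
    neighbourhood, weighting a neighbour at hop distance d by 1/d
    (the weighting scheme here is an assumption for illustration)."""
    index = {}
    for v in G:
        vec = defaultdict(float)
        dists = nx.single_source_shortest_path_length(G, v, cutoff=k)
        for u, d in dists.items():
            if d > 0:
                vec[labels[u]] += 1.0 / d
        index[v] = dict(vec)
    return index

def candidate_vertices(q_vec, index, per_label_tol=0.5, coverage=0.8):
    """Two-level filter: level 1 checks each query label's weight against the
    data vertex's weight within a tolerance; level 2 requires a high enough
    fraction of query labels to be matched overall.  Thresholds are illustrative."""
    cands = []
    for v, vec in index.items():
        matched = sum(1 for lab, w in q_vec.items()
                      if lab in vec and abs(vec[lab] - w) <= per_label_tol)
        if matched >= coverage * len(q_vec):
            cands.append(v)
    return cands
```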

4.
A robust algorithm for the discrete sampling and reconstruction of planar simple closed curves is proposed. The algorithm consists of a sampling stage and a reconstruction stage. The sampling stage first takes points uniformly on the planar closed curve, then computes the shortest distance from each point to the medial axis of the planar region enclosed by the curve, and finally determines the sampling interval from these distances to obtain the sample point set. The reconstruction stage first builds the Delaunay triangulation of the sample points, then selects edges from the resulting triangles to construct an initial graph, and finally modifies this graph to obtain the reconstructed shape. Experiments show that the algorithm produces relatively few sample points that still reflect the local geometric features of the curve, and that the reconstruction represents the shape and course of the original closed curve well.

5.
An image segmentation method is proposed that combines the watershed transform with FCM clustering incorporating spatial information. The method uses a graph-based distance measure that combines regional feature information with spatial information: the small regions produced by the watershed transform become the nodes of a connected weighted graph, and the shortest-path distance between nodes measures the similarity of the corresponding regions, so that over-segmented regions can be merged. Because the method jointly considers differences in region features and in spatial position, it suppresses noise better than conventional FCM clustering performed in feature space. Experimental segmentation results demonstrate the feasibility and effectiveness of the algorithm.
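
A rough sketch of the region-merging idea: watershed regions become graph nodes, edge weights mix a feature difference with a spatial distance, and regions whose shortest-path distance falls below a threshold are merged. The scalar mean-feature representation, the mixing weight `alpha`, and the threshold are illustrative assumptions, not the paper's exact measure.

```python
import numpy as np
import networkx as nx

def merge_regions(regions, edges, alpha=0.5, thresh=1.0):
    """regions: {id: (mean_feature, centroid)}; edges: pairs of adjacent regions.
    Builds the weighted region graph and greedily merges region pairs whose
    shortest-path distance is below the threshold."""
    G = nx.Graph()
    for a, b in edges:
        fa, ca = regions[a]
        fb, cb = regions[b]
        w = alpha * abs(fa - fb) + (1 - alpha) * np.linalg.norm(np.array(ca) - np.array(cb))
        G.add_edge(a, b, weight=w)
    labels = {r: r for r in regions}
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    for a in regions:
        for b, d in dist.get(a, {}).items():
            if a != b and d < thresh:
                # Relabel everything carrying b's label with a's label (simple union).
                root_a, root_b = labels[a], labels[b]
                for r, lab in labels.items():
                    if lab == root_b:
                        labels[r] = root_a
    return labels
```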

6.
H.L. Zou, Y.T. Lee, Computer Aided Design, 2006, 38(12): 1224-1232
This paper introduces a new algorithm for detecting skewed rotational symmetry in a 2D line drawing of a 3D polyhedral object by locating the possibly multiple symmetry axes. The drawing is converted into an edge–vertex graph from which the algorithm finds the faces of the object and the sets of topologically symmetric edges and vertices. It then checks that each set of vertices is geometrically rotationally symmetric by analyzing the distribution of the vertices around the circumference of the best-fit ellipse. The object is skewed rotationally symmetric if the best-fit ellipses of all the vertex sets have parallel axes, equal ratios of major to minor radius, and centers on the axis of rotation. A tolerance is used in the calculation to allow for inaccuracies in the line drawings. A set of experimental results is presented showing that the algorithm works well.

7.
A spectral partitioning algorithm for weighted undirected graphs
For the initial partitioning phase of the multilevel method, a spectral partitioning algorithm for weighted undirected graphs, SPWUG, is proposed, together with implementation details for computing the second-smallest eigenvalue of the Laplacian matrix and its eigenvector by Lanczos iteration. Using the eigenvector associated with the second-smallest Laplacian eigenvalue (the Fiedler vector) to characterize the relative distances between nodes, SPWUG extends Laplacian spectral theory for graph partitioning from unweighted to weighted undirected graphs and produces an initial partition of the coarsest graph. Experiments on the ISPD98 circuit benchmark show that SPWUG achieves a measurable performance improvement. The analysis also indicates that, in the multilevel method, a near-globally-optimal partition of the coarsest graph may only be a locally optimal partition of the original graph.
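
A minimal sketch of spectral bisection with a weighted Laplacian is shown below; SciPy's `eigsh` (ARPACK's Lanczos iteration) stands in for a hand-rolled Lanczos solver, and the median split is one common choice rather than the paper's exact partitioning rule.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_bisection(W: sp.csr_matrix):
    """Bisect a weighted undirected graph given its symmetric weight matrix W,
    using the Fiedler vector of the weighted Laplacian L = D - W."""
    degrees = np.asarray(W.sum(axis=1)).ravel()
    L = sp.diags(degrees) - W
    # Two algebraically smallest eigenpairs; the second eigenvector is the Fiedler vector.
    vals, vecs = eigsh(L.asfptype(), k=2, which="SM")
    fiedler = vecs[:, np.argmax(vals)]
    # Split at the median of the Fiedler vector to obtain a balanced bipartition.
    return fiedler >= np.median(fiedler)
```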

8.
When discrete data points are fitted iteratively, the parameterization of the data points affects both the quality and the speed of the approximation. A method is therefore proposed that alternately optimizes the control points and the data-point parameters; it converges quickly, and the fitted curve adheres more closely to the data. First, initial control points are chosen and optimized with an adaptive BFGS method to obtain a fitted curve; next, with the control points fixed, the parameters associated with the data points are optimized by a step-length acceleration method; finally, the control points are re-optimized with the new parameter values to obtain a new fitted curve. Numerical examples show that, in the early iterations, the method converges faster than existing control-point-based iterative methods, and the optimized curve approximates the discrete data points more closely, with a smaller fitting error.
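
The alternation could be prototyped roughly like this: a cubic Bézier stands in for the paper's curve model, SciPy's plain BFGS for the adaptive BFGS, and a bounded scalar search for the step-length acceleration method, so this is only a sketch of the alternating structure, not the paper's algorithm.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def bezier(t, P):
    """Cubic Bezier point(s) for parameter(s) t and control points P (4 x 2)."""
    t = np.atleast_1d(t)[:, None]
    return ((1 - t) ** 3 * P[0] + 3 * (1 - t) ** 2 * t * P[1]
            + 3 * (1 - t) * t ** 2 * P[2] + t ** 3 * P[3])

def fit_curve(X, rounds=5):
    """Alternating fit: optimise the control points with BFGS for fixed data-point
    parameters, then re-optimise each parameter for fixed control points."""
    t = np.linspace(0, 1, len(X))            # uniform initial parameters
    P = np.linspace(X[0], X[-1], 4)          # initial control points on a line
    for _ in range(rounds):
        def err(flatP):
            return np.sum((bezier(t, flatP.reshape(4, 2)) - X) ** 2)
        P = minimize(err, P.ravel(), method="BFGS").x.reshape(4, 2)
        t = np.array([minimize_scalar(lambda s: np.sum((bezier(s, P)[0] - x) ** 2),
                                      bounds=(0.0, 1.0), method="bounded").x
                      for x in X])
    return P, t
```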

9.
Montage mesh fusion
刘刚  金小刚  冯结青  彭群生 《软件学报》2003,14(8):1425-1432
3D object fusion is a new geometric modeling technique that uses cut-and-paste operations between 3D models to smoothly blend a new model out of two or more existing ones. A new montage fusion method for 3D meshes based on local harmonic mapping is proposed. First, an approximate isoline algorithm on the mesh extracts the regions to be fused; the two regions are then parameterized by harmonic mapping with interior holes; finally, smooth fusion of the meshes is achieved through topological merging and fusion control. Compared with the previous fusion method based on global harmonic mapping, the new method is much more efficient, and its solution time no longer grows exponentially with the number of vertices of the fused models; it reduces the probability of singular cases during 2D mesh topological merging and thus improves stability; the details of the cut mesh are fully preserved; and the original method's restriction on the topology of the fusion region is removed. Experimental results show that the method can generate the exaggerated modeling effects often seen in 3D animation and is useful for film and television animation.

10.
Piecewise linear skeletonization using principal curves
An algorithm is proposed to find piecewise linear skeletons of handwritten characters by using principal curves. The development of the method was inspired by the apparent similarity between the definition of principal curves (smooth curves which pass through the "middle" of a cloud of points) and medial axes (smooth curves that run equidistantly from the contours of a character image). The central fitting-and-smoothing step of the algorithm is an extension of the polygonal line algorithm, which approximates principal curves of data sets by piecewise linear curves. The polygonal line algorithm is extended to find principal graphs and complemented with two steps specific to the task of skeletonization: an initialization method to capture the approximate topology of the character, and a collection of restructuring operations to improve the structural quality of the skeleton produced by the initialization method. An advantage of our approach over existing methods is that we optimize the skeleton graph by minimizing an intuitive and explicit objective function that captures the two competing criteria of smoothing the skeleton and fitting it closely to the pixels of the character image. We tested the algorithm on isolated handwritten digits and images of continuous handwriting. The results indicate that the proposed algorithm can find a smooth medial axis in the great majority of a wide variety of character templates and that it substantially improves the pixel-wise skeleton obtained by traditional thinning methods.
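
To make the "fit plus smoothness" trade-off concrete, here is a toy objective: the fit term is the mean squared distance of character pixels to the nearest skeleton edge, and the smoothness term penalizes sharp turns at degree-2 vertices. Both terms and the weight `lam` are illustrative assumptions, not the paper's exact objective function.

```python
import numpy as np

def skeleton_objective(pixels, vertices, edges, lam=0.1):
    """pixels: list of 2D points; vertices: list of 2D vertex positions;
    edges: list of (i, j) index pairs describing the skeleton graph."""
    def seg_dist2(p, a, b):
        # Squared distance from point p to segment a-b.
        ab, ap = b - a, p - a
        t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
        return float(((p - (a + t * ab)) ** 2).sum())
    # Fit term: average squared distance of each pixel to its nearest edge.
    fit = np.mean([min(seg_dist2(p, vertices[i], vertices[j]) for i, j in edges)
                   for p in pixels])
    # Smoothness term: penalise sharp turns at degree-2 vertices.
    smooth = 0.0
    for v in range(len(vertices)):
        nbrs = [j for i, j in edges if i == v] + [i for i, j in edges if j == v]
        if len(nbrs) == 2:
            u1 = vertices[nbrs[0]] - vertices[v]
            u2 = vertices[nbrs[1]] - vertices[v]
            cosang = (u1 @ u2) / (np.linalg.norm(u1) * np.linalg.norm(u2) + 1e-12)
            smooth += (1.0 + cosang) ** 2     # zero when the path is straight
    return fit + lam * smooth
```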

11.
Coloring large graphs based on independent set extraction
This paper presents an effective approach (EXTRACOL) to coloring large graphs. The proposed approach uses a preprocessing method to extract large independent sets from the graph and a memetic algorithm to color the residual graph. Each preprocessing application identifies, with a dedicated tabu search algorithm, a number of pairwise disjoint independent sets of a given size in order to maximize the vertices removed from the graph. We evaluate EXTRACOL on the 11 largest graphs (with 1000 to 4000 vertices) of the DIMACS challenge benchmarks and show improved results for four very difficult graphs (DSJC1000.9, C2000.5, C2000.9, C4000.5). The behavior of the proposed algorithm is also analyzed.
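
An EXTRACOL-flavoured sketch of "extract independent sets, then colour the residual graph" is shown below. The real method finds several pairwise disjoint independent sets of a chosen size with tabu search and colours the residual graph with a memetic algorithm; here networkx's randomized `maximal_independent_set` and greedy colouring stand in.

```python
import networkx as nx

def extract_then_color(G: nx.Graph, rounds: int = 3, seed: int = 0):
    """Repeatedly pull out maximal independent sets (one colour class each),
    then colour the residual graph greedily."""
    H = G.copy()
    colour, next_colour = {}, 0
    for r in range(rounds):
        if H.number_of_nodes() == 0:
            break
        iset = nx.maximal_independent_set(H, seed=seed + r)
        for v in iset:                      # one colour class per extracted set
            colour[v] = next_colour
        next_colour += 1
        H.remove_nodes_from(iset)
    # Colour the residual graph; offset its colours past the extracted classes.
    residual = nx.coloring.greedy_color(H, strategy="largest_first")
    for v, c in residual.items():
        colour[v] = next_colour + c
    return colour
```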

12.
In this paper, we address the problem of visual instance mining, which is to automatically discover frequently appearing visual instances from a large collection of images. We propose a scalable mining method by leveraging the graph structure with images as vertices. Different from most existing approaches that focus on either instance-level similarities or image-level context properties, our method captures both kinds of information. In the proposed framework, the instance-level information is integrated during the construction of a sparse instance graph based on the similarity between augmented local features, while the image-level context is explored with a greedy breadth-first search algorithm to discover clusters of visual instances from the graph. This framework can tackle the challenges brought by small visual instances, diverse intra-class variations, and noise in large-scale image databases. To further improve the robustness, we integrate two techniques into the basic framework. First, to better cope with the increasing noise of large databases, weak geometric consistency is adopted to efficiently combine the geometric information of local matches into the construction of the instance graph. Second, we propose the layout embedding algorithm, which leverages an algorithm originally designed for graph visualization to fully explore the image database structure. The proposed method was evaluated on four annotated data sets with different characteristics, and experimental results showed its superiority over state-of-the-art algorithms on all data sets. We also applied our framework to a one-million-image Flickr data set and demonstrated its scalability.
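
The cluster-discovery step can be illustrated with a simple breadth-first traversal of a sparse instance graph, as in the sketch below. The real framework scores and expands clusters more carefully; this only groups connected instances and keeps clusters above an assumed size threshold.

```python
from collections import deque

def bfs_instance_clusters(graph, min_size=3):
    """graph: adjacency dict, node -> iterable of similar nodes.
    Returns connected clusters of instances discovered breadth-first."""
    visited, clusters = set(), []
    for start in graph:
        if start in visited:
            continue
        queue, cluster = deque([start]), []
        visited.add(start)
        while queue:
            v = queue.popleft()
            cluster.append(v)
            for u in graph[v]:
                if u not in visited:
                    visited.add(u)
                    queue.append(u)
        if len(cluster) >= min_size:
            clusters.append(cluster)
    return clusters
```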

13.
This paper presents a fuzzy clustering algorithm for the extraction of a smooth curve from unordered noisy data. In this method, the input data are first clustered into different regions using the fuzzy c-means (FCM) algorithm, and each region is represented by its cluster center. Neighboring cluster centers are linked to produce a graph according to the average class membership values. Loops in the graph are removed to form a curve according to the spatial relations of the cluster centers. The input samples are then reclustered using the FCM algorithm, with the constraint that the curve must be smooth. The method has been tested with both open and closed curves with good results.
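
For reference, a plain fuzzy c-means routine is sketched below; the linking of neighboring centers and the smoothness-constrained re-clustering described in the abstract are not reproduced here.

```python
import numpy as np

def fuzzy_cmeans(X, c=10, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        # Centre update: weighted mean with fuzzified memberships.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Membership update from distances to the centres.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U
```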

14.
15.
Reducing the dimensionality of high-dimensional data with principal component analysis can effectively improve the performance of clustering algorithms, but it tends to discard components that contribute to clustering. To retain those components while reducing dimensionality, an iterative algorithm is proposed in which dimensionality reduction and clustering interact. Each iteration can be formulated as a constrained optimization problem whose analytical solution can be derived, yielding an iterative clustering algorithm referred to as document clustering based on constrained principal component analysis. Experimental results on the Reuter21578 and WebKB document collections show that the method performs better than k-means clustering, nonnegative matrix factorization clustering, and spectral clustering.
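
A generic sketch of interleaving dimensionality reduction with clustering follows: cluster in the reduced space, refit the projection using the current cluster labels, and repeat. The paper derives a closed-form constrained-PCA update; here an LDA projection guided by the current labels stands in, so this only illustrates the alternating structure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def iterative_dr_clustering(X, n_clusters=5, n_components=4, iters=5, seed=0):
    """Alternate k-means clustering in a low-dimensional space with refitting
    the projection from the current cluster labels."""
    # Start from a plain PCA-like projection via SVD.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T
    labels = None
    for _ in range(iters):
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=seed).fit_predict(Z)
        lda = LinearDiscriminantAnalysis(
            n_components=min(n_components, n_clusters - 1))
        Z = lda.fit_transform(Xc, labels)   # projection guided by the clusters
    return labels
```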

16.
To address the inefficiency of the MapReduce computing model on the Hadoop cloud platform when processing graph data, a Pregel-like graph processing framework, MyBSP, is proposed. First, the execution mechanism of MapReduce and its shortcomings are analyzed; second, the structure, workflow, and main interfaces of the MyBSP framework are described; finally, based on an analysis of the PageRank graph algorithm, a PageRank implementation on the MyBSP framework is designed and realized. Experimental results show that, compared with the MapReduce-based algorithm, the MyBSP-based graph processing algorithm improves iterative processing performance by a factor of 1.9 to 3 and reduces execution time by 67%, meeting the needs of efficient graph data processing.
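
A vertex-centric PageRank in BSP supersteps, in the spirit of Pregel/MyBSP, can be sketched as below; this is a single-process illustration of the superstep/message pattern, not the MyBSP API itself.

```python
def bsp_pagerank(adj, num_supersteps=30, d=0.85):
    """adj: {vertex: [out-neighbours]}.  In each superstep every vertex combines
    incoming messages, updates its rank, and sends rank/out_degree to its
    out-neighbours; the message-buffer swap plays the role of the barrier."""
    n = len(adj)
    rank = {v: 1.0 / n for v in adj}
    messages = {v: [] for v in adj}
    for step in range(num_supersteps):
        new_messages = {v: [] for v in adj}
        for v in adj:
            if step > 0:                         # combine messages from the last superstep
                rank[v] = (1 - d) / n + d * sum(messages[v])
            if adj[v]:                           # scatter rank to out-neighbours
                share = rank[v] / len(adj[v])    # dangling vertices simply keep their rank
                for u in adj[v]:
                    new_messages[u].append(share)
        messages = new_messages                  # barrier: swap message buffers
    for v in adj:                                # final combine after the last superstep
        rank[v] = (1 - d) / n + d * sum(messages[v])
    return rank
```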

17.
高宏屹  张曦煌  王杰 《计算机工程》2021,47(2):60-68,76
To address the inability of current link prediction algorithms to preserve the high-order structural features of a network graph, a generative-adversarial hierarchical network representation learning algorithm is proposed. Based on the first-order and second-order proximity of the network, edges are collapsed and vertices merged recursively to form a hierarchy of progressively smaller sub-networks. The smallest sub-network is preprocessed with the Node2vec algorithm, and the result is fed into a generative adversarial network (EmbedGAN) model to learn low-dimensional vector representations of its vertices; these representations are then passed to the EmbedGAN model of the next level up as the low-dimensional vector representations of that level's vertices. Learning proceeds level by level in this way back up to the original network graph, yielding low-dimensional vector representations of its vertices. Link prediction experiments on real network data sets from several domains show that the algorithm outperforms the LP, Katz, and LINE algorithms in both accuracy and stability.

18.
To address the redundant computation in the reachability query preserving graph compression (QPGC) algorithm, a high-performance compression strategy is proposed. In the stage that computes the ancestor and descendant sets of each vertex, a topological-sort-based algorithm, TSB, is proposed for general graph data: the vertices are first topologically sorted, and the ancestor (descendant) sets are then computed along the topological order (its reverse), avoiding the redundant computation caused by an unspecified evaluation order. For graphs whose longest paths are short, an aggregation-based algorithm, AGGB, is proposed that computes the ancestor and descendant sets within a fixed number of aggregation operations. In the stage that computes reachability equivalence classes, a piecewise statistics pruning algorithm, PSP, is proposed: the ancestor and descendant sets are first summarized piecewise, and the statistics are compared for coarse matching, pruning some of the unnecessary fine-grained matching. Experimental results show that, compared with QPGC, TSB and AGGB improve performance in the ancestor/descendant stage by 94.22% and 90.00% on average across data sets; PSP improves performance in the equivalence-class stage by more than 70% on most data sets; and as the data sets grow, TSB and AGGB combined with PSP improve performance by nearly a factor of 28. Theoretical analysis and simulation experiments show that the strategy performs less redundant computation and compresses faster than QPGC.
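
The idea of fixing the evaluation order up front can be illustrated by computing ancestor and descendant sets with a single sweep along a topological order (and its reverse). The sketch assumes the input is already a DAG; it illustrates the order-driven computation rather than the TSB or AGGB algorithms themselves.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def ancestor_descendant_sets(edges):
    """edges: iterable of (u, v) pairs of a DAG.  Returns {v: ancestors}, {v: descendants}."""
    succ, pred, nodes = defaultdict(set), defaultdict(set), set()
    for u, v in edges:
        succ[u].add(v)
        pred[v].add(u)
        nodes.update((u, v))
    order = list(TopologicalSorter({v: pred[v] for v in nodes}).static_order())
    anc = {v: set() for v in nodes}
    desc = {v: set() for v in nodes}
    for v in order:                       # forward sweep: predecessors are already done
        for p in pred[v]:
            anc[v] |= anc[p] | {p}
    for v in reversed(order):             # backward sweep: successors are already done
        for s in succ[v]:
            desc[v] |= desc[s] | {s}
    return anc, desc
```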

19.
This paper presents a new solver for the exactly one-in-a-set Generalized Traveling Salesman Problem (GTSP). In the GTSP, we are given as input a complete directed graph with edge weights, along with a partition of the vertices into disjoint sets. The objective is to find a cycle (or tour) in the graph that visits each set exactly once and has minimum length. In this paper we present an effective algorithm for the GTSP based on adaptive large neighborhood search. The algorithm operates by repeatedly removing from, and inserting vertices in, the tour. We propose a general insertion mechanism that contains as special cases the well-known nearest, farthest and random insertion mechanisms. We provide extensive benchmarking results for our solver in comparison to the state-of-the-art on a wide range of existing and new problem libraries. We show that on the GTSP-LIB library, the proposed algorithm is competitive with the best known algorithms. On several other libraries, we show that given the same amount of time, the proposed solver finds higher quality solutions than existing approaches, particularly on harder instances that are non-metric and/or whose sets are not clustered.
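
A very small destroy-and-repair loop for the one-in-a-set GTSP is sketched below, assuming a distance lookup `dist[u][v]` defined for all vertex pairs. The adaptive operator weights and the general insertion mechanism of the paper are omitted; cheapest insertion stands in for them.

```python
import random

def tour_len(tour, dist):
    if len(tour) < 2:
        return 0.0
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def alns_gtsp(sets, dist, iters=2000, remove_k=2, seed=0):
    """sets: list of vertex lists, one vertex of each must be visited.
    Start from a random tour, then repeatedly remove a few sets from the tour,
    re-insert the cheapest vertex of each at its cheapest position, and keep
    improvements."""
    rng = random.Random(seed)
    tour = [rng.choice(s) for s in sets]
    set_of = {v: i for i, s in enumerate(sets) for v in s}
    best = tour_len(tour, dist)
    for _ in range(iters):
        cand = tour[:]
        removed_sets = [set_of[cand.pop(rng.randrange(len(cand)))]
                        for _ in range(remove_k)]          # destroy step
        for si in removed_sets:                            # repair: cheapest vertex, cheapest slot
            best_cost, best_v, best_pos = float("inf"), None, None
            for v in sets[si]:
                for pos in range(len(cand) + 1):
                    trial = cand[:pos] + [v] + cand[pos:]
                    c = tour_len(trial, dist)
                    if c < best_cost:
                        best_cost, best_v, best_pos = c, v, pos
            cand = cand[:best_pos] + [best_v] + cand[best_pos:]
        c = tour_len(cand, dist)
        if c < best:
            tour, best = cand, c
    return tour, best
```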

20.
王真  曹立明 《计算机科学》2007,34(2):227-229
The principal curve is an effective tool for data compression and feature extraction and is a nonlinear generalization of principal component analysis. Because of the close relationship between principal curves and principal components, principal curve algorithms usually take the first principal component line as the initial value. Experiments show, however, that the first principal component is not necessarily the best choice for initialization. Taking the HS algorithm and the polygonal line algorithm as examples, this paper analyzes how the choice of initial value affects the resulting principal curve and concludes from experiments that the HS algorithm performs better when initialized at the origin, while the polygonal line algorithm should choose its initial value according to the structure of the data set.
