Similar Documents
20 similar documents found (search time: 203 ms)
1.
A simple and effective method is presented for representing and computing projective and permutation invariants of discrete finite point sets; such invariants have important applications in computer vision and pattern recognition. First, a projective and permutation invariant of four points on a projective line is derived from a symmetric function; this invariant equals one of the original cross-ratio values of the four points, so it is cheap to compute and loses no discriminating power. Then, combining this simple symmetric function with the elementary polynomial symmetric functions, two functionally independent projective and permutation invariants of five points in the plane, and three functionally independent projective and permutation invariants of six points in space, are derived.
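For illustration, the sketch below computes the cross-ratio of four collinear points and one classical symmetric function of it (the j-type invariant), which is unchanged under all 24 orderings of the points. This is a generic example of a projective-and-permutation invariant, not the specific low-cost symmetric function derived in the paper; the point coordinates are made up.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by 1D coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def permutation_invariant(a, b, c, d):
    """Classical j-type symmetric function of the cross-ratio: permuting the four
    points changes the cross-ratio, but not this value."""
    lam = cross_ratio(a, b, c, d)
    return (lam**2 - lam + 1)**3 / (lam**2 * (lam - 1)**2)

pts = [0.0, 1.0, 3.0, 7.0]
print(permutation_invariant(*pts))
print(permutation_invariant(pts[2], pts[0], pts[3], pts[1]))  # same value, reordered points
```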

2.
Research on planar fragment matching algorithms
Based on an analysis of the geometric properties of planar curves, an algorithm is proposed for matching irregular planar boundary curves using curvature and other invariants. The method finds initial matching points by extracting and matching corner points of the irregular planar curves, and then matches the curves using the geometric property that curvatures at corresponding points are equal or equivalent. The feasibility of the method is demonstrated both theoretically and experimentally.
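As a rough illustration of a curvature measure on a sampled planar curve, the sketch below estimates discrete curvature from three consecutive samples (Menger curvature, the reciprocal of the circumradius). This is one common estimator, not necessarily the one used by the authors; high-curvature samples would serve as corner candidates.

```python
import numpy as np

def menger_curvature(p0, p1, p2):
    """Discrete curvature at p1 from three consecutive curve samples:
    4 * triangle area / product of the three side lengths."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    u, v = p1 - p0, p2 - p0
    area2 = abs(u[0] * v[1] - u[1] * v[0])            # twice the triangle area
    d = (np.linalg.norm(p1 - p0) * np.linalg.norm(p2 - p1) * np.linalg.norm(p2 - p0))
    return 2.0 * area2 / d if d > 0 else 0.0

def curvature_profile(points):
    """Curvature at every interior sample; peaks indicate corner candidates."""
    return [menger_curvature(points[i - 1], points[i], points[i + 1])
            for i in range(1, len(points) - 1)]
```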

3.
In stereogram generation based on intersection testing, improving computational speed is an important problem. First, the spatial triangles and spatial rays are mapped to planar bounding boxes and 2D points respectively, reducing the dimensionality of the problem. Then, based on the coordinates of the current scan point, a small number of potentially intersecting triangles are screened out of all the triangular facets of the model, and the 3D intersection test is performed only on the screened triangles. With this method, the speed of intersection-test-based stereogram generation is improved significantly without affecting the quality of the stereogram.
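A minimal sketch of the screening step, assuming the rays are cast along the z-axis so that dropping z projects each triangle to a 2D bounding box; the function names and the ray direction are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def triangle_bbox_2d(tri):
    """Axis-aligned 2D bounding box of a 3D triangle projected onto the xy-plane
    (assuming the rays are cast along the z-axis)."""
    xy = np.asarray(tri, dtype=float)[:, :2]
    return xy.min(axis=0), xy.max(axis=0)

def candidate_triangles(point_xy, triangles):
    """Screening step: keep only triangles whose projected bounding box contains
    the current scan point; the exact 3D intersection test then runs on this subset."""
    px, py = point_xy
    keep = []
    for tri in triangles:
        (xmin, ymin), (xmax, ymax) = triangle_bbox_2d(tri)
        if xmin <= px <= xmax and ymin <= py <= ymax:
            keep.append(tri)
    return keep
```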

4.
To improve the slicing efficiency of STL models and save system resources, a fast slicing algorithm based on adjacency sorting of layers is proposed. Triangle adjacency relations are built with an adjacency-insertion method; the slice planes that intersect each triangle are determined inversely from the maximum and minimum of the projections of the triangle's vertices onto the slicing direction; and, by analysing the position of the common edge of two adjacent triangles relative to the slice plane, an intersection-point linked list is built in adjacency order. Compared with existing slicing algorithms that extract the full topological information of the STL model and with the grouped-matrix slicing algorithm, the proposed algorithm extracts no global topological information and performs no grouping or sorting of triangles; instead, the ordering of the triangles is embedded in the intersection-point linked list, which saves system resources and improves slicing efficiency. A slicing example of a shell part verifies the feasibility and efficiency of the algorithm.
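The inverse lookup from a triangle's extent along the slicing direction to the slice planes that cross it can be sketched as below, assuming uniformly spaced slice planes z = z0 + k*dz; the names and numbers are illustrative only.

```python
import math

def slice_indices(tri_z, z0, dz):
    """Indices k of the slice planes z = z0 + k*dz that cross a triangle whose
    vertex coordinates project to tri_z along the slicing direction."""
    zmin, zmax = min(tri_z), max(tri_z)
    k_lo = math.ceil((zmin - z0) / dz)
    k_hi = math.floor((zmax - z0) / dz)
    return range(k_lo, k_hi + 1)

# Illustrative numbers: a triangle spanning z in [2.3, 5.1], layer thickness 0.5
print(list(slice_indices([2.3, 5.1, 4.0], z0=0.0, dz=0.5)))   # -> layers 5 .. 10
```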

5.
Point pattern matching is an important problem in computer vision and pattern recognition. Under the assumption that three pairs of corresponding points are known between the two point patterns to be matched, and based on projective coordinates and the p2-invariant, which is invariant simultaneously under projective transformations and permutations, a generalized distance is defined and a new algorithm is given for matching two planar point patterns with unequal numbers of points under perspective transformation. Theoretical analysis and simulation experiments show that the algorithm is fast and effective.

6.
A fast intersection test for triangle pairs
To improve the response speed of collision detection, an improved algorithm based on Ayellet's algorithm is proposed. Working algebraically, the algorithm first quickly rejects the two cases in which the triangle pair cannot intersect or is coplanar, then computes the segment along which each triangle crosses the supporting plane of the other, and finally checks whether these two segments share a common point. If they do, the triangles intersect; otherwise they do not. The algorithm can also be applied to similar problems, such as intersection tests for pairs of rectangles or polygons. Experimental results show that the algorithm is faster than the original one.
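A minimal sketch of the quick-rejection stage only: if all vertices of one triangle lie strictly on one side of the other triangle's supporting plane, the pair cannot intersect. This is the standard early-out used in triangle-pair tests, not the full segment-overlap test described in the abstract.

```python
import numpy as np

def signed_distances(tri_a, tri_b):
    """Signed distances of tri_a's vertices to the supporting plane of tri_b."""
    b0, b1, b2 = (np.asarray(v, dtype=float) for v in tri_b)
    n = np.cross(b1 - b0, b2 - b0)                     # normal of tri_b's plane
    return np.array([np.dot(n, np.asarray(v, dtype=float) - b0) for v in tri_a])

def quick_reject(tri_a, tri_b, eps=1e-12):
    """Early-out: if all vertices of one triangle lie strictly on the same side of
    the other triangle's plane, the pair cannot intersect."""
    for p, q in ((tri_a, tri_b), (tri_b, tri_a)):
        d = signed_distances(p, q)
        if np.all(d > eps) or np.all(d < -eps):
            return True     # definitely disjoint
    return False            # possibly intersecting; run the full segment test
```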

7.
Based on the ideas of the triangulation-growth algorithm and the divide-and-conquer algorithm, a triangular mesh reconstruction algorithm for scattered points in a planar domain is proposed and implemented. The algorithm first partitions the scattered point set using divide-and-conquer; then, after determining an initial triangle from four extreme points, it constructs new triangles according to an edge-expansion rule, so that the mesh keeps growing outward until every expandable edge has formed a triangle, finally producing a triangular mesh of the whole scattered point set.

8.
Feature point extraction from point clouds based on local reconstruction
To extract feature information from point cloud data effectively, a robust feature point extraction method based on local reconstruction is proposed for scattered point clouds sampled from piecewise smooth surfaces. First, a feature measure is computed for each data point by covariance analysis of its local neighborhood, and an initial set of feature points is obtained by threshold filtering. Then, within the local neighborhood of each initial feature point, a set of triangles that does not cross feature regions is constructed to reflect the local feature information at that point. Next, the normals of the constructed triangles are clustered with a shared-nearest-neighbor algorithm, yielding a classification of the data points in the local region. Finally, a plane is fitted to each class of points, and feature points are extracted by checking whether a point lies on several of these planes simultaneously. Experimental results show that the method is simple and stable, insensitive to the size of the chosen local neighborhood, and somewhat robust to noise; it extracts salient features effectively while preserving relatively weak features as far as possible.
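The covariance-based feature measure can be sketched as the "surface variation" of a local neighborhood (smallest eigenvalue over the trace of the covariance matrix), a common choice in the point cloud literature; the exact measure and the threshold used in the paper may differ.

```python
import numpy as np

def surface_variation(neighborhood):
    """sigma = lambda_0 / (lambda_0 + lambda_1 + lambda_2), where lambda_0 is the
    smallest eigenvalue of the neighborhood's covariance matrix. Large values
    indicate points near edges or corners rather than on a smooth patch."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / len(pts)
    eigvals = np.linalg.eigvalsh(cov)        # ascending order
    return eigvals[0] / eigvals.sum()

def initial_feature_points(points, neighborhoods, threshold=0.05):
    """Threshold filtering of the per-point feature measure (threshold is illustrative)."""
    return [p for p, nb in zip(points, neighborhoods)
            if surface_variation(nb) > threshold]
```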

9.
Because a Delaunay triangulation of an arbitrary planar point set yields a triangular network that is globally optimal, the method has received wide attention. The study finds, however, that the commonly used Delaunay triangulation algorithms for arbitrary 2D domains [1,2] are flawed: during the selection of candidate points for forming Delaunay triangles, a candidate point can suffer a "position violation" error, i.e. the candidate node list may contain a point that, by the algorithm's criterion, conditionally qualifies as a vertex of a Delaunay triangle, yet using this point to form the Delaunay…
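For context, Delaunay candidate selection is usually built on the empty-circumcircle test; a standard determinant form of that predicate is sketched below (positive when d lies inside the circumcircle of the counter-clockwise triangle a, b, c). This is the textbook predicate, not the flawed candidate-selection procedure being criticized.

```python
import numpy as np

def in_circumcircle(a, b, c, d):
    """Incircle predicate: > 0 when d is strictly inside the circumcircle of the
    counter-clockwise oriented triangle (a, b, c)."""
    ax, ay = a; bx, by = b; cx, cy = c; dx, dy = d
    m = np.array([
        [ax - dx, ay - dy, (ax - dx)**2 + (ay - dy)**2],
        [bx - dx, by - dy, (bx - dx)**2 + (by - dy)**2],
        [cx - dx, cy - dy, (cx - dx)**2 + (cy - dy)**2],
    ])
    return np.linalg.det(m)
```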

10.
3D invariants, as feature quantities that do not change with pose, viewpoint, or other imaging conditions, can be widely applied in many areas of computer vision. By analysing the various ways in which the 2D projective transformation matrix can be solved, the approach is extended from one based purely on point correspondences to methods using point sets, line sets, and combinations of points and lines, which broadens the conditions under which the correspondence between two projective planes can be established. On this basis, a method is proposed for constructing virtual elements from various point-line combinations; by combining real and virtual elements, a variety of 3D invariants of complex spatial structures can be extracted and used for object recognition and description. Experimental results verify the effectiveness of the method.
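One standard way to solve the 2D projective transformation matrix from point correspondences is the direct linear transform (DLT); a minimal sketch follows. The paper extends beyond pure point correspondences to lines and point-line combinations, which this sketch does not cover.

```python
import numpy as np

def homography_from_points(src, dst):
    """Direct linear transform (DLT): estimate the 3x3 projective matrix H with
    dst ~ H @ src (up to scale) from at least four point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # right singular vector of the smallest singular value
```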

11.
In this paper, we derive new geometric invariants for structured 3D points and lines from a single image under projective transformation, and we propose a novel model-based 3D object recognition algorithm using them. Based on the matrix representation of the transformation between space features (points and lines) and the corresponding projected image features, new geometric invariants are derived via the determinant ratio technique. First, an invariant for six points on two adjacent planes is derived, which is shown to be equivalent to Zhu's result [1], but in a simpler formulation. Then, two new geometric invariants for structured lines are investigated: one for five lines on two adjacent planes and the other for six lines on four planes. Using the derived invariants, a novel 3D object recognition algorithm is developed, in which a hashing technique with thresholds and multiple invariants per model is employed to overcome the over-invariant and false-alarm problems. Simulation results on real images show that the derived invariants remain stable even in a noisy environment and that the proposed 3D object recognition algorithm is quite robust and accurate.

12.
Functions of moments of 2D images that are invariant under certain changes are important in image analysis and pattern recognition. One of the most basic changes to a 2D image is geometric change. Two images of the same plane taken from different viewpoints are related by a projective transformation. Unfortunately, it is well known that geometric moment invariants for projective transformations do not exist in general. Yet if we generalize the standard definition of the geometric moments and utilize some additional information from the images, certain types of projective invariants of 2D images can be derived. This paper first defines the co-moment as a moment-like function of an image that contains two reference points. Then a set of functions of co-moments that is invariant under general projective transformations is derived. The invariants are simple and in explicit form. Experimental results validate the mathematical derivations.
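For reference, the standard geometric moment that the co-moments generalize is m_pq = Σ_x Σ_y x^p y^q I(x, y); a direct sketch, with an assumed column-to-x / row-to-y convention, is:

```python
import numpy as np

def raw_moment(image, p, q):
    """Standard geometric moment m_pq = sum_x sum_y x^p * y^q * I(x, y);
    here x indexes columns and y indexes rows of the image array."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return float(np.sum((xs ** p) * (ys ** q) * image))
```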

13.
There are three projective invariants of a set of six points in general position in space. It is well known that these invariants cannot be recovered from one image; however, an invariant relationship does exist between space invariants and image invariants. This invariant relationship is first derived for a single image. Then this invariant relationship is used to derive the space invariants when multiple images are available. This paper establishes that the minimum number of images for computing these invariants is three, and that the computation of invariants of six points from three images can have as many as three solutions. Algorithms are presented for computing these invariants in closed form. The accuracy and stability with respect to image noise, selection of the triplets of images, and distance between viewing positions are studied through both real and simulated images. Applications of these invariants are also presented. Both the results of Faugeras (1992) and Hartley et al. (1992) for projective reconstruction and Sturm's method (1869) for epipolar geometry determination from two uncalibrated images with at least seven points are extended to the case of three uncalibrated images with only six points.

14.
The variation, with respect to view, of 2D features defined for projections of 3D point sets and line segments is studied. It is established that general-case view invariants do not exist for any number of points, given true perspective, weak perspective, or orthographic projection models. Feature variation under the weak perspective approximation is then addressed. Though there are no general-case weak-perspective invariants, there are special-case invariants of practical importance. Those cited in the literature are derived from linear dependence relations and the invariance of this type of relation under linear transformation. The variation with respect to view is studied for an important set of 2D line segment features: the relative orientation, size, and position of one line segment with respect to another. The analysis includes an evaluation criterion for feature utility in terms of view variation. This relationship is a function of both the feature and the particular configuration of 3D line segments. The use of this information in object recognition is demonstrated for difficult discrimination tasks.
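A plausible reading of the studied 2D features — the relative orientation, size, and position of one segment with respect to another — is sketched below; the particular parameterization (b's midpoint expressed in a frame attached to a, lengths normalized by |a|) is an assumption for illustration, not the paper's exact definition.

```python
import numpy as np

def relative_segment_features(seg_a, seg_b):
    """Relative orientation, size, and position of segment b with respect to
    segment a, expressed in a frame attached to a."""
    a0, a1 = (np.asarray(p, dtype=float) for p in seg_a)
    b0, b1 = (np.asarray(p, dtype=float) for p in seg_b)
    da, db = a1 - a0, b1 - b0
    rel_orientation = np.arctan2(db[1], db[0]) - np.arctan2(da[1], da[0])
    rel_size = np.linalg.norm(db) / np.linalg.norm(da)
    # b's midpoint in a's frame: origin a0, x-axis along a, unit length = |a|
    mid = (b0 + b1) / 2 - a0
    ux = da / np.linalg.norm(da)
    uy = np.array([-ux[1], ux[0]])
    rel_position = np.array([mid @ ux, mid @ uy]) / np.linalg.norm(da)
    return rel_orientation, rel_size, rel_position
```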

15.
In this paper, we address the exact computation of the Delaunay graph (or quasi-triangulation) and the Voronoi diagram of spheres using Wu's algorithm. Our main contributions are, first, a methodology for the automated derivation of invariants of the Delaunay empty-circumsphere predicate for spheres and of the Voronoi vertex of four spheres, and second, the application of this methodology to obtain all geometric invariants that intervene in this problem and to compute the Delaunay graph and the Voronoi diagram of spheres exactly. To the best of our knowledge, there is no comprehensive treatment of the exact computation, with geometric invariants, of the Delaunay graph and the Voronoi diagram of spheres. Starting from the system of equations defining the zero-dimensional algebraic set of the problem, we apply Wu's algorithm to transform the initial system into an equivalent Wu characteristic (triangular) set. In the corresponding system of algebraic equations, in each polynomial (except the first), the variable of highest order from the preceding polynomial has been eliminated (by pseudo-remainder computations), and the last polynomial obtained is a polynomial in a single variable. By regrouping all the formal coefficients for each monomial in each polynomial, we obtain polynomials that are invariants for the given problem. We rewrite the original system by replacing the invariant polynomials with new formal coefficients. We repeat the process until all the algebraic relationships (syzygies) between the invariants have been found by applying Wu's algorithm to the invariants. Finally, we present an incremental algorithm for the construction of Voronoi diagrams and Delaunay graphs of spheres in 3D and its application to geodesy.
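The elementary operation that Wu's algorithm repeats — eliminating the leading variable of one polynomial by another via pseudo-remainders — can be illustrated with sympy's prem; the two polynomials below are made up for the illustration and are unrelated to the sphere predicates in the paper.

```python
from sympy import symbols, prem

x, y = symbols('x y')
f = x**2 * y + x * y**2 + y**2 - 1   # hypothetical input polynomial
g = x * y - 1                        # hypothetical polynomial used to eliminate x

# Pseudo-remainder of f by g with respect to x: the result has degree in x
# lower than deg_x(g), so x has been (pseudo-)eliminated from this polynomial.
r = prem(f, g, x)
print(r)
```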

16.
Urban roads often carry planar obstacles or markings painted with a 3D effect; their strong appearance of depth and realism can mislead pedestrians and driver-assistance systems and cause serious accidents, so three-dimensional road targets must be recognized in order to recover the true road situation. Common projective invariants such as the cross-ratio are computed from five coplanar points, which is restrictive. This paper proposes a method for computing a geometric invariant from spatial point elements: concurrence and collinearity of spatial elements are expressed by physically meaningful quantities, and through a suitable construction the spatial elements are converted into coplanarity relations. The geometric invariant depends only on the spatial coordinates of the points and is independent of the viewing angle and camera parameters. Simulation experiments were conducted in the 3D modelling software SolidWorks, and the computation method was tested on real and fake 3D objects on real roads; the final experiments verify that the geometric quantity computed from six spatial points can be used to recognize three-dimensional road targets.
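For comparison with the conventional approach the abstract mentions, the two classical projective invariants of five coplanar points can be written as ratios of 3x3 determinants of homogeneous coordinates; a sketch follows. This is the standard coplanar construction, not the paper's six-point spatial invariant.

```python
import numpy as np

def det3(p, q, r):
    """Determinant of three points in homogeneous coordinates (x, y, 1)."""
    return np.linalg.det(np.column_stack([p, q, r]))

def five_point_invariants(pts):
    """Two classical projective invariants of five coplanar points: each point
    appears equally often in numerator and denominator, so homogeneous scale
    factors and the transformation's determinant cancel."""
    p1, p2, p3, p4, p5 = [np.append(np.asarray(p, dtype=float), 1.0) for p in pts]
    i1 = det3(p4, p3, p1) * det3(p5, p2, p1) / (det3(p4, p2, p1) * det3(p5, p3, p1))
    i2 = det3(p4, p2, p1) * det3(p5, p3, p2) / (det3(p4, p3, p2) * det3(p5, p2, p1))
    return i1, i2
```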

17.
This paper presents the results of a study of projective invariants and their applications in image analysis and object recognition. The familiar cross-ratio theorem, relating collinear points in the plane to the projections through a point onto a line, provides a starting point for their investigation. Methods are introduced in two dimensions for extending the cross-ratio theorem to relate noncollinear object points to their projections on multiple image lines. The development is further extended to three dimensions. It is well known that, for a set of points distributed in three dimensions, stereo pairs of images can be made and relative distances of the points from the film plane computed from measurements of the disparity of the image points in the stereo pair. These computations require knowledge of the effective focal length and baseline of the imaging system. It is less obvious, but true, that invariant metric relationships among the object points can be derived from measured relationships among the image points. These relationships are a generalization into three dimensions of the invariant cross-ratio of distances between points on a line. In three dimensions the invariants are cross-ratios of areas and volumes defined by the object points. These invariant relationships, which are independent of the parameters of the imaging system, are derived and demonstrated with examples.
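As a reminder of the starting point, one common convention writes the cross-ratio of four collinear points A, B, C, D as

$$
\mathrm{CR}(A,B;C,D)=\frac{\overline{AC}/\overline{BC}}{\overline{AD}/\overline{BD}}
=\frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}},
$$

which is preserved by central projection of the line; as the abstract notes, the 3D invariants here are cross-ratios of areas and volumes defined by the object points, built in the same spirit.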

18.
In this paper, we propose a new set of 2D and 3D rotation invariants based on orthogonal radial Meixner moments, and we present the mathematical theory used to derive them. First, new 2D radial Meixner moments are introduced, based on a polar representation of an object by one-dimensional orthogonal discrete Meixner polynomials and a circular function. Second, new 3D radial Meixner moments are presented, using a spherical representation of a volumetric image by one-dimensional orthogonal discrete Meixner polynomials and a spherical function. 2D and 3D rotational invariants are then derived from the proposed 2D and 3D radial Meixner moments, respectively. To validate the proposed approach, three problems are addressed: image reconstruction, rotational invariance, and pattern recognition. Experimental results show that the Meixner moments outperform the Krawtchouk moments both with and without noise. At the same time, the reconstructed volumetric image converges quickly to the original image when the 2D and 3D radial Meixner moments are used, and the test images are correctly recognized from a set of images available in the PSB database.

19.
In diagnosing disease from medical images and in radiotherapy treatment planning, geometric measurement of organs and lesions plays an important role. Based on 3D medical volume data, this paper studies the measurement of the distance between two arbitrary points in space; based on the triangular facets produced by surface rendering, it studies the measurement of organ surface area; and based on volume rendering, it studies the measurement of organ volume. High measurement accuracy is achieved in the experiments.
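For the surface-rendering part, surface area is commonly accumulated facet by facet, and the volume enclosed by a closed, consistently oriented triangle mesh can be accumulated as signed tetrahedron volumes; a sketch under those assumptions is below. The paper's volume measurement is based on volume rendering, for which voxel counting is the usual alternative, so the mesh volume formula here is only one common option.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Surface area of a triangle mesh: sum of 0.5 * |cross product| per facet."""
    v = np.asarray(vertices, dtype=float)
    area = 0.0
    for i, j, k in faces:
        area += 0.5 * np.linalg.norm(np.cross(v[j] - v[i], v[k] - v[i]))
    return area

def mesh_volume(vertices, faces):
    """Enclosed volume of a closed, consistently oriented triangle mesh, as the
    sum of signed tetrahedron volumes with respect to the origin."""
    v = np.asarray(vertices, dtype=float)
    vol = 0.0
    for i, j, k in faces:
        vol += np.dot(v[i], np.cross(v[j], v[k])) / 6.0
    return abs(vol)
```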

20.
We present the construction of combined blur and rotation moment invariants in an arbitrary number of dimensions. Moment invariants to convolution with an arbitrary centrosymmetric filter are derived first, and their rotationally invariant forms are then found by means of group representation theory to achieve the desired combined invariance. Several examples of the invariants are calculated explicitly to illustrate the proposed procedure. Their invariance, robustness, and applicability to template matching and image registration are demonstrated on 3D MRI data and 2D indoor images.
