Similar Documents
20 similar documents retrieved (search time: 218 ms)
1.
A new target attitude measurement method based on image projection matching is proposed, which avoids the feature matching or gray-level matching between targets in the left and right image planes required by traditional attitude measurement. The two-dimensional projection correlation method is a gray-level correlation matching algorithm based on 2D projections; it matches images mainly by exploiting the principle that the ordering of gray values of adjacent pixels should be the same in matched images. On this basis, binocular vision is used to measure the attitude of an axisymmetric target in space: the axis of the target in each image plane is obtained by plane-plane intersection, and the three-dimensional attitude is then computed. Simulation results show that the attitude-angle measurement error of the method is less than 0.2°, and that it is fast and stable enough to meet processing requirements.
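As a rough sketch of how a projection-based similarity between two candidate patches could be computed, the NumPy snippet below correlates row and column gray-level projections. The function names and the averaging of the two correlations are illustrative assumptions; the paper's exact 2D projection correlation measure and its pixel-ordering refinement are not reproduced here.

```python
import numpy as np

def projection_profiles(patch):
    """Row and column gray-level projections of an image patch."""
    p = patch.astype(float)
    return p.sum(axis=1), p.sum(axis=0)

def ncc(a, b):
    """Zero-mean normalized correlation of two 1-D profiles."""
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def projection_similarity(left_patch, right_patch):
    """Average correlation of row and column projections of two same-size patches."""
    rl, cl = projection_profiles(left_patch)
    rr, cr = projection_profiles(right_patch)
    return 0.5 * (ncc(rl, rr) + ncc(cl, cr))
```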

2.
Research on a method for measuring the spatial three-dimensional attitude of body-of-revolution targets (Total citations: 4; self-citations: 2; citations by others: 2)
Because the surface gray levels of a body-of-revolution target vary little and because of binocular parallax, gray-level and feature matching are difficult to apply in binocular vision measurement. A new binocular-vision method for measuring the attitude of a body-of-revolution target in space is proposed, which avoids the feature-point or gray-level matching between the left and right image planes used in traditional attitude measurement. A spatial-moment-based sub-pixel extraction technique for the straight edges of the target's image is used to obtain the sub-pixel line equations of the two generator lines of the target in each image plane. The two angle-bisector planes of the planes formed by each pair of generator lines and the corresponding optical center are computed; the intersection of these two bisector planes gives the direction vector of the target's axis. Simulation results show that the attitude-angle measurement error of the method is less than 0.5° and the results are stable, meeting the requirements of high-precision measurement of the spatial three-dimensional attitude of body-of-revolution targets.
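The final geometric step described above can be sketched as follows, assuming the unit normals of the two tangent planes (optical center plus one generator line each) have already been recovered in a common world frame for both cameras. The helper names are hypothetical, and the sign convention used for the bisector depends on how the normals are oriented.

```python
import numpy as np

def unit(v):
    """Normalize a vector."""
    return v / np.linalg.norm(v)

def bisector_normal(n_a, n_b):
    """Normal of a bisector plane of two planes sharing a line through the optical center.
    Whether n_a - n_b or n_a + n_b selects the bisector containing the axis depends on
    how the tangent-plane normals are oriented."""
    return unit(unit(n_a) - unit(n_b))

def axis_direction(cam1_normals, cam2_normals):
    """camN_normals: pair of tangent-plane normals (one per generator line) from camera N,
    expressed in a common world frame."""
    b1 = bisector_normal(*cam1_normals)
    b2 = bisector_normal(*cam2_normals)
    return unit(np.cross(b1, b2))  # the axis lies in both bisector planes
```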

3.
Research on vision-based measurement of the landing attitude of flying targets (Total citations: 2; self-citations: 0; citations by others: 2)
The landing attitude of a flying target is an important technical indicator for evaluating the performance of a flying-target system. To measure it, staring (wait-and-watch) binocular vision cameras placed near the impact area are used for high-speed photographic intersection measurement of the landing attitude. Targets are classified according to how they appear on the camera image plane, and a different attitude-measurement scheme is adopted for each class. Simulation experiments and theoretical analysis show that the method is correct and reliable: the attitude-angle measurement error for large and transitional targets is less than 1°, providing a new, convenient, and reliable approach to attitude measurement of flying targets.

4.
A single-axis infrared attitude measurement system has blind zones when measuring the attitude angles of a small UAV. By adding a second set of infrared sensors perpendicular to the original axis, a dual-axis attitude measurement system is formed to compensate for the blind zones. At least one of the two sensor sets always lies in the measurable inclination range; by comparing the magnitudes of the output temperature differences along the target axes, the axis lying in the measurable range can be identified and the attitude angle of the target axis solved. For the first time, digital-output infrared thermopile sensors are used to implement a 180° blind-zone-free attitude-angle measurement module; tests show its static error is less than 2°.

5.
Motivated by the need to recognize space targets, the recognition of axisymmetric satellite targets is studied, and a multi-viewpoint retrieval-and-matching recognition method based on the principal-axis angle is proposed. First, the viewing sphere is partitioned by latitude and longitude to obtain a set of 2D image models describing the target from different viewpoints. The model library is then searched by analyzing the angle between the principal axis of the axisymmetric image and the X axis. Finally, shape-based feature vectors and normalized template matching are used for target recognition. Experimental results demonstrate the effectiveness of the method.
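A minimal sketch of the final normalized template-matching step, assuming each viewpoint model has already been reduced to a shape feature vector; the principal-axis-angle retrieval that prunes the library beforehand is not shown, and the function names are hypothetical.

```python
import numpy as np

def normalized_match_score(query, template):
    """Zero-mean normalized correlation between two shape feature vectors."""
    q = query - query.mean()
    t = template - template.mean()
    return float(q @ t / (np.linalg.norm(q) * np.linalg.norm(t) + 1e-12))

def retrieve_best_model(query_vec, model_library):
    """model_library: dict mapping viewpoint id -> shape feature vector."""
    scores = {vid: normalized_match_score(query_vec, vec)
              for vid, vec in model_library.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```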

6.
Manual inspection of the radius difference and axis coincidence of matching holes in automotive sheet-metal parts is inefficient and prone to omissions, so an automatic inspection method for these two checks is proposed. First, the B-Rep 3D model is topologically decomposed into basic face units using a closed face-shell approach. Second, cylindrical faces are selected based on the feature that a cylindrical face has two semicircular arcs and two straight edges, and round holes and slot holes are extracted with cylindrical faces as the basic unit. The radius and axis of each round hole and slot hole are then obtained, and matching holes are checked using point-to-point and point-to-line distance measures according to the hole-matching principle. Finally, hole pairs whose radius difference does not meet the requirement or whose axes do not coincide are annotated. The algorithms were implemented on the CAA development platform of CATIA and the feasibility of the method was verified. Inspection results on real cases show that the method can automatically check the radius difference and axis coincidence of matching holes in sheet-metal parts efficiently and accurately.
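The point-to-point and point-to-line checks mentioned above reduce to elementary vector geometry; the sketch below shows one way to test a hole pair, with tolerance values that are purely illustrative (the paper's thresholds and its CATIA/CAA data access are not reproduced).

```python
import numpy as np

def point_to_line_distance(p, line_point, line_dir):
    """Distance from point p to the line through line_point along line_dir."""
    d = line_dir / np.linalg.norm(line_dir)
    v = p - line_point
    return float(np.linalg.norm(v - (v @ d) * d))

def check_hole_pair(r1, c1, d1, r2, c2, d2,
                    radius_tol=0.1, axis_dist_tol=0.05, angle_tol_deg=0.5):
    """Check radius difference and axis coincidence of a hole pair.
    (rN, cN, dN) = radius, a point on the axis, and the axis direction of hole N.
    The tolerance values are illustrative only."""
    radius_ok = abs(r1 - r2) <= radius_tol
    cos_angle = abs(np.dot(d1, d2)) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    coaxial_ok = (angle <= angle_tol_deg and
                  point_to_line_distance(c2, c1, d1) <= axis_dist_tol)
    return radius_ok, coaxial_ok
```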

7.
李水平, 彭晓明. 《计算机应用》 (Journal of Computer Applications), 2014, 34(5): 1453-1457
To match a 3D target in a scene against a model, a 3D target matching method combining 3D geometric shape information and 2D texture is proposed. First, Scale-Invariant Feature Transform (SIFT) features are extracted from the depth image of the scene and matched one by one against the series of 2.5D depth images used when the 3D model was reconstructed, finding the depth image whose pose is most similar to that of the target in the scene. The 3D geometric shape features of that depth image are then matched against the model to initialize it, i.e., to reset the model to a pose close to the scene target. Finally, an Iterative Closest Point (ICP) algorithm that fuses 2D texture information is used to match the scene target against the model, yielding the accurate pose of the 3D target in the scene. Experimental results verify the feasibility and accuracy of the method.
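A hedged sketch of the SIFT retrieval step, assuming OpenCV >= 4.4 (where SIFT ships in the main build) and 8-bit single-channel depth renderings; the Lowe ratio of 0.75 and the "most good matches wins" rule are assumptions, and the subsequent texture-aware ICP refinement is not shown.

```python
import cv2

def best_matching_depth_image(scene_depth_u8, candidate_depths_u8, ratio=0.75):
    """Return the index of the candidate 2.5D depth image with the most good SIFT
    matches to the scene depth image (Lowe ratio test)."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    _, des_scene = sift.detectAndCompute(scene_depth_u8, None)
    best_idx, best_count = -1, -1
    for i, cand in enumerate(candidate_depths_u8):
        _, des_cand = sift.detectAndCompute(cand, None)
        if des_scene is None or des_cand is None:
            continue
        pairs = matcher.knnMatch(des_scene, des_cand, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_count:
            best_idx, best_count = i, len(good)
    return best_idx, best_count
```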

8.
A method for automatically tracking and measuring the position and attitude parameters of a target in an image sequence is proposed. Using the homography principle and the pose measured in the previous frame, a typical planar region on the target is reconstructed as a planar-region template containing both geometric and brightness information. The template is then projected and rendered under candidate pose parameters according to the projection equation; when the rendered template best matches the corresponding planar region in the current frame, the pose parameters used for rendering are taken as the measurement for that frame. By modeling and solving this matching problem as an optimization, automatic measurement of the target's pose parameters across the sequence is achieved. Experiments show that the method achieves high-accuracy automatic tracking measurement for targets that contain typical planar regions.

9.
In machine-vision target matching, computing the matching similarity measure with a feature-list correlation algorithm can effectively reduce processing time; the resulting matches have a high peak coefficient and peak signal-to-noise ratio, so the target can be identified clearly. The proposed machine-vision target matching method based on gradient feature lists describes the image with a gradient feature list, samples feature pixels non-uniformly, and bases the matching similarity measure on normalized cross-correlation of the gradients at the feature pixels, which effectively improves the performance of the feature-list algorithm.

10.
Ship detection has broad application value in many fields. Classical template matching suffers from limited detection accuracy when there are gray-level and angle deviations between the template and the actual scene image, so a ship target detection method based on feature matching against simulated templates is proposed. Target templates are generated by simulation from a prior 3D digital model of the target to accommodate the multi-angle variation of real scenes; a SuperGlue-based target matching method is adopted and a discriminant expression for good matches is defined. To improve the accuracy of the attitude angle, a coarse-to-fine attitude-angle adjustment strategy based on sampling the neighborhood of the attitude angle is used to ensure good matching. Matching experiments under multiple observation angles reach an mIoU of 0.76 with good overall detection accuracy; compared with matching against a fixed orthographic template, the method is more accurate and more stable.
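For reference, the reported mIoU can be computed as below for paired detections and ground-truth regions; this box-level version is an assumption, since the paper may evaluate overlap on masks or oriented boxes instead.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-12)

def mean_iou(pred_boxes, gt_boxes):
    """Mean IoU over paired detection / ground-truth boxes."""
    return float(np.mean([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]))
```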

11.
In this article we present the integration of 3-D shape knowledge into a variational model for level set based image segmentation and contour based 3-D pose tracking. Given the surface model of an object that is visible in the image of one or multiple cameras calibrated to the same world coordinate system, the object contour extracted by the segmentation method is applied to estimate the 3-D pose parameters of the object. Vice versa, the surface model projected to the image plane helps in a top-down manner to improve the extraction of the contour. While common alternative segmentation approaches, which integrate 2-D shape knowledge, face the problem that an object can look very different from various viewpoints, a 3-D free form model ensures that for each view the model can fit the data in the image very well. Moreover, one additionally solves the problem of determining the object's pose in 3-D space. The performance is demonstrated by numerous experiments with a monocular and a stereo camera system.

12.
A method of using spatial coherence in image generation by raytracing is presented. The idea is to trace a set of rays in parallel. This is carried out by space sweep. Space sweep consists of moving a plane through the object space. The rays intersected by the plane are organized into a dynamic data structure R for range searching. When an object is met by the sweeping plane, those rays intersecting the object are found by a range search with the object in R. Exact complexity bounds are given for this algorithm, as well as details to allow practical application of this approach in image generation.

13.
This paper presents a novel vision-based global localization method that uses hybrid maps of objects and spatial layouts. We model indoor environments with a stereo camera using the following visual cues: local invariant features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image where the optical axis passes through, which is similar to the data from a 2D laser range finder. This allows us to build our topological node that is composed of a horizontal depth map and an object location map. The horizontal depth map describes the explicit spatial layout of each local space and provides metric information to compute the spatial relationships between adjacent spaces, while the object location map contains the pose information of objects found in each local space and the visual features for object recognition. Based on this map representation, we suggest a coarse-to-fine strategy for global localization. The coarse pose is estimated by means of object recognition and SVD-based point cloud fitting, and then is refined by stochastic scan matching. Experimental results show that our approaches can be used for an effective vision-based map representation as well as for global localization methods.
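The "SVD-based point cloud fitting" used for the coarse pose is commonly the Kabsch/Umeyama construction; a compact sketch under that assumption (and assuming point correspondences are already known) is given below.

```python
import numpy as np

def svd_rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q, both (N, 3),
    via SVD of the cross-covariance (the Kabsch/Umeyama construction)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t  # Q ~= (R @ P.T).T + t
```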

14.
The complex EGI: a new representation for 3-D pose determination (Total citations: 1; self-citations: 0; citations by others: 1)
The complex extended Gaussian image (CEGI), a 3D object representation that can be used to determine the pose of an object, is described. In this representation, the weight associated with each outward surface normal is a complex weight. The normal distance of the surface from the predefined origin is encoded as the phase of the weight, whereas the magnitude of the weight is the visible area of the surface. This approach decouples the orientation and translation determination into two distinct least-squares problems. The justification for using such a scheme is twofold: it not only allows the pose of the object to be extracted, but it also distinguishes a convex object from a nonconvex object having the same EGI representation. The CEGI scheme has the advantage of not requiring explicit spatial object-model surface correspondence in determining object orientation and translation. Experiments involving synthetic data of two polyhedral and two smooth objects are presented to illustrate the feasibility of this method.
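A small sketch of how complex EGI weights could be accumulated from per-face normals, areas, and plane distances; the latitude-longitude binning and the unscaled phase are illustrative simplifications rather than the tessellation used in the paper.

```python
import numpy as np

def complex_egi(normals, areas, distances, n_bins=20):
    """Accumulate complex EGI weights: magnitude = visible face area, phase = normal
    distance of the face's plane from the origin.
    normals: (N, 3) unit outward normals; areas, distances: length-N arrays."""
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))      # polar angle in [0, pi]
    phi = np.arctan2(normals[:, 1], normals[:, 0]) + np.pi    # azimuth in [0, 2*pi)
    ti = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    cegi = np.zeros((n_bins, n_bins), dtype=complex)
    np.add.at(cegi, (ti, pj), areas * np.exp(1j * distances))
    return cegi
```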

15.
Visual learning and recognition of 3-d objects from appearance (Total citations: 33; self-citations: 9; citations by others: 24)
The problem of automatically learning object models for recognition and pose estimation is addressed. In contrast to the traditional approach, the recognition problem is formulated as one of matching appearance rather than shape. The appearance of an object in a two-dimensional image depends on its shape, reflectance properties, pose in the scene, and the illumination conditions. While shape and reflectance are intrinsic properties and constant for a rigid object, pose and illumination vary from scene to scene. A compact representation of object appearance is proposed that is parametrized by pose and illumination. For each object of interest, a large set of images is obtained by automatically varying pose and illumination. This image set is compressed to obtain a low-dimensional subspace, called the eigenspace, in which the object is represented as a manifold. Given an unknown input image, the recognition system projects the image to eigenspace. The object is recognized based on the manifold it lies on. The exact position of the projection on the manifold determines the object's pose in the image. A variety of experiments are conducted using objects with complex appearance characteristics. The performance of the recognition and pose estimation algorithms is studied using over a thousand input images of sample objects. Sensitivity of recognition to the number of eigenspace dimensions and the number of learning samples is analyzed. For the objects used, appearance representation in eigenspaces with less than 20 dimensions produces accurate recognition results with an average pose estimation error of about 1.0 degree. A near real-time recognition system with 20 complex objects in the database has been developed. The paper is concluded with a discussion on various issues related to the proposed learning and recognition methodology.
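A minimal eigenspace sketch under the usual PCA-by-SVD formulation: learn a low-dimensional basis from training appearances, project a test image, and label it by its nearest manifold sample. The interpolation along the manifold that yields fine pose estimates is omitted, and the function names are hypothetical.

```python
import numpy as np

def build_eigenspace(images, k=20):
    """images: (N, H*W) flattened, brightness-normalized training images."""
    mean = images.mean(axis=0)
    X = images - mean
    # SVD of the data matrix gives the top-k eigenvectors of the covariance (eigenimages).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:k]                 # (k, H*W)
    coords = X @ basis.T           # sample points of the appearance manifold in eigenspace
    return mean, basis, coords

def recognize(image, mean, basis, coords, labels):
    """Project an unknown image and return the label of the nearest manifold sample."""
    p = (image - mean) @ basis.T
    idx = int(np.argmin(np.linalg.norm(coords - p, axis=1)))
    return labels[idx]
```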

16.
17.
This article addresses a problem of moving object detection by combining two kinds of segmentation schemes: temporal and spatial. It has been found that consideration of a global thresholding approach for temporal segmentation, where the threshold value is obtained by considering the histogram of the difference image corresponding to two frames, does not produce good result for moving object detection. This is due to the fact that the pixels in the lower end of the histogram are not identified as changed pixels (but they actually correspond to the changed regions). Hence there is an effect on object background classification. In this article, we propose a local histogram thresholding scheme to segment the difference image by dividing it into a number of small non-overlapping regions/windows and thresholding each window separately. The window/block size is determined by measuring the entropy content of it. The segmented regions from each window are combined to find the (entire) segmented image. This thresholded difference image is called the change detection mask (CDM) and represent the changed regions corresponding to the moving objects in the given image frame. The difference image is generated by considering the label information of the pixels from the spatially segmented output of two image frames. We have used a Markov Random Field (MRF) model for image modeling and the maximum a posteriori probability (MAP) estimation (for spatial segmentation) is done by a combination of simulated annealing (SA) and iterated conditional mode (ICM) algorithms. It has been observed that the entropy based adaptive window selection scheme yields better results for moving object detection with less effect on object background (mis) classification. The effectiveness of the proposed scheme is successfully tested over three video sequences.
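One way to realize entropy-driven window selection and local thresholding is sketched below; the window-growth rule, the entropy threshold, and the mean-plus-standard-deviation threshold are simple stand-ins for the paper's scheme, which thresholds a label-based difference image obtained from MRF/MAP spatial segmentation.

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy of a block's gray-level histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def change_detection_mask(diff, min_win=16, max_win=64, entropy_thresh=3.0):
    """Split the difference image into windows, growing each window until its
    entropy is high enough, then threshold it with its own mean + std."""
    h, w = diff.shape
    mask = np.zeros((h, w), dtype=bool)
    y = 0
    while y < h:
        x, row_step = 0, min_win
        while x < w:
            win = min_win
            while win < max_win and block_entropy(diff[y:y + win, x:x + win]) < entropy_thresh:
                win *= 2
            block = diff[y:y + win, x:x + win]
            mask[y:y + win, x:x + win] = block > block.mean() + block.std()
            row_step = max(row_step, win)
            x += win
        y += row_step
    return mask
```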

18.
Most industrial grippers now in use are two-fingered. Among them the parallel-jaw gripper is the simplest. It can partially remove the pose uncertainty of an object through grasping, such as the orientation uncertainty. This paper addresses a new type of gripper with a finger configuration of four circles instead of two parallel lines. It has a number of important advantages. In particular, it achieves form-closure and confines the object to a locally unique pose, so as to remove the pose uncertainty completely. It allows the gripped object to reach this pose freely without loss of required friction in the direction perpendicular to the grasping plane. More information can be acquired for identifying the object and its grasp mode. As a result, the identification can be performed in a single grasp. The key parameter of a symmetric four-pin gripper is the distance (span) between the two pin centers on each finger, which depends upon the object shape and impacts the closure property. Based on a new approach to the grasp geometry, selection and limitations of the span are illustrated.

19.
Medical tomographic images are formed by the intersection of the image plane and an object. As the image plane changes, different parts of the object come into view or drop out of view. However, for small changes of the image plane, most parts continue to remain visible and their qualitative embedding in the image remains similar. Therefore, similarity of part embeddings can be used to infer similarity of image planes. Part embeddings are useful features for other vision applications as well. In view of this, a spatial relation called "arrangement" is proposed to describe part embeddings. The relation describes how each part is surrounded by its neighbors. Further, a metric for arrangements is formulated by expressing arrangements in terms of the Voronoi diagram of the parts. Arrangements and their metric are used to retrieve images by image plane similarity in a cardiac magnetic resonance image database. Experiments with the database are reported which (1) validate the observation that similarity of image planes can be inferred from similarity of part embeddings, and (2) compare the performance of arrangement based image retrieval with that of expert radiologists.

20.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
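Where a pose estimate is obtained as the solution of a Sylvester equation A X + X B = Q, SciPy's solver can be used directly, as in the toy call below; the matrices here are arbitrary placeholders, since the paper builds them from the camera projections of matched feature points.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Placeholder coefficient matrices; in the paper they come from the measurement model.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[4.0, 1.0],
              [0.0, 5.0]])
Q = np.array([[1.0, 0.0],
              [2.0, 1.0]])

X = solve_sylvester(A, B, Q)          # solves A @ X + X @ B = Q
assert np.allclose(A @ X + X @ B, Q)  # sanity check of the residual
```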
