Similar Documents
20 similar documents found (search time: 203 ms)
1.
To address the problem of target tracking by small unmanned aerial vehicles (UAVs), a target tracking and localization algorithm based on binocular vision and the CamShift algorithm is proposed. The left and right images captured by the binocular camera are processed with the CamShift algorithm to obtain the feature point at the target's center, which is then reconstructed in 3D to obtain the relative position and yaw angle between the UAV and the target in the body coordinate frame. A Kalman filter is applied to refine the measurements, and the resulting estimates are fed back into the flight control system, enabling autonomous tracking flight. Results show that the proposed algorithm has small error and high stability and accuracy.
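The final stage of the pipeline above (smoothing the triangulated relative position with a Kalman filter before feeding it to flight control) can be illustrated with a minimal scalar constant-position Kalman filter. This is a sketch only; the noise variances and the 2 m target distance are hypothetical values, not taken from the paper.

```python
import numpy as np

def kalman_1d(measurements, q=1e-4, r=0.05**2):
    """Smooth noisy scalar position measurements (constant-position model).
    q: process noise variance, r: measurement noise variance (assumed values)."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(0)
true_pos = 2.0                    # hypothetical UAV-target distance in meters
zs = true_pos + 0.05 * rng.standard_normal(200)   # noisy triangulated range
est = kalman_1d(zs)
```

In the full 3D case the same predict/correct structure applies per axis, typically with a constant-velocity state model instead of constant position.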

2.
于乃功  黄灿  林佳 《计算机测量与控制》2012,20(10):2654-2656,2660
To enable a biped robot equipped with monocular vision to locate and range a target object, a geometric ranging method is proposed that computes the target's depth from the fixed parameters of the monocular vision system, based on pinhole imaging and geometric coordinate transformation. Using the fixed parameters of the robot's onboard monocular vision system, the method solves for the target's coordinates in 3D space and its distance from the robot through geometric mapping relations. This overcomes the loss of ranging and localization accuracy caused by changes in the external environment and by errors in recognizing reference objects, thereby achieving accurate monocular tracking and localization of the target. Experiments on the proposed method yielded data with small error relative to the theoretical results, demonstrating its feasibility.
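A common instance of this kind of fixed-parameter geometric ranging is a pinhole camera at a known height with a horizontal optical axis, where the distance to a point on the floor follows from the image row at which it projects. The sketch below uses that configuration; the formula is standard pinhole geometry, and all numbers are hypothetical rather than taken from the paper.

```python
def ground_distance(f_px, cam_height_m, v_px, cy_px):
    """Distance along the ground to a point on the floor, for a pinhole camera
    whose optical axis is horizontal at height cam_height_m.
    f_px: focal length in pixels; v_px: image row of the floor point;
    cy_px: row of the principal point. D = f * h / (v - cy)."""
    dv = v_px - cy_px
    if dv <= 0:
        raise ValueError("floor point must project below the principal point")
    return f_px * cam_height_m / dv

# Hypothetical numbers: 800 px focal length, camera 0.5 m above the floor,
# floor point imaged 100 px below the image center.
d = ground_distance(800.0, 0.5, 340.0, 240.0)
```

Points imaged closer to the bottom of the frame (larger `v_px`) yield smaller distances, matching the geometry of a downward-sloping line of sight.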

3.
To localize a 3D stereoscopic screen, a screen localization algorithm based on binocular vision combined with target recognition and surface fitting is proposed. First, an edge detection algorithm extracts the screen contour to obtain the screen's position in the 2D image. Next, a binocular vision algorithm collects depth data for the recognized screen, and least squares is used to fit the collected spatial data to obtain the screen's plane equation. Finally, the positions of the screen's corner points in 3D space are computed to determine the screen's extent, completing the localization of the screen in 3D space. Experimental results show that the screen position computed by this method is highly accurate.
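The least-squares plane-fitting step can be sketched as follows, fitting z = ax + by + c to sampled screen points. The synthetic plane coefficients and noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_plane(points):
    """Fit z = a*x + b*y + c to an Nx3 point array by least squares."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# Synthetic "screen" points on the plane z = 0.1x - 0.2y + 3 with mild noise.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 3.0 + 0.001 * rng.standard_normal(200)
pts = np.column_stack([xy, z])
a, b, c = fit_plane(pts)
```

Note that the z = ax + by + c parameterization cannot represent planes parallel to the optical axis; an SVD-based total-least-squares fit avoids that restriction if needed.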

4.
Visual object detection aims to locate and identify the objects present in an image. It is one of the classic tasks in computer vision, serves as a prerequisite and foundation for many other computer vision tasks, has important applications in fields such as autonomous driving and video surveillance, and has attracted wide attention from researchers. With the rapid development of deep learning, object detection has made great progress. This paper first summarizes the basic pipeline of deep object detection during training and testing: the training stage includes data preprocessing, the detection network, label assignment, and loss computation, while the testing stage uses the trained detector to generate detections and post-process them. It then reviews monocular-camera-based object detection methods, chiefly anchor-based, anchor-free, and end-to-end prediction approaches, and summarizes common sub-module designs used in detectors. After the monocular methods, binocular-camera-based object detection methods are introduced. On this basis, domestic and international research progress on monocular and binocular object detection is compared, and development trends in visual object detection are discussed. Through this summary and analysis, we hope to provide a reference for researchers working on visual object detection.

5.
Computer vision is a young, rapidly developing discipline whose research has progressed from the laboratory to practical application. Because visual information is rich and intuitively effective in practice, using vision to find and determine the position of a target is an important approach in robotics; in recent years it has been widely applied in industrial automated assembly, and the demands placed on vision systems have grown accordingly. This paper introduces and analyzes the practical applications of several major vision-based target localization methods at home and abroad, such as autonomous navigation and localization systems for mobile robots and hand-eye stereo vision systems.

6.
A camera is fixed in an eye-to-hand configuration to implement monocular-vision-guided positioning for a five-degree-of-freedom manipulator. A control platform built around an STM32F4-series microcontroller is constructed; the Monte Carlo numerical method is used to compute the point cloud of the manipulator's joint workspace, and cubic polynomial interpolation is performed. Finally, with 3D building-block props as the experimental objects, an improved Harris corner detection algorithm combined with artificial auxiliary marker points is used, among blocks of various specifications, to …

7.
Monocular, binocular, and multi-camera tracking are the main computer vision localization and tracking approaches today, but because binocular and multi-camera sensor systems suffer from small fields of view, bulky system structures, and difficult stereo matching, in many industrial settings they are gradually being replaced by monocular vision, which requires fewer calibration steps and has a simpler structure. This work achieves tracking and localization of a moving target with monocular vision and presents a single-camera moving-target tracking and localization system. The CamShift algorithm uses the target's color information to recognize, extract, and detect the target, and a tracking algorithm combining neighborhood linear search with a Kalman filter accurately recognizes and localizes the moving target. A series of corresponding theoretical experiments and analyses were conducted, and the final experimental results are given.

8.
Tomato Recognition and Localization Based on Binocular Stereo Vision   Cited by: 8 (self-citations: 0; others: 8)
郑小东  赵杰文  刘木华 《计算机工程》2004,30(22):155-156,171
The image is segmented by automatically set thresholds based on color features, identifying red tomatoes automatically and quickly. Centroid matching replaces conventional feature-point selection and matching, and the binocular stereo ranging formula is corrected accordingly. Verification shows that when the working distance is less than 500 mm, the distance error can be kept within ±10 mm.
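The corrected ranging formula is not given in the abstract, but the standard rectified-stereo relation it builds on is Z = f·B/d, with focal length f (pixels), baseline B, and disparity d between the matched centroids. A minimal sketch, with a hypothetical rig (f = 700 px, B = 60 mm) rather than the paper's parameters:

```python
def stereo_depth(f_px, baseline_mm, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.
    Returns depth in the same unit as the baseline (mm here)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_mm / disparity_px

# A matched centroid pair with 105 px disparity on this hypothetical rig.
z = stereo_depth(700.0, 60.0, 105.0)
```

Because depth varies as 1/d, the same one-pixel matching error costs more depth accuracy at long range, which is consistent with the paper quoting its ±10 mm bound only below a 500 mm working distance.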

9.
Tracking and Measurement of Moving Targets Based on Binocular Vision   Cited by: 4 (self-citations: 1; others: 4)
Building on a Mean Shift tracking algorithm fused with moving-target position prediction, and on a spatial-point localization algorithm from binocular stereo vision, a binocular stereo vision system for tracking and measuring moving targets was designed. In the tracking and measurement experiments, the 3D coordinate sequence of the moving target's centroid was extracted, and the target's depth and velocity were measured.

10.
Autonomous target recognition and localization is an important foundation for intelligent forestry robots. Targeting trunk recognition and localization in forestry environments, a hardware platform for a real-time digital-video processing system based on binocular vision is designed. A binocular camera captures images, 3D information is computed from the captured data, and target localization and ranging results are output. Experimental results show that the hardware platform can perform image acquisition and processing and achieves the expected results.

11.
This paper presents a novel vision-based global localization that uses hybrid maps of objects and spatial layouts. We model indoor environments with a stereo camera using the following visual cues: local invariant features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image, through which the optical axis passes; this is similar to the data from a 2D laser range finder. This allows us to build a topological node composed of a horizontal depth map and an object location map. The horizontal depth map describes the explicit spatial layout of each local space and provides metric information to compute the spatial relationships between adjacent spaces, while the object location map contains the pose information of objects found in each local space and the visual features for object recognition. Based on this map representation, we suggest a coarse-to-fine strategy for global localization. The coarse pose is estimated by means of object recognition and SVD-based point cloud fitting, and is then refined by stochastic scan matching. Experimental results show that our approach can serve both as an effective vision-based map representation and as a global localization method.
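The "SVD-based point cloud fitting" used for the coarse pose estimate is, in its standard form, the Kabsch algorithm: given corresponding 3D points in two frames, recover the rigid transform between them. A minimal sketch on synthetic data (the point cloud, rotation, and translation below are made up for illustration):

```python
import numpy as np

def svd_fit(src, dst):
    """Rigid transform (R, t) with dst ~ R @ src + t, via SVD (Kabsch)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic object point cloud, rotated 30 degrees about z and translated.
rng = np.random.default_rng(2)
src = rng.uniform(-1, 1, size=(20, 3))
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R_est, t_est = svd_fit(src, dst)
```

The determinant correction `D` guards against the degenerate case where the best orthogonal fit is a reflection rather than a rotation.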

12.
An autonomous mobile robot must be able to navigate in an unknown environment, and the simultaneous localization and map building (SLAM) problem is central to this autonomy. Vision sensors are attractive equipment for an autonomous mobile robot because they are information-rich and impose few restrictions across applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variation in illumination and from occlusion, because they mostly extract corner points for the feature map. Moreover, due to the narrow field of view of the pin-hole camera, they are not adequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method that uses vertical lines extracted from an omni-directional camera image and horizontal lines from range sensor data. Thanks to the large field of view of the omni-directional camera, features remain in the image long enough to estimate the pose of the robot and the features more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as features, it reduces the effect of illumination changes and partial occlusion. It exploits not only the lines at wall corners but also the many other vertical lines at doors, columns, and information panels on walls, which cannot be extracted by a range sensor. Finally, since the horizontal lines are used to estimate the positions of the vertical-line features, no camera calibration is required. Experimental work based on MORIS, our mobile robot test bed, moving at a human's pace in a real indoor environment verifies the efficacy of this approach.

13.
Stereo vision specific models for particle filter-based SLAM   Cited by: 1 (self-citations: 0; others: 1)
F.A.  J.L.  J.   《Robotics and Autonomous Systems》2009,57(9):955-970

14.
While the most accurate solution to off-line structure from motion (SFM) problems is undoubtedly to extract as much correspondence information as possible and perform batch optimisation, sequential methods suitable for live video streams must approximate this to fit within fixed computational bounds. Two quite different approaches to real-time SFM, also called visual SLAM (simultaneous localisation and mapping), have proven successful, but they sparsify the problem in different ways. Filtering methods marginalise out past poses and summarise the information gained over time with a probability distribution. Keyframe methods retain the optimisation approach of global bundle adjustment but, to remain computationally tractable, must select only a small number of past frames to process.

15.
This paper presents a new method for three-dimensional object tracking that fuses information from stereo vision and stereo audio. From the audio data, directional information about an object is extracted by Generalized Cross Correlation (GCC), and the object's position in the video data is detected using the Continuously Adaptive Mean shift (CAMshift) method. The obtained localization estimates, combined with confidence measurements, are then fused to track an object using Particle Swarm Optimization (PSO). In our approach the particles move in 3D space and iteratively evaluate their current position against the localization estimates of the audio and video modules and their confidences, which allows the object's three-dimensional position to be determined directly. This technique has low computational complexity, and its tracking performance is independent of any kind of model, statistics, or assumptions, in contrast to classical methods. The introduction of confidence measurements further increases the robustness and reliability of the entire tracking system and allows adaptive, dynamic fusion of heterogeneous sensor information.
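The GCC step above turns the time delay between the two audio channels into a direction estimate. A minimal sketch of delay estimation with GCC and the common PHAT weighting, on a synthetic circularly shifted noise signal (the 1024-sample length and 7-sample delay are illustrative assumptions):

```python
import numpy as np

def gcc_delay(x, y):
    """Inter-channel delay in samples via GCC with PHAT weighting.
    Positive result means y lags x. Uses a circular model for simplicity."""
    X, Y = np.fft.rfft(x), np.fft.rfft(y)
    S = np.conj(X) * Y
    S /= np.maximum(np.abs(S), 1e-12)   # PHAT: keep only the phase
    cc = np.fft.irfft(S, n=len(x))      # generalized cross-correlation
    lag = int(np.argmax(cc))
    if lag > len(x) // 2:               # map wrap-around to a signed lag
        lag -= len(x)
    return lag

rng = np.random.default_rng(3)
left = rng.standard_normal(1024)
right = np.roll(left, 7)                # right channel delayed by 7 samples
lag = gcc_delay(left, right)
```

Given the lag, the microphone spacing, and the speed of sound, the bearing to the source follows from simple trigonometry; real recordings would also need windowing and handling of non-circular delays.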

16.
Three-Dimensional Vision Methods for Target Pose Measurement   Cited by: 26 (self-citations: 0; others: 26)
If the coordinates of a set of feature points can be measured in two spatial coordinate frames, the pose relationship between the two targets can be solved. The precondition for this pose-measurement approach is that the positional relationships among the feature points remain identical across the different coordinate frames; computational error, however, destroys this fixed relationship. Two model-based 3D vision methods are therefore proposed: model-based monocular vision and model-based binocular vision. The former starts from the physical meaning of the vision computation and enforces the model constraint through a simple constrained iterative solution; the latter fuses simple constrained least squares with the model-based monocular method to enforce the model constraint. With the model constraint, the monocular method achieves very high measurement accuracy, while model-based binocular vision improves displacement accuracy only modestly over traditional model-free stereo vision but improves attitude accuracy considerably.

17.
To address problems such as erroneous grasping when a robot palletizes by teach-in programming, a binocular-vision-guided localization method for robot palletizing is proposed. Binocular calibration, hand-eye calibration, and epipolar rectification are performed, and an algorithm combining grayscale transformation with image filtering preprocesses the images to improve their quality. Combining the image's gray-level distribution with the stereo matching algorithm improves matching accuracy and yields a better disparity image; processing the disparity image gives the center point of the product region on the pallet, and the parallel binocular system together with the hand-eye calibration results then guides the robot to locate and palletize the product. Experiments show that the method can accurately locate the center point of the product region, obtain its coordinates, and guide the robot in palletizing.

18.
3-D Depth Reconstruction from a Single Still Image   Cited by: 4 (self-citations: 0; others: 4)
We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.

19.
Monte Carlo Self-Localization for Robots Based on Monocular Vision   Cited by: 1 (self-citations: 0; others: 1)
For the monocular-vision robot localization problem, a Monte Carlo self-localization method based on an improved scale-invariant feature transform (SIFT) is proposed. Extracting features with the improved SIFT preserves invariance to image intensity changes, scale, 3D viewpoint, and noise, while reducing the number of SIFT feature points and the time spent extracting and matching them. As the robot moves, observations of environmental feature points are fused with odometry through particle filtering, yielding more accurate coordinates for environmental landmarks. Simulation results verify the effectiveness of the method.
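The particle-filter fusion of odometry and landmark observations can be sketched in one dimension: particles are propagated with noisy odometry, weighted by the likelihood of an observed range to a known landmark, and resampled. All motion and noise parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D robot: true position advances by u = 1.0 per step; a landmark at 10.0
# is observed as a noisy range.
N, landmark, u = 500, 10.0, 1.0
odo_std, obs_std = 0.3, 0.2
particles = rng.normal(0.0, 0.5, N)       # initial belief around position 0
weights = np.full(N, 1.0 / N)
true_pos = 0.0

for _ in range(8):
    true_pos += u
    # Predict: propagate each particle with noisy odometry.
    particles += u + rng.normal(0.0, odo_std, N)
    # Update: weight by likelihood of the observed range to the landmark.
    z = (landmark - true_pos) + rng.normal(0.0, obs_std)
    w = np.exp(-0.5 * ((landmark - particles) - z) ** 2 / obs_std**2) + 1e-12
    weights = w / w.sum()
    # Resample (systematic) to concentrate particles on likely poses.
    positions = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    particles = particles[idx]
    weights.fill(1.0 / N)

estimate = particles.mean()
```

In the paper's setting the state would be the robot's 2D pose and the landmarks would be SIFT feature points, but the predict/weight/resample cycle is the same.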

20.
This paper presents a binocular vision localization system built around an embedded EPXA10 processor, with an automatic sewage-discharge robot for sewage trucks as the application. A fuzzy algorithm is proposed for edge detection of the target image and, combined with a binocular localization algorithm, implements machine-vision localization. Tailored to the working characteristics of the sewage-discharge robot, the system actively searches for special markers so as to track and localize the target quickly and accurately, completing 3D localization of the truck's discharge outlet. Based on the localization coordinates, the robot can accurately connect the sewage discharge pipe and quickly drain the sewage into the pool.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号