Similar Documents
19 similar documents found (search time: 46 ms)
1.
To address target tracking for small unmanned aerial vehicles (UAVs), a target tracking and localization algorithm based on binocular vision and the CamShift algorithm is proposed. The left and right images from the binocular camera are processed with CamShift to obtain the target's center feature point, which is then reconstructed in 3D to yield the relative position and yaw angle between the UAV and the target in the body coordinate frame. A Kalman filter refines the measurements, and the resulting estimates are fed back to the flight control system, enabling autonomous tracking flight. Results show that the proposed algorithm has low error and high stability and accuracy.
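The depth-from-disparity reconstruction at the heart of such a pipeline can be sketched as follows (a minimal numpy sketch for a rectified parallel stereo rig; the focal length, baseline, and principal point below are illustrative values, not the paper's calibration):

```python
import numpy as np

def triangulate_center(uv_left, uv_right, f, b, cx, cy):
    """Recover the 3D position of a matched target center from a
    rectified stereo pair: depth Z = f * b / disparity."""
    d = uv_left[0] - uv_right[0]      # disparity in pixels
    Z = f * b / d                     # depth along the optical axis
    X = (uv_left[0] - cx) * Z / f     # lateral offset
    Y = (uv_left[1] - cy) * Z / f     # vertical offset
    return np.array([X, Y, Z])

# Example: f = 800 px, baseline 0.12 m, principal point (320, 240)
p = triangulate_center((400, 240), (352, 240), f=800, b=0.12, cx=320, cy=240)
```

The relative yaw to the target then follows from `arctan2(X, Z)` on the reconstructed point.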

2.
于乃功  黄灿  林佳 《计算机测量与控制》2012,20(10):2654-2656,2660
To enable a biped robot with monocular vision to localize and range target objects, a geometric ranging method is proposed that computes the target's depth from the fixed parameters of the monocular vision system, based on pinhole imaging and geometric coordinate transformation. Using the fixed parameters of the robot's onboard monocular camera, the method solves for the target's position in 3D space and its distance from the robot through geometric mapping relations, overcoming the loss of ranging and localization accuracy caused by environmental changes and errors in recognizing reference objects, and thereby achieving accurate monocular tracking and localization of target objects. Experiments on the proposed method yield data with small error relative to theoretical results, demonstrating its feasibility.
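One common instance of such a fixed-parameter geometric mapping is ranging a point on the ground plane from the camera's known height and tilt (a hedged sketch, not necessarily the paper's exact derivation; all parameter values are illustrative):

```python
import math

def ground_distance(v, fy, cy, cam_height, tilt_rad):
    """Horizontal distance to a ground-plane point imaged at pixel row v,
    for a pinhole camera at height cam_height tilted down by tilt_rad."""
    # Angle of the ray through row v relative to the optical axis
    ray = math.atan((v - cy) / fy)
    angle = tilt_rad + ray            # total depression angle of the ray
    return cam_height / math.tan(angle)

# Camera 0.5 m above the floor, tilted down 30 degrees, fy = 600 px, cy = 240
d = ground_distance(v=240, fy=600, cy=240, cam_height=0.5,
                    tilt_rad=math.radians(30))
```

A point on the image's principal row lies along the optical axis, so the distance reduces to height over the tangent of the tilt angle.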

3.
To solve the localization problem for 3D stereoscopic screens, a screen localization algorithm based on binocular vision with target recognition and fitting is proposed. First, an edge detection algorithm extracts the screen contour to obtain the screen's position in the 2D image. Next, a binocular vision algorithm collects depth data for the recognized screen, and least-squares fitting of the sampled screen points yields the screen's spatial plane equation. Finally, the positions of the screen's corner points in 3D space are computed to determine the screen extent, completing the localization of the screen in 3D space. Experimental results show that the screen position computed by this method is highly accurate.
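The least-squares plane-fitting step can be illustrated directly (a generic sketch; the z = ax + by + c parameterization assumes the screen is not viewed edge-on):

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to an (N, 3) point array,
    returning the coefficients (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# Noisy depth samples drawn from the plane z = 0.2x - 0.5y + 3
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.2 * xy[:, 0] - 0.5 * xy[:, 1] + 3 + rng.normal(0, 1e-3, 200)
a, b, c = fit_plane(np.c_[xy, z])
```

Intersecting this fitted plane with the rays through the detected contour corners gives the screen's 3D extent.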

4.
Visual object detection aims to localize and recognize the objects present in an image. It is one of the classic tasks in computer vision, a prerequisite for many other vision tasks, and has important applications in autonomous driving, video surveillance, and other fields, attracting broad attention from researchers. With the rapid development of deep learning, object detection has made great progress. This paper first summarizes the basic pipeline of deep object detection during training and testing: the training stage includes data preprocessing, the detection network, label assignment, and loss computation, while in the testing stage the trained detector generates detections and post-processes them. It then reviews monocular-camera object detection methods, including anchor-based, anchor-free, and end-to-end prediction approaches, and summarizes common sub-module designs used in detectors. After the monocular methods, binocular-camera object detection methods are introduced. On this basis, domestic and international research progress in monocular and binocular object detection is compared, and future trends in visual object detection are discussed. Through this summary and analysis, we hope to provide a reference for researchers working on visual object detection.

5.
Computer vision is an emerging, rapidly developing discipline whose research has progressed from the laboratory to practical application. Because visual information is rich in content and intuitively effective in practice, using vision to find and determine the position of a target is an important approach in robotics. In recent years it has been widely applied in industrial automated assembly, and the demands placed on vision systems keep rising. This paper introduces and analyzes the practical applications of several major vision-based target localization methods at home and abroad, such as autonomous navigation and localization systems for mobile robots and hand-eye stereo vision systems.

6.
A camera is fixed in an eye-to-hand configuration to implement monocular-vision-guided localization for a five-degree-of-freedom manipulator. A control platform built around an STM32F4-series chip was constructed, the Monte Carlo numerical method was used to solve for the manipulator's workspace point cloud, and cubic polynomial interpolation was applied. Finally, with 3D building-block props as the experimental objects, an improved Harris corner detection algorithm combined with artificial auxiliary marker points was used, among blocks of various specifications, to … [abstract truncated]

7.
Monocular, binocular, and multi-camera visual tracking are the main computer vision localization and tracking approaches today. However, because binocular and multi-camera systems suffer from small fields of view, bulky system structures, and difficult stereo matching, in many industrial manufacturing settings they have gradually been replaced by monocular vision, which requires fewer calibration steps and has a simpler structure. This work implements moving-target tracking and localization with monocular vision and designs a single-camera tracking system: the CamShift algorithm recognizes, extracts, and detects the target from its color information, and a tracking algorithm combining neighborhood linear search with a Kalman filter accurately identifies and localizes the moving target. A series of theoretical experiments, verification, and analysis is presented along with the final experimental results.
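The Kalman-filter half of such a tracker can be sketched in a minimal constant-velocity form (a generic textbook sketch, not the paper's tuning; `q` and `r` are illustrative noise levels):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1D position measurements.
    Returns the filtered position estimates."""
    F = np.array([[1, dt], [0, 1]])    # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])         # we observe position only
    Q = q * np.eye(2)                  # process noise covariance
    R = np.array([[r]])                # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R            # innovation covariance
        K = P @ H.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)

# Target moving at 1 unit/frame, observed with noise of std 0.5
rng = np.random.default_rng(1)
true_pos = np.arange(50, dtype=float)
est = kalman_track(true_pos + rng.normal(0, 0.5, 50))
```

In a full tracker the same filter runs per image axis, with the CamShift window center as the measurement.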

8.
Tomato recognition and localization based on binocular stereo vision
郑小东  赵杰文  刘木华 《计算机工程》2004,30(22):155-156,171
Images are segmented by automatic threshold selection based on color features to recognize red tomatoes automatically and quickly. Centroid matching replaces conventional feature-point selection and matching, and the binocular stereo ranging formula is corrected. Validation shows that at working distances below 500 mm, the range error can be kept within ±10 mm.

9.
Autonomous target recognition and localization is an important foundation for the operation of intelligent forestry robots. Targeting trunk recognition and localization in forestry environments, a hardware platform for a real-time digital video processing system based on binocular vision is designed. Binocular cameras capture images, 3D information is computed from the captured data, and target localization and ranging results are output. Experimental results show that the hardware platform performs image acquisition and processing and achieves the expected experimental results.

10.
To address mis-grasping and related problems when robots palletize by teach-in, a binocular-vision-guided localization method for robot palletizing is proposed. Binocular calibration, hand-eye calibration, and epipolar rectification are performed, and images are preprocessed with an algorithm combining gray-level transformation and image filtering to improve image quality. Combining the image's gray-level distribution with the stereo matching algorithm improves matching accuracy and yields a better disparity image, from which the center point of the palletized-product region is obtained; the results of the parallel binocular system and hand-eye calibration then guide the robot to localize and palletize the products. Experiments show that the method precisely localizes the center point of the product region, obtains its coordinates, and guides the robot in palletizing.

11.
This paper presents a novel vision-based global localization that uses hybrid maps of objects and spatial layouts. We model indoor environments with a stereo camera using the following visual cues: local invariant features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image, where the optical axis passes through, which is similar to the data from a 2D laser range finder. This allows us to build our topological node that is composed of a horizontal depth map and an object location map. The horizontal depth map describes the explicit spatial layout of each local space and provides metric information to compute the spatial relationships between adjacent spaces, while the object location map contains the pose information of objects found in each local space and the visual features for object recognition. Based on this map representation, we propose a coarse-to-fine strategy for global localization. The coarse pose is estimated by means of object recognition and SVD-based point cloud fitting, and then is refined by stochastic scan matching. Experimental results show that our approaches can be used for an effective vision-based map representation as well as for global localization methods.
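The SVD-based point cloud fitting used for the coarse pose can be sketched as the standard Kabsch alignment (a generic version without the paper's specific weighting; variable names are illustrative):

```python
import numpy as np

def svd_fit(P, Q):
    """Rigid transform (R, t) aligning 3xN point set P onto Q via SVD
    (Kabsch algorithm, no scale): Q ~= R @ P + t."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate/translate a cloud and recover the transform
rng = np.random.default_rng(3)
P = rng.normal(size=(3, 40))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = R_true @ P + np.array([[1.0], [2.0], [0.5]])
R, t = svd_fit(P, Q)
```

In the paper's setting, P would be the 3D positions of recognized object features in the node's map and Q their observed positions.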

12.
Vision-based Target Geo-location using a Fixed-wing Miniature Air Vehicle
This paper presents a method for determining the GPS location of a ground-based object when imaged from a fixed-wing miniature air vehicle (MAV). Using the pixel location of the target in an image, measurements of MAV position and attitude, and camera pose angles, the target is localized in world coordinates. The main contribution of this paper is to present four techniques for reducing the localization error. In particular, we discuss RLS filtering, bias estimation, flight path selection, and wind estimation. The localization method has been implemented and flight tested on BYU’s MAV testbed and experimental results are presented demonstrating the localization of a target to within 3 m of its known GPS location.
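The RLS filtering idea, reduced to its simplest form, is a toy sketch that estimates a static target location under an identity measurement model (the coordinates below are illustrative, not from the flight tests):

```python
import numpy as np

def rls_estimate(measurements):
    """Recursive least-squares estimate of a constant vector from noisy
    direct measurements; with an identity measurement model this reduces
    to a running mean with a shrinking gain."""
    x = measurements[0].astype(float)
    P = 1.0                            # scalar covariance (identity H)
    for z in measurements[1:]:
        K = P / (P + 1.0)              # gain, unit measurement noise
        x = x + K * (z - x)            # innovation update
        P = (1 - K) * P                # covariance shrinks each step
    return x

rng = np.random.default_rng(4)
true_loc = np.array([40.25, -111.65])  # illustrative lat/lon-like pair
zs = true_loc + rng.normal(0, 0.05, size=(200, 2))
est = rls_estimate(zs)
```

Averaging many single-frame geolocation fixes this way suppresses zero-mean pixel and attitude noise, which is why the paper pairs RLS with separate bias estimation.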

13.
Research on a monocular-vision lane-departure and rear-end collision warning system for vehicles
皮燕妮  史忠科  黄金 《计算机仿真》2005,22(10):228-231
This paper designs a monocular-vision warning system against lane departure and rear-end collisions, intended for structured road environments in which the main obstacles are vehicles. Using the rich brightness, texture, and shape features of the road image sequence provided by the vision sensor, the road and vehicles ahead of the car are detected and tracked; according to the degree of danger, reliable lane-departure commands and rear-end collision warning signals are issued to the driver, supplemented by audible and visual alarms. The system's hardware and software architecture is described and the main program flowchart is given. Key techniques, including the algorithms for localizing the road and vehicles ahead and the warning scheme, are studied, and simulation and preliminary test results verifying the system's feasibility and effectiveness are presented.

14.
An autonomous mobile robot must have the ability to navigate in an unknown environment. The simultaneous localization and map building (SLAM) problem is closely related to this autonomous capability. Vision sensors are attractive equipment for an autonomous mobile robot because they are information-rich and impose few restrictions across applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variation in illumination and occlusion, because they mostly extract corner points for the feature map. Moreover, due to the narrow field of view of the pin-hole camera, they are not adequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method which uses vertical lines extracted from an omni-directional camera image and horizontal lines from the range sensor data. Due to the large field of view of the omni-directional camera, features remain in the image long enough to estimate the pose of the robot and the features more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as the features, it reduces the effect of illumination and partial occlusion. Moreover, we use not only the lines at corners of walls but also many other vertical lines at doors, columns, and the information panels on the wall, which cannot be extracted by a range sensor. Finally, since we use the horizontal lines to estimate the positions of the vertical line features, we do not require any camera calibration. Experimental work based on MORIS, our mobile robot test bed, moving at a human’s pace in a real indoor environment verifies the efficacy of this approach.

15.
A vision-based underwater pipeline recognition and localization system
This paper discusses how to recognize pipelines visually and localize them accurately in an underwater environment, and presents an overall design for the underwater vision system. For target recognition, the target's color is used as the feature, and colors are converted from RGB space to HSI space, improving image processing speed and the robustness of feature extraction. Accurate localization of points and lines in the underwater environment is analyzed in detail and a localization algorithm is given. Final experimental results demonstrate the algorithm's effectiveness.
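The RGB-to-HSI conversion mentioned above follows the standard formulas (a sketch assuming channels normalized to [0, 1], with H in degrees):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB in [0, 1] to HSI (hue in degrees)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(
        math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                      # acos covers 0..180; mirror for b > g
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)   # pure red
```

Hue and saturation are largely invariant to the intensity attenuation typical of underwater imagery, which is what makes this space attractive for color-based feature extraction.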

16.
熊雨农  李宏 《测控技术》2023,42(1):28-34
Vision-based measurement plays an important role in high-precision flight-test instrumentation. To address mis-recognition and low localization accuracy when identifying and localizing multiple markers in conventional vision measurement, a new ring-coded diagonal marker is designed, and a vision measurement technique based on an identification and localization algorithm for these ring-coded diagonal markers is proposed. The implementation of marker identification based on the EDCircles circle detection algorithm, sub-pixel localization based on the Harris corner detection algorithm, and decoding of the coded information via coordinate transformation are described in detail. Experiments verify that the coded-marker identification and localization algorithm is accurate, robust, and precise, achieving a 92% recognition rate even at large projection angles, and can meet the requirements of dynamic vision measurement tasks in flight testing.
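The Harris corner response underlying the sub-pixel localization step can be sketched per pixel as R = det(M) − k·trace(M)² over the gradient structure tensor M (a generic implementation with a simple 3×3 window, not the paper's exact detector):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    using a 3x3 box window over the gradient structure tensor."""
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box(a):                      # 3x3 box filter via shifted sums
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy ** 2
    return det - k * (Sxx + Syy) ** 2

# Synthetic image with one bright square: corners respond strongest
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

Real sub-pixel refinement then fits the response (or the gradient constraint) in a neighborhood of each integer-pixel maximum.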

17.
Stereo vision specific models for particle filter-based SLAM
F.A.  J.L.  J.   《Robotics and Autonomous Systems》2009,57(9):955-970

18.
While the most accurate solution to off-line structure from motion (SFM) problems is undoubtedly to extract as much correspondence information as possible and perform batch optimisation, sequential methods suitable for live video streams must approximate this to fit within fixed computational bounds. Two quite different approaches to real-time SFM – also called visual SLAM (simultaneous localisation and mapping) – have proven successful, but they sparsify the problem in different ways. Filtering methods marginalise out past poses and summarise the information gained over time with a probability distribution. Keyframe methods retain the optimisation approach of global bundle adjustment, but computationally must select only a small number of past frames to process.

19.
This paper presents a new method for three dimensional object tracking by fusing information from stereo vision and stereo audio. From the audio data, directional information about an object is extracted by the Generalized Cross Correlation (GCC) and the object’s position in the video data is detected using the Continuously Adaptive Mean shift (CAMshift) method. The obtained localization estimates combined with confidence measurements are then fused to track an object utilizing Particle Swarm Optimization (PSO). In our approach the particles move in the 3D space and iteratively evaluate their current position with regard to the localization estimates of the audio and video module and their confidences, which facilitates the direct determination of the object’s three dimensional position. This technique has low computational complexity and its tracking performance is independent of any kind of model, statistics, or assumptions, contrary to classical methods. The introduction of confidence measurements further increases the robustness and reliability of the entire tracking system and allows an adaptive and dynamical information fusion of heterogeneous sensor information.
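The GCC step for extracting directional information is commonly implemented with the phase transform (PHAT) weighting. Below is a minimal sketch estimating an inter-channel delay in samples (a real system would convert the delay to a bearing using the microphone geometry):

```python
import numpy as np

def gcc_phat(sig, ref):
    """Estimate the delay (in samples) of sig relative to ref using
    Generalized Cross Correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n)
    R = np.fft.rfft(ref, n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12     # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift]))
    return int(np.argmax(np.abs(cc))) - max_shift

rng = np.random.default_rng(5)
ref = rng.normal(size=1024)
sig = np.roll(ref, 7)                  # sig lags ref by 7 samples
delay = gcc_phat(sig, ref)
```

The PHAT weighting whitens the spectrum so the correlation peak stays sharp in reverberant environments, which is why it is the usual choice for audio localization.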
