Similar Documents
19 similar documents found (search time: 170 ms)
1.
Event cameras are bio-inspired sensors that output events by sensing changes in light intensity, responding to pixel-level brightness changes in the form of asynchronous, sparse events. They alleviate the unclear imaging that conventional cameras produce in scenes with complex lighting changes and high-speed object motion. Recently, learning-based pattern-recognition methods have converted event-camera output into pseudo-image representations and achieved great progress in vision tasks such as optical flow estimation and object recognition. However, this class of methods discards the … between event streams

2.
A Survey of Event-Camera-Based Robot Perception and Control   (total citations: 1; self-citations: 0; citations by others: 1)
As a new type of dynamic vision sensor, the event camera detects illumination changes independently at each pixel and asynchronously outputs an "event stream". Its small data volume, low latency, and high dynamic range open new possibilities for robot control. This paper surveys recent work combining event cameras with the perception and motion control of robots such as UAVs, manipulators, and humanoid robots, focusing on new event-camera-based control methods, principles, and achieved control performance, and points out the application prospects and development trends of event-camera-based robot control.

3.
高成强  张云洲  王晓哲  邓毅  姜浩 《机器人》2019,41(3):372-383
To achieve accurate localization of a mobile robot in dynamic indoor environments, a semi-direct RGB-D visual SLAM (simultaneous localization and mapping) algorithm incorporating a motion-detection module is proposed. It consists of three steps: motion detection, camera pose estimation, and dense mapping based on a TSDF (truncated signed distance function) model. First, an initial camera pose is estimated by sparse image alignment, minimizing the photometric error. Then, the visual-odometry pose estimate is used to motion-compensate the image, a Gaussian model over image patches is updated online, and moving objects are segmented according to changes in variance; local map points that project into moving image regions are discarded, and the camera pose is refined by minimizing the reprojection error, improving pose accuracy. Finally, a dense TSDF map is built from the camera poses and RGB-D images, and the motion-detection results together with color changes of map voxel blocks are used to update the map in real time in dynamic environments. Experimental results show that, in dynamic indoor environments, the algorithm effectively improves camera pose accuracy and updates the dense map in real time, improving both system robustness and reconstruction accuracy.
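The variance-based motion segmentation described above can be sketched as a running per-pixel Gaussian model (a minimal sketch of ours; the class name, update rate, and threshold are assumptions, and the paper additionally works on image patches after motion compensation):

```python
import numpy as np

class PixelGaussianModel:
    """Running per-pixel Gaussian background model (hypothetical sketch
    of the variance-based motion segmentation step)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float).copy()
        self.var = np.full(first_frame.shape, 1.0)
        self.alpha = alpha          # model update rate (assumed value)
        self.k = k                  # motion threshold in sigmas (assumed)

    def update(self, frame):
        diff = frame - self.mean
        # Pixels deviating more than k*sigma from the model count as moving.
        moving = diff ** 2 > (self.k ** 2) * self.var
        # Update mean/variance only where the scene appears static.
        a = np.where(moving, 0.0, self.alpha)
        self.mean += a * diff
        self.var = np.maximum((1 - a) * self.var + a * diff ** 2, 1e-2)
        return moving
```

Feeding a static scene shrinks the per-pixel variance; a region that suddenly changes brightness is then flagged as moving, and map points projecting into it can be excluded from pose refinement.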

4.
付婧祎  余磊  杨文  卢昕 《自动化学报》2023,(9):1845-1856
An event camera images the brightness changes of a scene and outputs an asynchronous event stream; it has extremely low latency and is little affected by motion blur. It can therefore be used for optical flow (OF) estimation in high-speed motion scenes. Based on the brightness-constancy assumption and the event generation model, exploiting the low latency of the event stream and fusing motion-blurred intensity frames, a continuous optical flow estimation algorithm for event cameras is proposed that improves flow accuracy in high-speed scenes. Experiments show that, compared with existing event-based optical flow algorithms, the method improves average endpoint error, average angular error, and mean squared error by 11%, 45%, and 8%, respectively. In high-speed scenes, it accurately reconstructs the continuous optical flow of fast-moving targets, preserving estimation accuracy in the presence of motion blur.
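The event generation model invoked above — a pixel emits an event each time its log-brightness changes by a contrast threshold C — can be sketched as follows (function name and threshold value are our assumptions, not the paper's implementation):

```python
import numpy as np

def events_from_frames(frames, times, C=0.2):
    """Idealized event-generation model: pixel (x, y) emits an event of
    polarity +/-1 each time its log-brightness moves by the contrast
    threshold C since its last event (C = 0.2 is an assumed value)."""
    ref = np.log(frames[0].astype(float))
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        d = np.log(frame.astype(float)) - ref
        ys, xs = np.where(np.abs(d) >= C)
        for y, x in zip(ys, xs):
            n = int(np.abs(d[y, x]) // C)        # several events may fire
            pol = 1 if d[y, x] > 0 else -1
            events.extend([(int(x), int(y), t, pol)] * n)
            ref[y, x] += pol * n * C             # per-pixel memory update
    return events
```

A pixel brightening from I to I·e^0.55 with C = 0.2 fires two positive events, which is the integrate-to-threshold behaviour the brightness-constancy derivation relies on.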

5.
Neuromorphic vision is an important research direction of brain-inspired computing in the new generation of artificial intelligence. Event cameras offer low power consumption, low information redundancy, and high dynamic range, which makes them valuable for the autonomous control of intelligent aerial vehicles and agile robots. Based on the spatio-temporal properties of event sequences, this paper studies optical flow estimation by local plane fitting and proposes an algorithm that fits local planes with the eigenvalue method, applying random sample consensus (RANSAC) to further improve robustness. Experiments show that the method effectively estimates optical flow for neuromorphic vision and is robust to a certain level of noise.
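The eigenvalue-based local plane fit can be sketched in a few lines: collect a spatio-temporal neighbourhood of events (x, y, t), take the eigenvector of the covariance matrix with the smallest eigenvalue as the plane normal, and read the normal flow off the time-surface gradient. Function names and degenerate-case handling are our assumptions, and the paper additionally wraps this fit in RANSAC:

```python
import numpy as np

def plane_fit_flow(events):
    """Normal-flow estimate from one local neighbourhood of events.
    `events`: (N, 3) array of (x, y, t). Returns (vx, vy) or None."""
    pts = np.asarray(events, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Total-least-squares plane fit: the eigenvector of the covariance
    # matrix with the smallest eigenvalue is the plane normal.
    _, vecs = np.linalg.eigh(centered.T @ centered)
    n = vecs[:, 0]
    if abs(n[2]) < 1e-9:
        return None                  # degenerate: plane parallel to t-axis
    grad = -n[:2] / n[2]             # gradient of the time surface t(x, y)
    return grad / (grad @ grad)      # normal flow: along grad, speed 1/|grad|

# Edge sweeping along +x at 2 px per time unit => time surface t = x / 2.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 5.0, size=(50, 2))
flow = plane_fit_flow(np.column_stack([xy, xy[:, 0] / 2.0]))
print(flow)   # ≈ [2, 0]
```

The fit recovers the edge velocity because the events of a moving edge lie on a plane in (x, y, t) space whose slope encodes the speed.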

6.
In a vision-aided inertial navigation system for UAVs, image data with uncertain latency cannot meet the synchronization requirements with other sensors during indoor navigation, so accurately estimating the relative delay between the vision sensor and the inertial measurement unit (IMU) is essential. This paper proposes a method that effectively estimates the image delay and compensates the visual data accordingly, and finally fuses the IMU and visual data with an extended Kalman filter (EKF)…
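One common way to use an estimated delay before the EKF fusion step is to propagate the stale vision measurement forward to the current time with the latest velocity estimate. A minimal first-order sketch (the constant-velocity assumption and function name are ours; the paper's EKF formulation is more involved):

```python
def compensate_delay(z_meas, vel, delay):
    """Shift a delayed position measurement forward by `delay` seconds
    using the current velocity estimate (first-order correction)."""
    return [z + v * delay for z, v in zip(z_meas, vel)]

# A position measured 0.2 s ago, with the UAV moving at (0.5, -1.0) m/s:
print(compensate_delay([1.0, 2.0], [0.5, -1.0], 0.2))   # ≈ [1.1, 1.8]
```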

7.
Event-Camera-Based Localization and Mapping Algorithms: A Survey   (total citations: 1; self-citations: 0; citations by others: 1)
The event camera is an emerging vision sensor that generates "events" by detecting illumination changes at individual pixels. By its working principle, it possesses low latency, high dynamic range, and other desirable properties that conventional cameras lack. How to apply event cameras to robot localization and mapping is a new research direction in visual SLAM. Starting from the sensor itself, this paper introduces the working principle of event cameras, existing localization and mapping algorithms, and related open-source datasets, with an emphasis on a detailed review and strengths-and-weaknesses analysis of existing event-camera-based localization and mapping algorithms.

8.
To localize a spherical robot, a stereo-vision-based localization method is proposed. A binocular camera captures an image sequence of the environment; Shi-Tomasi feature points are extracted, scale-invariant feature transform (SIFT) descriptors are computed, and stereo matching is performed by Euclidean distance. Features are tracked with the KLT algorithm, and the robot's pose change between consecutive frames is solved analytically. Feature-point screening, RANSAC, and Kalman filtering are applied to improve the accuracy and robustness of motion estimation. Experimental results verify the feasibility of the proposed method.
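The Euclidean-distance stereo matching step can be sketched as a mutual nearest-neighbour search over descriptors (the mutual-best check and the distance threshold are our additions for robustness, not necessarily the paper's exact criterion):

```python
import numpy as np

def match_descriptors(desc_left, desc_right, max_dist=0.5):
    """Match two descriptor sets by Euclidean distance; keep only pairs
    that are each other's nearest neighbour and closer than `max_dist`
    (threshold value is an assumption). Returns (left_idx, right_idx)."""
    d = np.linalg.norm(desc_left[:, None, :] - desc_right[None, :, :], axis=2)
    best_r = d.argmin(axis=1)     # best right candidate per left feature
    best_l = d.argmin(axis=0)     # best left candidate per right feature
    return [(i, int(j)) for i, j in enumerate(best_r)
            if best_l[j] == i and d[i, j] < max_dist]

left = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
right = np.array([[0.0, 1.0], [0.0, 0.0], [1.0, 0.0]])   # left, permuted
print(match_descriptors(left, right))   # [(0, 1), (1, 2), (2, 0)]
```

With real SIFT descriptors the same routine applies; the 128-dimensional vectors simply replace the toy 2-D descriptors above.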

9.
Video is the basic concept of visual information processing, but conventional video runs at only tens of Hz and cannot record the high-speed variation of light, forming a ceiling on the speed of machine vision. The root cause is that the notion of video grew out of film imaging and fails to exploit the potential of electronic and digital technology. The spike-vision model captures photons with a photosensitive device and emits a spike whenever the accumulated energy reaches a set threshold: the longer a spike takes to form, the weaker the received light, and vice versa. The light intensity at any instant can thus be estimated, enabling continuous imaging. Using ordinary devices, ultra-high-speed imaging chips and cameras a thousand times faster than film and broadcast video have been built, and spiking neural networks have then realized ultra-high-speed object detection, tracking, and recognition, breaking the traditional paradigm in which machine-vision speedups depend on linear growth in computing power. Starting from the biological basis and physical principles by which the spike-vision model represents visual information, this paper introduces a software simulator of the spike-vision principle and its computational process for simulating real-world photon propagation; describes the working mechanism and structural design of high-sensitivity photoelectric sensing devices and chips based on spike vision, image reconstruction from spike signals, and computational photography algorithms and systems that fuse spike signals with ordinary images; and presents spiking-neural-network-based ultra-high-speed moving object detection, tracking, and recognition. By comparing related research at home and abroad, the development and evolution of spike vision are surveyed. Spike-vision chips and systems have great application potential in industry (non-stop inspection of high-speed rail, electric power, and marine engines; high-speed monitoring in intelligent manufacturing), civil use (high-speed cameras, intelligent transportation, driver assistance, forensic evidence, and sports adjudication), and defense (high-speed countermeasures), making this an important direction for future research.
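The integrate-to-threshold principle described above, and the recovery of intensity from inter-spike intervals, can be sketched as follows (threshold value and function names are our assumptions):

```python
import numpy as np

THETA = 100.0   # firing threshold (arbitrary units, an assumed value)

def spike_times(intensity, duration, dt=1e-3):
    """Integrate-and-fire pixel: accumulate incoming light and emit a
    spike each time the accumulator reaches the threshold."""
    acc, t, spikes = 0.0, 0.0, []
    while t < duration:
        acc += intensity * dt
        if acc >= THETA:
            spikes.append(t)
            acc -= THETA
        t += dt
    return spikes

def estimate_intensity(spikes):
    """The longer a spike takes to form, the weaker the light:
    I ~ THETA / (mean inter-spike interval)."""
    return THETA / np.diff(spikes).mean()

s = spike_times(intensity=500.0, duration=2.0)
print(round(estimate_intensity(s)))   # 500
```

Because the inter-spike interval shrinks as the light gets brighter, the intensity at any instant can be read off the most recent interval, which is what enables continuous imaging.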

10.
Owing to its biological inspiration, the event camera breaks the conventional data-acquisition pattern of computer vision and directly addresses the pain points of RGB images, offering advantages unattainable by 2-D image sensors and attracting close attention from researchers. While event cameras bring reduced information redundancy, fast perception, high-dynamic-range sensing, and low power consumption, their asynchronous event data cannot be fed directly into existing computer-vision pipelines. Therefore, a key-event-based classification method is used to classify the event stream: corner events carrying important information are detected, and features are extracted only from them. This preserves the important characteristics of the events and condenses feature extraction while effectively reducing the computation spent on the remaining events. The method is validated by recognizing predefined gestures, achieving an accuracy of 97.86%.

11.
Research on a Binocular Vision Measurement Sensor   (total citations: 2; self-citations: 0; citations by others: 2)
A binocular vision measurement sensor composed of one line-laser projector and two CCD cameras working on the laser-triangulation principle is studied. The placement of the two CCD cameras and its influence on measurement accuracy are discussed, and the sensor's geometric parameters are determined. On this basis, the working procedure is described, including the calibration method and process, scan image acquisition, feature extraction, feature matching, and 3-D data generation. Combined with a scanning mechanism (such as a coordinate measuring machine), the sensor performs non-contact 3-D laser scanning measurement at high speed with fairly high accuracy, and can be applied in reverse engineering.
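For a rectified two-camera setup, the depth of a laser-stripe point follows directly from triangulation. A minimal sketch (the rectified-geometry assumption and the symbols f and b are ours):

```python
def triangulate_depth(x_left, x_right, f, b):
    """Depth from disparity for a rectified stereo pair:
    z = f * b / (x_left - x_right), with f the focal length in pixels
    and b the baseline between the two cameras."""
    return f * b / (x_left - x_right)

# f = 500 px, baseline 0.1 m, disparity 20 px  ->  depth ≈ 2.5 m
print(triangulate_depth(120.0, 100.0, 500.0, 0.1))
```

The laser stripe solves the correspondence problem: the matched pair (x_left, x_right) is simply where the stripe crosses the same scanline in the two images.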

12.
Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the output is composed of a sequence of asynchronous events rather than actual intensity images, traditional vision algorithms cannot be applied, so that a paradigm shift is needed. We introduce the problem of event-based multi-view stereo (EMVS) for event cameras and propose a solution to it. Unlike traditional MVS methods, which address the problem of estimating dense 3D structure from a set of known viewpoints, EMVS estimates semi-dense 3D structure from an event camera with known trajectory. Our EMVS solution elegantly exploits two inherent properties of an event camera: (1) its ability to respond to scene edges—which naturally provide semi-dense geometric information without any pre-processing operation—and (2) the fact that it provides continuous measurements as the sensor moves. Despite its simplicity (it can be implemented in a few lines of code), our algorithm is able to produce accurate, semi-dense depth maps, without requiring any explicit data association or intensity estimation. We successfully validate our method on both synthetic and real data. Our method is computationally very efficient and runs in real-time on a CPU.
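A toy version of the EMVS back-projection can indeed be written in a few lines: every event defines a viewing ray, the ray is sampled on a set of depth planes of a reference view, and crossings are accumulated in a Disparity Space Image whose local maxima give semi-dense depth. The simplifications below (identity rotations, translation-only trajectory, a dense voting loop) are ours, not the paper's efficient implementation:

```python
import numpy as np

def emvs_vote(events, centers, K, depth_planes, shape):
    """Accumulate event-ray crossings in a reference-view DSI.
    events: (u, v) pixels; centers: matching camera centres (identity
    rotation assumed); the reference camera sits at the origin."""
    H, W = shape
    Kinv = np.linalg.inv(K)
    dsi = np.zeros((len(depth_planes), H, W))
    for (u, v), c in zip(events, centers):
        ray = Kinv @ np.array([u, v, 1.0])        # bearing of the event ray
        for k, z in enumerate(depth_planes):
            lam = (z - c[2]) / ray[2]             # step to reference depth z
            X = c + lam * ray                     # 3-D point on the ray
            x = K @ (X / X[2])                    # project into reference view
            ui, vi = int(round(x[0])), int(round(x[1]))
            if 0 <= ui < W and 0 <= vi < H:
                dsi[k, vi, ui] += 1.0
    return dsi

# One 3-D point P = (2, 3, 10) seen from two camera positions: the votes
# intersect only on the correct depth plane.
K = np.array([[10.0, 0.0, 16.0], [0.0, 10.0, 16.0], [0.0, 0.0, 1.0]])
events = [(18.0, 19.0),   # P seen from the origin
          (16.0, 19.0)]   # P seen from centre (2, 0, 0)
centers = [np.zeros(3), np.array([2.0, 0.0, 0.0])]
dsi = emvs_vote(events, centers, K, depth_planes=[5.0, 10.0, 20.0], shape=(32, 32))
print(int(dsi[1, 19, 18]))   # 2 votes at the true depth plane z = 10
```

Rays from different viewpoints agree only at the true depth, so the vote count peaks there; no data association between events is ever needed.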

13.
Real-time people localization and tracking through fixed stereo vision   (total citations: 1; self-citations: 1; citations by others: 0)
Detecting, locating, and tracking people in a dynamic environment is important in many applications, ranging from security and environmental surveillance to assistance for people in domestic environments and the analysis of human activities. To this end, several methods for tracking people have been developed in the field of Computer Vision using different settings, such as monocular cameras, stereo sensors, and multiple cameras. In this article we describe a method for People Localization and Tracking (PLT) based on a calibrated fixed stereo vision sensor, its implementation, and experimental results. The system analyzes three components of the stereo data (the left intensity image, the disparity image, and the 3-D world locations of measured points) to dynamically update a model of the background; extract foreground objects, such as people and rearranged furniture; and track their positions in the world. The system is mostly suitable for indoor medium-size environments. It can detect and track people moving in a medium-size area (a room or a corridor) in front of the sensor with high reliability and good precision.

14.
Vergence provides robot vision systems with a crucial degree of freedom: it enables fixation of points in visual space at different distances from the observer. Vergence control, therefore, affects the performance of the stereo system as well as the results of motion estimation and tracking and, as such, must satisfy different requirements in order to be able to provide not only a stable fixation, but a stable binocular fusion, and a fast, smooth and accurate reaction to changes in the environment. To obtain this kind of performance the paper focuses specifically on the use of dynamic visual information to drive vergence control. In this context, moreover, the use of a space-variant, anthropomorphic sensor is described and some advantages in relation to vergence control are discussed to demonstrate the relevance of image plane geometry for this particular task. Expansion or contraction patterns and the temporal evolution of the degree of fusion measured in the log-polar domain are the inputs to the vergence control system and determine robust and accurate steering of the two cameras. Real-time experiments are presented to demonstrate the performance of the system covering different key situations.

15.
Advances in Monocular Depth Estimation Based on Deep Learning   (total citations: 1; self-citations: 0; citations by others: 1)
Monocular depth estimation, which recovers scene depth information from a single image, is widely used in fields such as intelligent vehicles and robot localization and is of great research value. With the development of deep learning, many deep-learning-based monocular depth estimation studies have emerged and performance has advanced considerably. This paper reviews recent deep-learning-based monocular depth estimation methods according to the type of training data the models use: models trained on single images, models trained on multiple images, and models whose training is optimized with auxiliary information. After surveying the commonly used datasets and performance metrics, classic monocular depth estimation models are compared. Models trained on single images have simple network structures but poor generalization. Depth estimation networks trained on multiple images generalize better but have many parameters, converge slowly, and take long to train. Introducing auxiliary information further improves depth estimation accuracy, but complicates the network structure and slows convergence. Many difficulties and challenges remain; exploiting the latent information in multi-image input and domain-specific constraints to improve performance is gradually becoming the trend in monocular depth estimation research.

16.
In this paper, we use computer vision as a feedback sensor in a control loop for landing an unmanned air vehicle (UAV) on a landing pad. The vision problem we address here is then a special case of the classic ego-motion estimation problem since all feature points lie on a planar surface (the landing pad). We study together the discrete and differential versions of the ego-motion estimation, in order to obtain both position and velocity of the UAV relative to the landing pad. After briefly reviewing existing algorithms for the discrete case, we present, in a unified geometric framework, a new estimation scheme for solving the differential case. We further show how the obtained algorithms enable the vision sensor to be placed in the feedback loop as a state observer for landing control. These algorithms are linear, numerically robust, and computationally inexpensive hence suitable for real-time implementation. We present a thorough performance evaluation of the motion estimation algorithms under varying levels of image measurement noise, altitudes of the camera above the landing pad, and different camera motions relative to the landing pad. A landing controller is then designed for a full dynamic model of the UAV. Using geometric nonlinear control theory, the dynamics of the UAV are decoupled into an inner system and outer system. The proposed control scheme is then based on the differential flatness of the outer system. For the overall closed-loop system, conditions are provided under which exponential stability can be guaranteed. In the closed-loop system, the controller is tightly coupled with the vision based state estimation and the only auxiliary sensors are accelerometers for measuring acceleration of the UAV. Finally, we show through simulation results that the designed vision-in-the-loop controller generates stable landing maneuvers even for large levels of image measurement noise. Experiments on a real UAV will be presented in future work.
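Because all feature points lie on the landing pad, the two views are related by a homography x2 ~ H x1, and the discrete ego-motion estimate starts from an estimate of H. A standard Direct Linear Transform sketch (the function name and normalization are our choices, not the paper's exact algorithm):

```python
import numpy as np

def estimate_homography(p1, p2):
    """Direct Linear Transform: solve for H with x2 ~ H x1 from >= 4
    point correspondences on the plane. p1, p2: (N, 2) pixel arrays."""
    rows = []
    for (x, y), (u, v) in zip(p1, p2):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector belonging to the
    # smallest singular value, reshaped to 3x3.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Pure image translation by (2, 3): the DLT recovers it.
p1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
p2 = p1 + np.array([2.0, 3.0])
H = estimate_homography(p1, p2)
print(np.round(H, 6))
```

Decomposing H with the known pad normal then yields the relative rotation and translation used as the position observation in the landing controller.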

17.
Many computer vision applications can benefit from omnidirectional vision sensing, rather than depending solely on conventional cameras that have constrained fields of view. For example, mobile robots often require a full 360° view of their environment in order to perform navigational tasks such as identifying landmarks, localizing within the environment, and determining free paths in which to move. There has been much research interest in omnidirectional vision in the past decade and many techniques have been developed. These techniques include: (i) catadioptric methods which can provide rapid image acquisition, but lack image resolution; and (ii) mosaicing and linear scanning techniques which have high image resolution but typically have slow image acquisition speed. In this paper, we introduce a novel linear scanning panoramic vision system that can acquire panoramic images quickly with little loss of image resolution. The system makes use of a fast line-scan camera, instead of a slower, conventional area-scan camera. In addition, a unique coarse-to-fine panoramic imaging technique has been developed that is based on smart sensing principles. Using the active vision paradigm, we control the motion of the rotating camera using feedback from the images. This results in high acquisition speeds and proportionally low storage requirements. Experimentation has been carried out, and results are given. Correspondence to: M.J. Barth (e-mail: barth@ee.ucr.edu)

18.
梁潇  李原  梁自泽  侯增广  徐德  谭民 《机器人》2007,29(5):0-450
In recent years, vision sensors have found increasing application in industrial automation and robot navigation. This paper presents the design and implementation of a vision sensor based on a DSP microprocessor. The sensor captures images of the environment, runs image-processing algorithms on the DSP core, and outputs the decision results directly to the control system for execution, avoiding the high-bandwidth communication channel otherwise needed to transfer large amounts of image data. The developed vision sensor is compact, performs well in real time, and is highly extensible, and it provides a support package of commonly used image-processing routines. The software and hardware development of the system is described in detail, and an application on an automatic weld-seam tracking platform verifies that the sensor's overall performance meets practical needs. Future work on the vision sensor is discussed at the end.

19.
Calibration of a Binocular Stereo Vision Sensor Based on a Coplanar Target   (total citations: 1; self-citations: 0; citations by others: 1)
According to the characteristics of binocular stereo vision sensors, a calibration method based on a coplanar target is proposed. Without any external measuring equipment, the target is moved freely in a plane to obtain the image coordinates of the calibration points, and the intrinsic parameters of the two cameras are calibrated with a combination of linear and nonlinear methods, using an intrinsic-parameter calibration model similar to that proposed by Dr. Janne Heikkilä and Dr. Olli Silvén. Experimental results show that the calibrated parameters differ somewhat from the factory parameters but are consistent with reality, and the calibration procedure is simple to operate and practical.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号