Similar Articles
20 similar articles found.
1.
《Advanced Robotics》2013,27(2-3):381-393
This paper proposes an image-based visual servo control method for a micro helicopter. The helicopter carries no onboard sensors to measure its position or attitude. Instead, a stationary camera placed on the ground extracts image features of the helicopter, and the differences between the current features and given reference features are computed. PID controllers then convert these feature errors into control input voltages that drive the helicopter. Because the reference is defined in the image frame, the proposed controller avoids major difficulties in computer vision such as numerical instability due to image noise or model uncertainties. An experimental result demonstrates that the proposed controller can keep the helicopter in a stable hover.
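To make the control structure concrete, the sketch below shows a discrete PID loop acting on image-feature errors. The gains, the number of feature channels, the sampling rate, and the voltage limits are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

class PID:
    """Simple discrete PID acting on one image-feature error channel."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per image-space coordinate of the helicopter's tracked features
# (assumed four channels here, at an assumed 30 Hz camera rate).
controllers = [PID(kp=0.8, ki=0.05, kd=0.2, dt=1 / 30) for _ in range(4)]

def control_step(current_features, reference_features):
    """Map image-feature errors to control voltages, clipped to an assumed actuator range."""
    errors = np.asarray(reference_features) - np.asarray(current_features)
    voltages = [c.step(e) for c, e in zip(controllers, errors)]
    return np.clip(voltages, -5.0, 5.0)
```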

2.
Accurate measurement of the position of features in an image is subject to a fundamental compromise: The features must be both small, to limit the effect of nonlinear distortions, and large, to limit the effect of noise and discretization. This constrains both the accuracy and the robustness of image measurements, which play an important role in geometric camera calibration as well as in all subsequent measurements based on that calibration. In this paper, we present a new geometric camera calibration technique that exploits the complete camera model during the localization of control markers, thereby abolishing the marker size compromise. Large markers allow a dense pattern to be used instead of a simple disc, resulting in a significant increase in accuracy and robustness. When highly planar markers are used, geometric camera calibration based on synthetic images leads to true errors of 0.002 pixels, even in the presence of artifacts such as noise, illumination gradients, compression, blurring, and limited dynamic range. The camera parameters are also accurately recovered, even for complex camera models.
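For context, the sketch below runs a conventional planar-target calibration with OpenCV. It is not the marker-localization technique described above: the paper's contribution is a far more accurate localization of large, dense control markers, which would replace the simple chessboard-corner detection used here; the file names and pattern size are assumptions.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # assumed inner-corner count of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in sorted(glob.glob("calib_*.png")):      # hypothetical calibration images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Sub-pixel refinement of the detected corners.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS (pixels):", rms)
```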

3.
《Advanced Robotics》2013,27(6-7):805-823
This paper addresses a vision-based method for estimating the vibration excited at the tip of a flexible-link manipulator. Vibration is estimated by observing the variation of image features projected onto a wrist camera, mimicking the use of a wrist camera for tip vibration control of a space manipulator. In space, a vision sensor is a feasible means of measuring the elastic vibration of a manipulator, since it is more reliable than sensors such as strain gauges. The proposed method takes advantage of the frequency characteristics of the visual information, which appear as a blurred background scene. From the high-frequency component of the projected image features, a Kalman filter-based observer is implemented as the vibration estimator. The implementation addresses two issues: incorporating the slow camera sensor into the fast servo loop, and compensating for the time delay due to image processing. With the vibration estimator, vibration suppression control relying solely on a wrist camera becomes possible. The scheme is successfully verified by experiments.
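As an illustration of the estimator structure, the sketch below implements a discrete Kalman filter tracking a single vibration mode from the high-pass-filtered image-feature displacement. The frame rate, mode frequency, and noise covariances are assumptions, and the delay compensation described above is omitted.

```python
import numpy as np

dt = 1.0 / 30.0                        # assumed camera frame period (the "slow" sensor)
omega = 2.0 * np.pi * 2.0              # assumed 2 Hz structural vibration mode
# Exact discrete transition of an undamped harmonic oscillator [displacement, velocity].
A = np.array([[np.cos(omega * dt),          np.sin(omega * dt) / omega],
              [-omega * np.sin(omega * dt), np.cos(omega * dt)]])
H = np.array([[1.0, 0.0]])             # the camera observes displacement only
Q = 1e-4 * np.eye(2)                   # assumed process noise
R = np.array([[1e-2]])                 # assumed feature-measurement noise

class VibrationEstimator:
    def __init__(self):
        self.x = np.zeros((2, 1))      # [tip displacement, tip velocity]
        self.P = np.eye(2)

    def step(self, z):
        """One predict/update cycle; z is the high-frequency feature displacement."""
        self.x = A @ self.x
        self.P = A @ self.P @ A.T + Q
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.array([[z]]) - H @ self.x)
        self.P = (np.eye(2) - K @ H) @ self.P
        return self.x.ravel()          # estimated tip displacement and velocity
```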

4.
It is necessary to estimate the weld bead width and depth of penetration using suitable sensors during welding to monitor weld quality. Among vision sensors, infrared sensing is the natural choice for monitoring welding processes, since welding is inherently a thermal process. An attempt has been made to estimate the weld bead width and depth of penetration from the infrared thermal image of the weld pool using artificial neural network models during A-TIG welding of 3 mm thick type 316 LN stainless steel plates. Real-time infrared images were captured with an IR camera over the entire weld length during A-TIG welding at various current values. Image features such as the length and width of the hot spot, the peak temperature, and other features obtained by line-scan analysis are extracted with image processing techniques at particular locations along the weld joint. These parameters, along with the corresponding current values, are used as inputs, while the measured weld bead width and depth of penetration are used as outputs of the neural network models. Accurate ANN models predicting weld bead width (9-11-1) and depth of penetration (9-9-1) have been developed. The correlation coefficients between measured and predicted values were 0.98862 for weld bead width and 0.99184 for depth of penetration.
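The following sketch shows an illustrative stand-in for the 9-11-1 bead-width network: a single-hidden-layer regressor with nine inputs and eleven hidden units. The training data here are synthetic placeholders; real IR-image features and measured bead widths would be required.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: 9 features per weld location (hot-spot length/width, peak temperature,
# line-scan features, welding current); y: measured bead width.
# Synthetic placeholders stand in for real measurements here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9))
y = X @ rng.normal(size=9) + 0.1 * rng.normal(size=200)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(11,), max_iter=5000, random_state=0))
model.fit(X, y)
print("training R^2:", model.score(X, y))
```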

5.
We propose camera models for cameras that are equipped with lenses that can be tilted in an arbitrary direction (often called Scheimpflug optics). The proposed models are comprehensive: they can handle all tilt lens types that are in common use for machine vision and consumer cameras and correctly describe the imaging geometry of lenses for which the ray angles in object and image space differ, which is true for many lenses. Furthermore, they are versatile since they can also be used to describe the rectification geometry of a stereo image pair in which one camera is perspective and the other camera is telecentric. We also examine the degeneracies of the models and propose methods to handle the degeneracies. Furthermore, we examine the relation of the proposed camera models to different classes of projective camera matrices and show that all classes of projective cameras can be interpreted as cameras with tilt lenses in a natural manner. In addition, we propose an algorithm that can calibrate an arbitrary combination of perspective and telecentric cameras (no matter whether they are tilted or untilted). The calibration algorithm uses a planar calibration object with circular control points. It is well known that circular control points may lead to biased calibration results. We propose two efficient algorithms to remove the bias and thus obtain accurate calibration results. Finally, we perform an extensive evaluation of the proposed camera models and calibration algorithms that establishes the validity and accuracy of the proposed models.

6.
Active camera calibration using pan, tilt and roll
Three-dimensional vision applications, such as robot vision, require modeling of the relationship between two-dimensional images and the three-dimensional world. Camera calibration is a process which accurately models this relationship. The calibration procedure determines the geometric parameters of the camera, such as the focal length and the image center. Most existing calibration techniques use predefined patterns and a static camera. Recently, a novel calibration technique for computing the focal length and image center has been developed that uses an active camera. This technique does not require any predefined patterns or point-to-point correspondence between images; only a set of scenes with some stable edges is needed. It was observed that the algorithms developed for the image center are sensitive to noise and hence unreliable in real situations. This report extends those techniques to develop a simpler, yet more robust, method for computing the image center.

7.
Intelligent visual surveillance — A survey
Detection, tracking, and understanding of moving objects of interest in dynamic scenes have been active research areas in computer vision over the past decades. Intelligent visual surveillance (IVS) refers to an automated visual monitoring process that involves analysis and interpretation of object behaviors, as well as object detection and tracking, to understand the visual events of the scene. The main tasks of IVS include scene interpretation and wide-area surveillance control. Scene interpretation aims at detecting and tracking moving objects in an image sequence and understanding their behaviors. In the wide-area surveillance control task, multiple cameras or agents are controlled in a cooperative manner to monitor tagged objects in motion. This paper reviews recent advances and future research directions for these tasks. The article consists of two parts: the first part surveys image enhancement, moving object detection and tracking, and motion behavior understanding; the second part reviews wide-area surveillance techniques based on the fusion of multiple visual sensors, camera calibration, and cooperative camera systems.

8.
3D reconstruction of objects from uncalibrated image sequences is an important technique and an active research topic, since it makes data acquisition very convenient. Point matching across the image sequence yields point clouds, and on this basis a hybrid 3D reconstruction method is proposed: first, a digital shape model (DSM) of the object is built from its 3D points; second, the object's outline is constructed by extracting its contour lines, in particular mutually parallel and perpendicular straight segments; third, existing 3D data models are combined to reconstruct the 3D object in terms of display and data structure. An experiment on a tea canister, with results displayed in Java3D, shows good results.

9.
Using a Micro2440 development board and an OV9650 camera, video images were successfully acquired, user-defined image processing could be applied synchronously to every video frame, and the processing results were displayed on an LCD at the same time. Experimental results show that this vision processing platform has good processing characteristics and is well suited as a digital image processing platform, laying the groundwork for further image processing, feature extension, and embedded applications.

10.
Visual Surveillance for Moving Vehicles
An overview is given of a vision system for locating, recognising and tracking multiple vehicles, using an image sequence taken by a single camera mounted on a moving vehicle. The camera motion is estimated by matching features on the ground plane from one image to the next. Vehicle detection and hypothesis generation are performed using template correlation and a 3D wire frame model of the vehicle is fitted to the image. Once detected and identified, vehicles are tracked using dynamic filtering. A separate batch mode filter obtains the 3D trajectories of nearby vehicles over an extended time. Results are shown for a motorway image sequence.
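To illustrate the hypothesis-generation step, the sketch below correlates a vehicle template against a frame with normalized cross-correlation and thresholds the response. The synthetic frame, template crop, and correlation threshold are placeholders; in the paper this step is followed by 3D wire-frame model fitting and dynamic filtering.

```python
import cv2
import numpy as np

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in for a road image
template = frame[200:240, 300:360].copy()                    # stand-in vehicle template

# Normalized cross-correlation of the template over the whole frame.
response = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
threshold = 0.7                                               # assumed correlation threshold
ys, xs = np.where(response >= threshold)
h, w = template.shape
hypotheses = [(int(x), int(y), w, h) for x, y in zip(xs, ys)]  # boxes for later model fitting
print(len(hypotheses), "vehicle hypotheses above threshold")
```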

11.
An unmanned aerial vehicle usually carries an array of sensors whose output is used to estimate vehicle attitude, velocity, and position. This paper details the development of guidance, navigation, and control strategies for a glider, which is capable of flying a terminal trajectory to a known fixed object using only a single vision sensor. Controlling an aircraft using only vision presents two unique challenges: First, absolute state measurements are not available from a single image; and second, the images must be collected and processed at a high rate to achieve the desired controller performance. The image processor utilizes an integral image representation and a rejective cascade filter to find and classify simple features in the images, reducing the image to the most probable pixel location of the destination object. Then, an extended Kalman filter uses measurements obtained from a single image to estimate the states that would otherwise be unobservable in a single image. In this research, the flights are constrained to keep the destination object in view. The approach is validated through simulation. Finally, experimental data from autonomous flights of a glider, instrumented only with a single nose‐mounted camera, intercepting a target window during short low‐level flights, are presented.
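The sketch below shows the integral-image (summed-area table) idea that underlies the fast feature evaluation mentioned above: any rectangular sum is obtained from four table lookups, so cascade-style rectangle features can be evaluated at frame rate. The image and the particular rectangle feature are placeholders.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: cumulative sums along both axes, padded with a zero row and column."""
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# A simple two-rectangle feature (difference of adjacent box sums),
# the kind a rejective cascade stage might evaluate.
img = np.random.rand(240, 320)
ii = integral_image(img)
feature = box_sum(ii, 50, 50, 70, 90) - box_sum(ii, 70, 50, 90, 90)
print("feature value:", feature)
```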

12.
A visual hull generation algorithm for freely captured viewpoints
A method is presented for constructing the visual hull from multiple images taken with a hand-held camera. The method places no restriction on how the camera moves; instead, corresponding feature points across the images are used to recover the pose from which each image was taken, so geometric models of fairly large outdoor scenes can be acquired very flexibly. Experimental results show that the visual hull models generated with this method are accurate and realistic, and meet the requirements of applications such as virtual reality.

13.
A survey of monocular vision localization methods
According to the number of image frames used, monocular vision localization methods fall into two classes: localization from a single frame and localization from two or more frames. Single-frame localization usually exploits known point, line, or curve features and their projections in the image; among these, methods based on point and line features are simple, effective, and widely applied. Localization from two or more frames has been studied less, because it is operationally complex and not very accurate. By introducing and commenting on these methods, this paper provides a reference for research on monocular vision localization.

14.
SLAM (simultaneous localization and mapping) has been a major topic in computer vision in recent years, and feature-based SLAM has become the mainstream approach thanks to its good stability and high computational efficiency. Current feature-based SLAM relies mainly on point features. However, point-feature visual odometry depends on data quality, easily loses tracking when the camera moves too fast, and produces feature maps that contain no scene-structure information. To address these shortcomings, an optimization algorithm based on combined point and line features is proposed. Instead of the traditional six-parameter representation based on segment endpoints, the algorithm represents a spatial line with four parameters and estimates the camera pose by joint graph optimization over point and line features. Experiments on public datasets and on self-collected fisheye imagery show that, compared with point-only methods, the approach effectively alleviates tracking loss caused by fast camera motion, lengthens the recovered trajectory, improves pose-estimation accuracy, and yields sparse feature maps that better reflect the structure of the scene.

15.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent work has highlighted the benefit of taking the photometric information of the entire image into account. That approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras, which makes it applicable to catadioptric and wide-field-of-view cameras. Several experiments were carried out successfully, with a fisheye camera controlling a 6-degree-of-freedom robot and with a catadioptric camera in a mobile robot navigation task.
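As a minimal sketch of the photometric idea, the code below uses the whole image as the visual feature: the error is the raw intensity difference from the desired image and the camera velocity follows the classical law v = -gain * pinv(L) e. Building the true luminance interaction matrix L (one 6-vector per pixel) requires image gradients, depth, and the central-projection model discussed in the paper, so a random placeholder matrix is used here.

```python
import numpy as np

def control_velocity(I_current, I_desired, L, gain=0.5):
    """Photometric control law: error is the raw intensity difference, output is a 6-DOF twist."""
    e = (I_current.astype(np.float64) - I_desired.astype(np.float64)).ravel()
    return -gain * np.linalg.pinv(L) @ e

I_star = np.random.rand(60, 80)        # desired image (placeholder)
I = np.random.rand(60, 80)             # current image (placeholder)
L = np.random.randn(60 * 80, 6)        # placeholder luminance interaction matrix
v = control_velocity(I, I_star, L)
print("commanded camera twist:", v)
```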

16.
This paper presents an adaptive strategy for automatic camera placement in three-dimensional space during robotized vision-based quality control. The proposed approach improves the overall efficiency of the system by ensuring correct image acquisition even when an obstacle along the camera's line of sight hides the object to be inspected, which would otherwise make inspection by template matching impossible. In particular, a strategy for automatically avoiding such an obstacle is defined.

17.
A vision-guided lawn-mowing robot is designed. A camera captures images of the lawn scene, deep-learning-based image segmentation distinguishes the mown region from the unmown region, and the dividing line between the two regions is extracted as the robot's planned path. The drive control system is built around an STM32F407VET6 main control chip, with an IR2014 combined with an H-bridge as the chassis motor drive circuit, and the robot follows the visually identified dividing line. Experimental results show that the control system can perform locomotion, obstacle avoidance, and mowing under visual navigation.
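To illustrate the path-extraction step, the sketch below takes a binary segmentation mask (1 for the mown region, 0 for the unmown region), finds the mown/unmown transition in each row, and fits a straight line through those points as the tracking path. The mask is synthetic and the segmentation network itself is outside this sketch.

```python
import numpy as np

def boundary_path(mask):
    """Fit a line (column = slope * row + intercept) through the mown/unmown transition."""
    rows, cols = [], []
    for r in range(mask.shape[0]):
        edges = np.flatnonzero(np.diff(mask[r].astype(np.int8)))  # transitions in this row
        if edges.size:
            rows.append(r)
            cols.append(int(edges[0]))       # left-most transition taken as the divide
    if len(rows) < 2:
        return None
    slope, intercept = np.polyfit(rows, cols, 1)
    return slope, intercept

mask = np.zeros((120, 160), dtype=np.uint8)
for r in range(120):
    mask[r, : 80 + r // 4] = 1               # synthetic slanted mown region
print(boundary_path(mask))
```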

18.
Improper camera orientation produces convergent vertical lines (keystone distortion) and skewed horizon lines (horizon distortion) in digital pictures; a-posteriori processing is then necessary to obtain appealing pictures. We show here that, after accurate calibration, the camera's on-board accelerometer can be used to automatically generate an alternative perspective view from a virtual camera, leading to images with residual keystone and horizon distortions that are essentially imperceptible at visual inspection. Furthermore, we describe the uncertainty in the position of each pixel of the corrected image as a function of the accelerometer noise. Experimental results show a similar accuracy for a smartphone and for a digital reflex camera. The method can find application in consumer imaging devices as well as in the computer vision field, especially when reference vertical and horizontal features are not easily detectable in the image.
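The sketch below illustrates the virtual-camera correction: the camera orientation is estimated from the measured gravity vector and the image is re-rendered through the pure-rotation homography H = K R K^-1 of an upright virtual camera. The intrinsics, the accelerometer sample, and the target "down" direction are illustrative values; the paper additionally calibrates the accelerometer and propagates its noise to per-pixel uncertainty.

```python
import numpy as np
import cv2

K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])                 # assumed camera intrinsics
g = np.array([0.05, 9.7, 0.8])                  # assumed accelerometer reading, camera frame

def rotation_aligning(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues form)."""
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    v, c = np.cross(a, b), float(np.dot(a, b))
    vx = np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

R = rotation_aligning(g, np.array([0.0, 1.0, 0.0]))   # put gravity along the image "down" axis
H = K @ R @ np.linalg.inv(K)                          # pure-rotation homography

img = (np.random.rand(1080, 1920, 3) * 255).astype(np.uint8)   # placeholder photograph
corrected = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
```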

19.
This article describes a framework that fuses vision and force feedback for the control of highly deformable objects. Deformable active contours, or snakes, are used to visually observe changes in object shape over time. Finite‐element models of object deformations are used with force feedback to predict expected visually observed deformations. Our approach improves the performance of large, complex deformable contour tracking over traditional computer vision tracking techniques. This same approach of combining deformable active contours with finite‐element material models is modified so that a vision sensor, i.e., a charge‐coupled device (CCD) camera, can be used as a force sensor. By visually tracking changes in contours on the object, material deflections can be transformed into applied stress estimates through finite element modeling. Therefore, internal object stresses due to object manipulation can be visually observed and controlled, thus creating a framework for deformable object manipulation. A pinhole camera model is used to accomplish vision and force sensor feedback assimilation from these two sensing modalities during manipulation.

20.
Flexible 3D coordinate measurement is widely used in industrial on-site metrology. Based on an analysis of the perspective imaging principle for multiple target points, the concept of visual coordinate measurement with an imaged point-array probe is proposed, and three complete mathematical models of the system together with their solution methods are given. A practical probe is designed, and a visual measurement prototype comprising three types of probes is implemented with a single area-array CCD camera. Actual measurements were taken with the prototype at a distance of 1000 mm from the camera and compared with the results of a coordinate measuring machine. The tests show that the measurement accuracy in the two directions parallel to the camera's image plane is higher than that along the camera's optical axis.
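As a sketch of the measurement principle, the code below recovers the pose of a known point-array target from a single image with PnP and then transforms an assumed stylus-tip offset into the camera frame. The target geometry, image points, intrinsics, and tip offset are illustrative only, not the probe design from the paper.

```python
import cv2
import numpy as np

object_points = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0],
                          [30, 30, 0], [15, 15, 0]], dtype=np.float64)   # mm, probe frame
image_points = np.array([[512, 384], [662, 384], [512, 534],
                         [662, 534], [587, 459]], dtype=np.float64)     # detected target centroids
K = np.array([[2000.0, 0.0, 512.0],
              [0.0, 2000.0, 384.0],
              [0.0, 0.0, 1.0]])                                         # assumed intrinsics

# Probe pose from a single image.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)

tip_in_probe = np.array([15.0, 15.0, -80.0])            # assumed stylus-tip offset (mm)
tip_in_camera = R @ tip_in_probe + tvec.ravel()         # measured 3D coordinate
print("probe tip (camera frame, mm):", tip_in_camera)
```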
