Similar Documents (20 results)
1.
In this paper we consider the problem of estimating the range of features on an affine plane in 3-D space by observing its image with a CCD camera, where the camera is assumed to undergo a known motion. The features considered are points, lines, and planar curves located on planar surfaces of static objects. The dynamics of the moving projections of the features on the image plane is described by a suitable differential equation on an appropriate feature space. This dynamics is used to estimate feature parameters, from which the range information is readily available. The proposed identification is carried out via a newly introduced identifier-based observer, whose performance is studied via simulation.

2.
Sensor fusion of a camera and a laser range finder is important for the autonomous navigation of mobile robots, and finding the transformation between the two sensors is the first step toward fusing their information. Many algorithms have been proposed, but they tend to require many steps to achieve reliable and accurate results. We propose a calibration structure with a triangular hole in its plane for the extrinsic calibration of a camera and a laser range finder. The locations of laser scan points that are invisible on the calibration plane can be determined using properties of the proposed structure. First, we classify the laser scan data into two groups: points on the plane and points off the plane. Then, we determine the absolute locations of the on-plane scan points by searching over the parameters of the line. Finally, we establish 3D-3D correspondences between the camera and the laser range finder, from which the extrinsic calibration is found using a conventional 3D-3D transformation computing algorithm. Keywords: calibration, camera, extrinsic calibration, laser range finder
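The final step above relies on "a conventional 3D-3D transformation computing algorithm". One standard choice is the SVD-based Kabsch/Umeyama solution, sketched below; the function name and the synthetic check are illustrative, not from the paper.

```python
import numpy as np

def rigid_transform_3d(P, Q):
    """Estimate R, t such that Q ~= R @ P + t from paired 3-D points.

    P, Q: (3, N) arrays of corresponding points.  Uses the SVD-based
    (Kabsch/Umeyama) closed-form solution, one conventional choice for
    the 3D-3D alignment step described above.
    """
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation and translation exactly.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 10))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
Q = R_true @ P + t_true
R, t = rigid_transform_3d(P, Q)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```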

3.
The problem of identifying the motion and shape parameters of a planar object undergoing a Riccati motion, from the optical flow it generates on the image plane of a single CCD camera, is studied. The optical flow is generated by projecting feature points on the object onto the image plane via perspective and orthographic projections. An important result we show is that, under perspective projection, the parameters of a specific Riccati dynamics extending the well-known "rigid motion" can be identified up to a choice of sign. The paper also discusses other Riccati equations obtained from quadratic extensions of rigid motion and affine motion. For each motion model and each of the two projection models, we show that whatever motion and shape parameters can be recovered from the optical flow can in fact be recovered from its linear approximation. We also extend our analysis to a pair of cameras.

4.
We propose a method to determine camera parameters for character motion that considers the motion itself. The basic idea is to approximately compute the area swept by the character's links when orthogonally projected onto the image plane, which we call the "motion area". Using the motion area, we can determine good fixed camera parameters and camera paths for a given character motion in off-line or real-time camera control. Our experimental results demonstrate that the camera-path generation algorithms compute a smooth moving camera path while the camera effectively displays the dynamic features of the character motion. Our methods can easily be combined with methods for generating occlusion-free camera paths, and we expect they can also serve general camera planning as a heuristic for measuring the visual quality of scenes that include dynamically moving characters.

5.
Laser-Pointer-Based Remote Human-Computer Interaction
Projectors are now widely used in teaching and meetings, yet traditional keyboard-and-mouse interaction still ties the presenter to the computer, which is inconvenient. To address this problem, a new laser-pointer-based remote human-computer interaction system is proposed. The system requires only an added camera and a video capture card: during interaction, the camera continuously films the projection screen; in each captured frame the laser-dot region is detected and its trajectory is tracked and recognized, and the recognition result is fed to the computer as user input. Compared with earlier similar systems, the novelty is twofold: features of the environment and the laser dot can be trained before use, which improves the system's adaptability; and the laser dot is detected by fusing multiple cues (such as color, motion, and shape), which improves robustness. Experimental results show that the system is both robust and adaptive in practice.
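A minimal sketch of multi-cue laser-dot detection in the spirit of the system above; the thresholds, the specific brightness/redness cues, and the function name are illustrative assumptions, not the paper's trained features.

```python
import numpy as np

def detect_laser_dot(frame, min_brightness=220, red_margin=60):
    """Locate a red laser dot in an RGB frame by fusing colour and
    intensity cues (illustrative thresholds, not the paper's values).

    frame: (H, W, 3) uint8 RGB image.  Returns the (row, col) centroid
    of the candidate pixels, or None if no pixel passes the cues.
    """
    f = frame.astype(np.int32)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    # A laser dot is both very bright and strongly red-dominant.
    mask = (r > min_brightness) & (r - np.maximum(g, b) > red_margin)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Tiny synthetic frame: dark background, bright red dot at (5, 7).
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[5, 7] = (255, 40, 40)
assert detect_laser_dot(frame) == (5.0, 7.0)
```

A real system would track the detected centroid over time (the trajectory recognition step) rather than treat each frame independently.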

6.
Uncalibrated obstacle detection using normal flow
This paper addresses the problem of obstacle detection for mobile robots, using the visual information provided by a single on-board camera as input. We assume that the robot is moving on a planar pavement, and any point lying outside this plane is treated as an obstacle. We address the problem by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, i.e. first-order time-and-space image derivatives, is used, and thus we cope with the aperture problem. Another important point is that, in contrast with other methods, the vehicle motion and the intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real-image data. A real-time version, implemented on a mobile robot, is described.
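The inverse perspective step above (projecting image points onto the horizontal plane) is an application of a plane-to-plane homography. A minimal sketch, assuming the homography `H` has already been estimated during the initialization stage (names are illustrative):

```python
import numpy as np

def inverse_perspective(H, pts):
    """Map image points onto the ground plane with a homography H.

    H: (3, 3) image-to-ground homography; pts: (N, 2) pixel coords.
    Returns (N, 2) ground-plane coordinates.
    """
    ph = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    gp = ph @ H.T                                   # apply H to each point
    return gp[:, :2] / gp[:, 2:3]                   # dehomogenize

# With the identity homography the mapping is the identity...
pts = np.array([[10.0, 20.0], [3.0, 4.0]])
assert np.allclose(inverse_perspective(np.eye(3), pts), pts)
# ...and a pure scaling homography scales the plane coordinates.
H = np.diag([2.0, 2.0, 1.0])
assert np.allclose(inverse_perspective(H, pts), 2 * pts)
```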

7.
This paper presents low-computational-complexity methods for micro-aerial-vehicle localization in GPS-denied environments. All the presented algorithms rely only on the data provided by a single onboard camera and an Inertial Measurement Unit (IMU), and address outlier rejection and relative-pose estimation. For outlier rejection we describe two methods. The first requires the observation of only a single scene feature together with the angular rates from the IMU, under the assumption that the local camera motion lies in a plane perpendicular to the gravity vector. The second requires the observation of at least two features, but relaxes the hypothesis on the vehicle motion and is therefore suitable for outlier detection under 6-DoF motion. We also show that, if the camera is rigidly attached to the vehicle, motion priors from the IMU can be exploited to discard wrong estimates in a 2-point-RANSAC-based framework. Thanks to their inherent efficiency, the proposed methods are well suited to resource-constrained systems. For pose estimation, we introduce a simple algorithm, based on the minimization of a cost function, that computes the vehicle pose from the observation of three point features in a single camera image once the roll and pitch angles have been estimated from IMU measurements; its low computational cost makes it well suited to real-time implementation. All the proposed methods are evaluated on both synthetic and real data.
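The outlier rejection above is framed as N-point RANSAC. A generic RANSAC skeleton of the kind such methods plug into is sketched below; the toy line-fitting model stands in for the paper's motion model, and all names are illustrative.

```python
import numpy as np

def ransac(data, fit, residual, sample_size, thresh, iters=200, rng=None):
    """Generic RANSAC loop: draw a minimal sample, fit a model, keep
    the model with the most inliers.  `fit` and `residual` encode the
    (application-specific) model, keeping the skeleton model-agnostic.
    """
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
    for _ in range(iters):
        sample = data[rng.choice(len(data), sample_size, replace=False)]
        model = fit(sample)
        inliers = residual(model, data) < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers

# Toy use: robust line fit y = a*x + b with gross outliers.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2 * x + 1
y[::3] += rng.uniform(5, 10, size=len(y[::3]))      # corrupt every 3rd point
data = np.column_stack([x, y])
fit = lambda s: np.polyfit(s[:, 0], s[:, 1], 1)
residual = lambda m, d: np.abs(np.polyval(m, d[:, 0]) - d[:, 1])
model, inliers = ransac(data, fit, residual, sample_size=2, thresh=0.1)
assert np.allclose(model, [2.0, 1.0], atol=1e-6)
assert inliers.sum() == 33                           # 50 points, 17 outliers
```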

8.
Knowledge of altitude, attitude, and motion is essential for an Unmanned Aerial Vehicle during critical maneuvers such as landing and take-off. In this paper we present a hybrid stereoscopic rig, composed of a fisheye and a perspective camera, for vision-based navigation. In contrast to classical stereoscopic systems based on feature matching, we propose methods that avoid matching between the hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated by decoupling: the fisheye camera contributes the attitude, while the perspective camera contributes the scale of the translation. The motion can be estimated robustly at the correct scale thanks to the knowledge of the altitude. We propose a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. Although this approach removes the need for any non-visual sensors, it can also be coupled with an Inertial Measurement Unit.

9.
We consider the stratified self-calibration (affine and metric reconstruction) problem from images acquired by a camera with unchanging internal parameters undergoing circular motion, for which the general stratified method (modulus constraints) is known to fail. In this paper we give a novel constraint on the plane at infinity in projective reconstruction for circular motion: the constant inter-frame motion constraint on the plane at infinity between every two adjacent views and a fixed view of the motion sequence, exploiting the fact that in many commercial systems the rotation angles are constant. An initial solution is obtained from the first three views of the sequence, and Stratified Iterative Particle Swarm Optimization (SIPSO) is proposed to obtain an accurate and robust solution when more views are at hand. Instead of using a traditional optimization algorithm as a final refinement step, the information of the whole motion sequence is exploited before computing the camera calibration matrix, which yields a more accurate and robust solution. Once the plane at infinity is identified, the calibration matrices of the camera and a metric reconstruction can be readily obtained. Experiments on both synthetic and real image sequences show the accuracy and robustness of the new algorithm.
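SIPSO builds on particle swarm optimization. A plain PSO core (not the paper's stratified variant, whose details are the paper's own) can be sketched as follows; the inertia and acceleration constants are conventional textbook values.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=100, bounds=(-5.0, 5.0), rng=None):
    """Minimal particle-swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position.
    """
    rng = rng or np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[pbest_cost.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

# Minimize a shifted quadratic; the optimum is at (1, 2).
g, c = pso(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, dim=2)
assert c < 1e-2 and np.allclose(g, [1, 2], atol=0.1)
```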

10.
Linear Determination of the Infinite Homography and Camera Self-Calibration
A new constraint equation on the infinite homography (the homography induced by the plane at infinity) is introduced, and based on it a new linear camera self-calibration algorithm is proposed. Compared with existing methods in the literature, this method places mild requirements on the camera motion (for example, it does not require the motions to be orthogonal): one translational motion and two arbitrary rigid motions suffice to determine the intrinsic parameters linearly and uniquely. Its main advantage is that determining the infinite homography requires neither projective reconstruction nor information about any finite plane; the only information needed is the image epipoles, which simplifies the existing algorithms in the literature. In addition, a necessary and sufficient condition is given under which the epipoles determine the infinite homography of a motion pair. Simulated and real-image experiments validate the correctness and feasibility of the method.

11.
An approach based on fuzzy logic for matching both articulated and non-articulated objects across multiple non-overlapping fields of view (FoVs) from multiple cameras is proposed; we call it the fuzzy logic matching algorithm (FLMA). The approach uses information about object motion, shape, and camera topology to match objects across camera views. The motion and shape information of targets is obtained by tracking them with a combination of the CONDENSATION and CAMShift tracking algorithms. The camera topology is obtained by computing the projective transformation of each view with the common ground plane. The algorithm is suitable for tracking non-rigid objects with both linear and non-linear motion. We show videos of tracking objects across multiple cameras based on FLMA; in our experiments, the system correctly matches targets across views with high accuracy.

12.
This paper focuses on the way to achieve accurate visual servoing tasks when the shape of the object being observed as well as the desired image are unknown. More precisely, we want to control the camera orientation with respect to the tangent plane at a certain object point corresponding to the center of a region of interest. We also want to observe this point at the principal point to fulfil a fixation task. A 3-D reconstruction phase must, therefore, be performed during the camera motion. Our approach is then close to the structure-from-motion problem. The reconstruction phase is based on the measurement of the 2-D motion in a region of interest and on the measurement of the camera velocity. Since the 2-D motion depends on the shape of the objects being observed, we introduce a unified motion model to cope both with planar and nonplanar objects. However, since this model is only an approximation, we propose two approaches to enlarge its domain of validity. The first is based on active vision, coupled with a 3-D reconstruction based on a continuous approach, and the second is based on statistical techniques of robust estimation, coupled with a 3-D reconstruction based on a discrete approach. Theoretical and experimental results compare both approaches.  相似文献   

13.
Image blur caused by object motion attenuates the high-frequency content of images, making post-capture deblurring an ill-posed problem. The recoverable frequency band quickly becomes narrower for faster object motion, as high frequencies are severely attenuated and virtually lost. This paper proposes to translate a camera sensor circularly about the optical axis during exposure, so that high frequencies can be preserved for a wide range of in-plane linear object motion in any direction within some predetermined speed. That is, although no object may be photographed sharply at capture time, differently moving objects captured in a single image can be deconvolved with similar quality. In addition, circular sensor motion is shown to facilitate blur estimation thanks to the distinct frequency zero patterns of the resulting motion-blur point-spread functions. An analysis of the frequency characteristics of circular sensor motion in relation to linear object motion is presented, along with deconvolution results for photographs captured with a prototype camera.

14.
Linear Theory and Algorithms for Camera Self-Calibration
Wu Fuchao, Hu Zhanyi. Chinese Journal of Computers, 2001, 24(11): 1121-1135.
This paper presents a new linear theory and algorithm for camera self-calibration. Compared with existing methods in the literature, its main advantage is that the requirements on camera motion are mild; for example, the motions need not be orthogonal. The key step of the method is determining the infinite homography (the homography of the plane at infinity). The following result is proved rigorously: let the camera undergo two sets of motions with unknown parameters, M1 = {(R1, t^11), (R1, t^12)} and M2 = {(R2, t^21), (R2, t^22)}; if (1) T1 = {t^11, t^12} and T2 = {t^21, t^22} are each linearly independent sets (i.e., the two translation vectors within each set are linearly independent), and (2) R1 and R2 have different rotation axes, then the camera intrinsic matrix and the motion parameters are determined linearly and uniquely. Moreover, under the four-parameter camera model, it is rigorously proved that a single set of motions determines the intrinsic matrix and motion parameters linearly and uniquely. Simulated and real-image experiments validate the correctness and feasibility of the method.

15.
In computer vision, occlusions are almost always seen as undesirable singularities that pose difficult challenges to image motion analysis problems, such as optic flow computation, motion segmentation, disparity estimation, or egomotion estimation. However, it is well known that occlusions are extremely powerful cues for depth or motion perception, and could be used to improve those methods.

In this paper, we propose to recover camera motion information based solely on occlusions, by exploiting two especially useful properties: occlusions are independent of the camera rotation, and they reveal direct information about the camera translation.

We assume a monocular observer, undergoing general rotational and translational motion in a static environment. We present a formal model for occlusion points and develop a method suitable for occlusion detection. Through the classification and analysis of the detected occlusion points, we show how to retrieve information about the camera translation (FOE). Experiments with real images are presented and discussed in the paper.


16.
Our obstacle detection method is applicable to deliberate translational motion of a mobile robot; in such motion, the epipoles of an image pair coincide and are termed the focus of expansion (FOE). We present an accurate method for computing the FOE and then use it to apply a novel rectification to each image, called a reciprocal-polar (RP) rectification. When the robot translation is parallel to the ground, as with a mobile robot, ground-plane image motion in RP-space is a pure shift along an RP image scan line and hence can be recovered by 1D correlation, even over large image displacements and without the need for corner matches. Furthermore, we show that the magnitude of these shifts follows a sinusoidal form along the second (orientation) dimension of the RP image. This gives the main result that ground-plane motion over RP image space forms a 3D sinusoidal manifold. Simultaneous ground-plane pixel grouping and recovery of the ground-plane motion thus amount to finding the FOE and then robustly fitting a 3D sinusoid to the shifts of maximum correlation in RP space. The phase of the recovered sinusoid corresponds to the orientation of the vanishing line of the ground plane, and the amplitude is related to the magnitude of the robot/camera translation. The recovered FOE, vanishing line, and sinusoid amplitude fully define the ground-plane motion (homography) across a pair of images, so obstacles and the ground plane can be segmented without any explicit knowledge of either camera parameters or camera motion.
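For noise-free data, fitting the sinusoidal shift profile reduces to linear least squares once A·sin(θ + φ) is rewritten as a·sin θ + b·cos θ. A minimal sketch (the paper's robust fitting and the FOE computation are omitted; names are illustrative):

```python
import numpy as np

def fit_sinusoid(theta, shifts):
    """Least-squares fit of shifts(theta) = A*sin(theta + phi).

    Expanding A*sin(t + phi) = a*sin(t) + b*cos(t) makes the problem
    linear in (a, b); amplitude and phase are then recovered from the
    two coefficients.
    """
    M = np.column_stack([np.sin(theta), np.cos(theta)])
    (a, b), *_ = np.linalg.lstsq(M, shifts, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)   # amplitude, phase

# Noise-free shifts with known amplitude/phase are recovered exactly.
theta = np.linspace(0.0, 2 * np.pi, 100)
A_true, phi_true = 3.0, 0.5
A, phi = fit_sinusoid(theta, A_true * np.sin(theta + phi_true))
assert np.allclose([A, phi], [A_true, phi_true], atol=1e-6)
```

In the method above, the recovered phase corresponds to the orientation of the ground plane's vanishing line and the amplitude to the translation magnitude.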

17.
In this paper we show how to carry out an automatic alignment of a pan-tilt camera platform with its natural coordinate frame, using only images obtained from the cameras during controlled motion of the unit. An active camera in the aligned orientation represents the zero position for each axis and allows axis odometry to be referred to a fixed reference frame; such referral is otherwise only possible by mechanical means, such as end-stops, which cannot take account of the unknown relationship between the camera coordinate frame and its mounting. The algorithms presented involve the calculation of two-view transformations (homographies or epipolar geometry) between pairs of images related by controlled rotation about individual head axes. From these relationships, which can be calculated linearly or optimised iteratively, an invariant line to the motion can be extracted which represents an aligned viewing direction. We present methods for general and degenerate motion (translating or non-translating), and general and degenerate scenes (non-planar and planar, but otherwise unknown), which do not require knowledge of the camera calibration and are resistant to lens-distortion non-linearity. Detailed experimentation, in simulation and in real scenes, demonstrates the speed, accuracy, and robustness of the methods, which apply to a wide range of circumstances with no need for calibration objects or complex motions. Accuracy to within half a degree can be achieved with a single motion, and we also show how to improve on this by incorporating images from further motions, using a natural extension of the basic algorithm.
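Extracting an invariant direction from a two-view transformation can be illustrated for the simplest case: a conjugate-rotation homography H = K·R·K⁻¹, whose real eigenvector is the image of the rotation axis. This toy assumes K = I and a non-degenerate rotation; the paper's general and degenerate cases are more involved.

```python
import numpy as np

def fixed_direction(H):
    """Return the unit eigenvector of H with a (near-)real eigenvalue.

    For a conjugate rotation H = K R K^-1 this eigenvector is the image
    of the rotation axis, i.e. an invariant direction of the motion.
    """
    w, V = np.linalg.eig(H)
    i = np.argmin(np.abs(w.imag))        # pick the real eigenvalue
    v = V[:, i].real
    return v / np.linalg.norm(v)

# Rotation about the y-axis with K = I: the axis [0, 1, 0] is invariant
# (recovered up to sign, hence the abs in the check).
a = 0.4
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
v = fixed_direction(R)
assert np.allclose(np.abs(v), [0.0, 1.0, 0.0], atol=1e-8)
```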

18.
We address and solve the self-calibration of a generic camera that performs planar motion while viewing (part of) a ground plane. Concretely, assuming initial sets of correspondences between several images of the ground plane as known, we are interested in determining both the camera motion and the geometry of the ground plane. The latter is obtained through the rectification of the image of the ground plane, which gives a bijective correspondence between pixels and points on the ground plane.

19.
A Fast Motion Estimation Algorithm for MPEG-4 Shape Coding
Ni Wei, Guo Baolong. Computer Science, 2005, 32(7): 128-130.
Motion estimation is a key technique in MPEG-4 shape coding, and this paper proposes a fast motion estimation algorithm tailored to it. The algorithm first scans the reference frame to obtain a binary boundary mask of the video object; during block matching it replaces the original additions with 1-bit XOR operations; it sets an effective early-termination criterion so that the search stops immediately at static points; and it applies successive elimination during the search, reducing the number of search points without affecting search accuracy. Experimental results show that, with the proposed fast search, the computational load of motion estimation is substantially lower than that of the original MPEG-4 VM search algorithm, while the coded bitstream length remains essentially the same.
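The 1-bit XOR substitution and the early-termination rule for static points can be sketched as follows; the block size, search range, and function names are illustrative, and the successive-elimination step is omitted.

```python
import numpy as np

def xor_sad(block, ref_block):
    """Mismatch count between two binary mask blocks: the 1-bit XOR
    substitution for the usual sum of absolute differences (each pixel
    contributes 0 or 1)."""
    return int(np.count_nonzero(block ^ ref_block))

def match_block(block, ref, y, x, search=2):
    """Full search of a binary `block` around (y, x) in a binary mask
    `ref`, terminating early on a perfect zero-cost match (the static
    case).  Returns (dy, dx, cost) of the best displacement."""
    h, w = block.shape
    best = (None, None, h * w + 1)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = ref[y + dy:y + dy + h, x + dx:x + dx + w]
            cost = xor_sad(block, cand)
            if cost < best[2]:
                best = (dy, dx, cost)
                if cost == 0:            # exact match: stop searching
                    return best
    return best

# A 2x2 all-True patch shifted by (1, 1) in the mask is found exactly.
ref = np.zeros((8, 8), dtype=bool)
ref[4:6, 5:7] = True
block = np.ones((2, 2), dtype=bool)
dy, dx, cost = match_block(block, ref, y=3, x=4, search=2)
assert (dy, dx, cost) == (1, 1, 0)
```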

20.
To measure the self-motion parameters of a monocular camera while it is zooming, a parametric calibration method for monocular zoom-camera self-motion is proposed. During the landing of a flight platform, a rigidly mounted, downward-looking monocular camera continuously images a static landing plane containing feature points with known world coordinates. From a single frame, the method solves for the camera's equivalent focal length at capture time and its 6-DoF pose relative to the landing plane; combining information from multiple frames then yields an estimate of the camera's self-motion velocity, which is converted into the landing motion parameters of the flight platform. Experimental results show that the method is feasible and effective.
