Similar Documents
20 similar documents found.
1.
2.
A flexible plane-based calibration method for portable structured light systems (Cited by: 2; self-citations: 0; others: 2)
高伟, 王亮, 胡占义. 《自动化学报》 (Acta Automatica Sinica), 2008, 34(11): 1358-1362
A portable structured light system must be convenient to use, so its calibration method must also rely on convenient and inexpensive equipment; devices with two or three orthogonal planes, or devices requiring additional fixtures, are unsuitable. For fast 3D acquisition, a method that estimates the projection matrices of the portable structured light system should be adopted. Addressing these requirements, this paper proposes a flexible plane-based calibration method for portable structured light systems. The method needs a calibration pattern and a reference pattern: the calibration pattern is fixed on a plane, while the reference pattern is projected onto that plane by an LCD projector. Using the cross-ratio and epipolar geometry constraints, point correspondences between the world coordinate system and the image coordinate systems of the projector and the camera are obtained, and from these correspondences the whole system is calibrated. Experimental results show that the method is highly accurate and robust.
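The abstract above relies on the cross-ratio to relate point correspondences on the calibration plane across the projector and camera views. As a minimal illustration (not the authors' implementation), the sketch below computes the cross-ratio of four collinear points, the projective invariant that stays unchanged in any image of those points.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear 2D points.

    The cross-ratio is invariant under projective transformations, which is
    what allows correspondences defined on the calibration plane to be
    related across the projector and camera views.
    """
    def dist(p, q):
        return np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return (dist(a, c) * dist(b, d)) / (dist(a, d) * dist(b, c))

# Four collinear points; any projective image of them yields the same value.
print(cross_ratio((0, 0), (1, 0), (2, 0), (4, 0)))
```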

3.
This paper presents a new system for rapidly acquiring complete 3-D surface models using a single orthographic structured light projector, a pair of planar mirrors, and one or more synchronized cameras. Using the mirrors, we project structured light patterns that illuminate the object from all sides (not just the side of the projector) and are able to observe the object from several vantage points simultaneously. This system requires the projected planes of light to be parallel, so we construct an orthographic projector using a Fresnel lens and a commercial DLP projector. A single Gray code sequence is used to encode a set of vertically-spaced light planes within the scanning volume, and five views of the illuminated object are obtained from a single image of the planar mirrors located behind it. From each real and virtual camera we recover a dense 3-D point cloud spanning the entire object surface using traditional structured light algorithms. A key benefit of this design is to ensure that each point on the object surface can be assigned an unambiguous Gray code sequence, despite the possibility of being illuminated from multiple directions. In addition to presenting a prototype implementation, we also develop a complete set of mechanical alignment and calibration procedures for utilizing orthographic projectors in computer vision applications. As we demonstrate, the proposed system overcomes a major hurdle to achieving full 360° reconstructions using a single structured light sequence by eliminating the need for merging multiple scans or multiplexing several projectors.
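For readers unfamiliar with the encoding mentioned above, the following toy sketch (not the authors' code) shows how binary-reflected Gray codes assign each light plane a codeword in which successive patterns differ by a single bit, which keeps decoding errors localized at stripe boundaries.

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Invert the Gray code: recover the plane index from a decoded codeword."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Eight vertically spaced light planes: adjacent codewords differ in one bit.
codes = [to_gray(i) for i in range(8)]
print([format(c, "03b") for c in codes])
assert [from_gray(c) for c in codes] == list(range(8))
```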

4.
This paper presents a novel approach for matching 2-D points between a video projector and a digital camera. Our method is motivated by camera–projector applications for which the projected image needs to be warped to prevent geometric distortion. Since the warping process often needs geometric information on the 3-D scene obtained from a triangulation, we propose a technique for matching points in the projector to points in the camera based on arbitrary video sequences. The novelty of our method lies in the fact that it does not require the use of pre-designed structured light patterns as is usually the case. The backbone of our application lies in a function that matches activity patterns instead of colors. This makes our method robust to pose changes and to severe photometric and geometric distortions. It also does not require calibration of the color response curve of the camera–projector system. We present quantitative and qualitative results with synthetic and real-life examples, and compare the proposed method with the scale invariant feature transform (SIFT) method and with a state-of-the-art structured light technique. We show that our method performs almost as well as structured light methods and significantly outperforms SIFT when the contrast of the video captured by the camera is degraded.
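The abstract does not give the matching function itself; the sketch below is only a hypothetical illustration of the general idea of matching temporal activity signals rather than colors, using frame-to-frame intensity change and zero-mean normalized correlation.

```python
import numpy as np

def activity(signal):
    """Temporal activity of one pixel: magnitude of frame-to-frame change."""
    return np.abs(np.diff(np.asarray(signal, float)))

def best_match(cam_signal, proj_signals):
    """Index of the projector pixel whose activity pattern correlates best
    with the camera pixel (zero-mean normalized correlation)."""
    a = activity(cam_signal)
    a = (a - a.mean()) / (a.std() + 1e-9)
    scores = []
    for s in proj_signals:
        b = activity(s)
        b = (b - b.mean()) / (b.std() + 1e-9)
        scores.append(float(np.dot(a, b)) / len(a))
    return int(np.argmax(scores)), max(scores)
```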

5.
Coded structured light is considered one of the most reliable techniques for recovering the surface of objects. This technique is based on projecting a light pattern and viewing the illuminated scene from one or more points of view. Since the pattern is coded, correspondences between image points and points of the projected pattern can be easily found. The decoded points can be triangulated and 3D information is obtained. We present an overview of the existing techniques, as well as a new and definitive classification of patterns for structured light sensors. We have implemented a set of representative techniques in this field and present some comparative results. The advantages and constraints of the different patterns are also discussed.
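As a reminder of the triangulation step these techniques share, here is a minimal sketch (with made-up intrinsics and light-plane parameters) that intersects the back-projected camera ray of a decoded pixel with the corresponding projected light plane.

```python
import numpy as np

def triangulate_ray_plane(K, pixel, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the light plane
    n·X + d = 0, both expressed in camera coordinates."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])  # ray direction, camera at origin
    t = -plane_d / (plane_n @ ray)                                # solve n·(t·ray) + d = 0
    return t * ray                                                # 3D point in camera coordinates

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])  # hypothetical intrinsics
n, d = np.array([0.0, 1.0, -0.5]), 0.2                     # hypothetical calibrated light plane
print(triangulate_ray_plane(K, (350, 260), n, d))
```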

6.
A new method for calibrating the structural parameters of line structured light sensors (Cited by: 2; self-citations: 0; others: 2)
段发阶, 刘凤梅, 叶声华. 《机器人》 (Robot), 1998, 20(6): 460-464
This paper presents a simple, fast, and high-accuracy method for calibrating the structural parameters of a line structured light sensor. To determine the relative pose between the light plane and the camera, a simple and practical trapezoidal target is designed for measuring the coordinates of points on the light plane, and the parameters are solved under penalty-function constraints. The method is applicable to both visible and invisible light. Experiments show that it achieves high accuracy.
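The calibration problem above amounts to recovering the light-plane pose in camera coordinates; the authors solve it under penalty-function constraints, which are not reproduced here. The sketch below only shows the plain unconstrained least-squares fit of a plane to points measured on the light plane.

```python
import numpy as np

def fit_light_plane(points):
    """Least-squares plane through 3D points: returns unit normal n and offset d
    such that n·X + d ≈ 0 for every measured point X."""
    P = np.asarray(points, float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)
    n = vt[-1]                       # direction of smallest spread = plane normal
    return n, -float(n @ centroid)

# Hypothetical points measured on the light plane via the target
pts = [(0, 0, 1.00), (1, 0, 1.01), (0, 1, 0.99), (1, 1, 1.00), (0.5, 0.5, 1.00)]
print(fit_light_plane(pts))
```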

7.
Catadioptric camera calibration using geometric invariants (Cited by: 5; self-citations: 0; others: 5)
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. In this paper, we propose a novel method for the calibration of central catadioptric cameras using geometric invariants. Lines and spheres in space are all projected into conics in the catadioptric image plane. We prove that the projection of a line provides three invariants whereas the projection of a sphere provides only two. From these invariants, constraint equations for the intrinsic parameters of the catadioptric camera are derived. The method therefore has two variants: the first uses projections of lines and the second uses projections of spheres. In general, the projections of two lines or of three spheres are sufficient to achieve catadioptric camera calibration. An important conclusion of this paper is that the method based on projections of spheres is more robust and more accurate than the one based on projections of lines. The performance of our method is demonstrated by the results of both simulations and experiments with real images.

8.
Real-time structured light coding for adaptive patterns (Cited by: 2; self-citations: 0; others: 2)
Coded structured light is a technique that allows the 3D reconstruction of poorly textured or non-textured scene areas. Because the codes are uniquely associated with visual primitives of the projected pattern, the correspondence problem is solved quickly using only local information, with robustness against disturbances such as high surface curvature, partial occlusion, out-of-field-of-view or out-of-focus regions. Real-time, one-shot 3D reconstruction is possible with pseudo-random arrays, where the encoding is done in a single pattern using spatial neighbourhoods. To correct more mismatched visual primitives and obtain patterns that are globally more robust, a higher Hamming distance between all the codewords used is desirable. Recent work in the structured light field has shown a growing interest in adaptive patterns, which can account for geometric or spectral specificities of the scene to provide better feature matching and reconstruction. Until now, such patterns could not benefit from the robustness offered by spatial neighbourhood coding with a minimal Hamming distance constraint, because the existing algorithms for this class of coding are designed for offline coding only. In this article, we show that thanks to two new contributions, a mixed exploration/exploitation search behaviour and an O(n²) to ~O(n) complexity reduction using the epipolar constraint, patterns with properties similar to those coded offline can be coded in real time. This allows the design of a complete closed-loop processing pipeline for adaptive patterns.
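To make the Hamming-distance requirement concrete, here is a tiny illustrative check (toy codewords, not the paper's coding algorithm) of the minimum pairwise Hamming distance of a codeword set; the larger it is, the more mismatched primitives can be detected or corrected.

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of positions at which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def min_hamming_distance(codewords) -> int:
    """Minimum pairwise Hamming distance over a codeword set."""
    return min(hamming(a, b) for a, b in combinations(codewords, 2))

print(min_hamming_distance(["0001", "0110", "1010", "1111"]))  # toy example
```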

9.
A single-view 3D reconstruction method is proposed that exploits geometric information between user-supplied image points and their corresponding 3D points. Since structured scenes consist of many planes and therefore contain abundant parallelism and orthogonality constraints, the method is mainly intended for the 3D reconstruction of structured scenes. First, the camera is calibrated and metric information is computed for each plane: a square-pixel camera is calibrated from the vanishing points of three mutually orthogonal directions, and each plane is then metrically rectified using its vanishing line and the images of the circular points. Next, taking into account the scale factor of each rectified plane and the relative orientation between non-orthogonal planes, all rectified planes are stitched together. Experiments on real images show that the method is simple and easy to use.
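The first step described above, calibrating a square-pixel camera from vanishing points of three mutually orthogonal directions, admits a simple linear solution. The sketch below shows one standard way to do it (not necessarily this paper's exact formulation): with zero skew and square pixels the image of the absolute conic is ω ∝ [[1,0,-u0],[0,1,-v0],[-u0,-v0,f²+u0²+v0²]], and each pair of orthogonal vanishing points gives one linear constraint v_i^T ω v_j = 0.

```python
import numpy as np

def calibrate_from_vps(v1, v2, v3):
    """Square-pixel, zero-skew calibration from vanishing points of three
    mutually orthogonal directions (homogeneous image coordinates)."""
    vps = [np.asarray(v, float) for v in (v1, v2, v3)]
    A, rhs = [], []
    for (x1, y1, z1), (x2, y2, z2) in [(vps[0], vps[1]), (vps[0], vps[2]), (vps[1], vps[2])]:
        # x1*x2 + y1*y2 + a*(x1*z2 + x2*z1) + b*(y1*z2 + y2*z1) + c*z1*z2 = 0
        A.append([x1 * z2 + x2 * z1, y1 * z2 + y2 * z1, z1 * z2])
        rhs.append(-(x1 * x2 + y1 * y2))
    a, b, c = np.linalg.solve(np.array(A), np.array(rhs))
    u0, v0 = -a, -b
    f = np.sqrt(c - u0 ** 2 - v0 ** 2)
    return np.array([[f, 0, u0], [0, f, v0], [0, 0, 1]])

# Synthetic vanishing points generated from f = 800, (u0, v0) = (320, 240)
print(calibrate_from_vps((1520, 640, 1), (-480, 1040, 1), (120, -760, 1)))
```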

10.
This paper concerns the incorporation of geometric information in camera calibration and 3D modeling. Using geometric constraints enables more stable results and allows us to perform tasks with fewer images. Our approach is motivated and developed within a framework of semi-automatic 3D modeling, where the user defines geometric primitives and constraints between them. In this paper, first a duality that exists between the shape parameters of a parallelepiped and the intrinsic parameters of a camera is described. Then, a factorization-based algorithm exploiting this relation is developed. Using images of parallelepipeds, it allows us to simultaneously calibrate cameras, recover shapes of parallelepipeds, and estimate the relative pose of all entities. Besides geometric constraints expressed via parallelepipeds, our approach simultaneously takes into account the usual self-calibration constraints on cameras. The proposed algorithm is completed by a study of the singular cases of the calibration method. A complete method for the reconstruction of scene primitives that are not modeled by parallelepipeds is also briefly described. The proposed methods are validated by various experiments with real and simulated data, for single-view as well as multiview cases.

11.
An easy calibration method for catadioptric cameras (Cited by: 3; self-citations: 0; others: 3)
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras with a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and pinhole models is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not need any fitting of partially visible conics, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of our proposed calibration method.

12.
Central catadioptric cameras are widely used in virtual reality and robot navigation, and camera calibration is a prerequisite for these applications. In this paper, we propose an easy calibration method for central catadioptric cameras with a 2D calibration pattern. Firstly, the bounding ellipse of the catadioptric image and the field of view (FOV) are used to obtain an initial estimate of the intrinsic parameters. Then, the explicit relationship between the central catadioptric and pinhole models is used to initialize the extrinsic parameters. Finally, the intrinsic and extrinsic parameters are refined by nonlinear optimization. The proposed method does not need any fitting of partially visible conics, and the projected images of the 2D calibration pattern can easily cover the whole image, so our method is easy and robust. Experiments with simulated data as well as real images show the satisfactory performance of our proposed calibration method.

13.
Two relevant issues in vision-based navigation are the field-of-view constraints of conventional cameras and the model and structure dependency of standard approaches. A good solution to these problems is the use of the homography model with omnidirectional vision. However, a plane of the scene will cover only a small part of the omnidirectional image, missing relevant information across the wide field of view, which is the main advantage of omnidirectional sensors. This paper is concerned with a new approach for computing multiple homographies from virtual planes using omnidirectional images, and with its application in an omnidirectional vision-based homing control scheme. The multiple homographies are robustly computed from a set of point matches across two omnidirectional views, using a method that relies on virtual planes independently of the structure of the scene. The method takes advantage of the planar motion constraint of the platform and computes virtual vertical planes from the scene. The family of homographies is also constrained to be embedded in a three-dimensional linear subspace to improve numerical consistency. Simulations and real experiments are provided to evaluate our approach.

14.
Central catadioptric cameras are imaging devices that use mirrors to enhance the field of view while preserving a single effective viewpoint. Lines and spheres in space are all projected into conics in the central catadioptric image plane; such conics are called line images and sphere images, respectively. We discovered that there exists an imaginary conic in the central catadioptric image plane, defined as the modified image of the absolute conic (MIAC), and by utilizing the MIAC, novel identical projective geometric properties of line images and sphere images may be exploited: each line image and each sphere image has double contact with the MIAC, in analogy with the discovery for the pinhole camera that the image of the absolute conic (IAC) has double contact with sphere images. Note that the IAC also exists in the central catadioptric image plane, but it does not have the double-contact property with line images or sphere images; this is the main reason for proposing the MIAC. From these geometric properties of the MIAC, two linear calibration methods for central catadioptric cameras, one using sphere images and one using line images, are proposed within the same framework. Many linear approaches to central catadioptric camera calibration using line images already exist, and using the property that line images are tangent to the MIAC only leads to an alternative geometric construction for calibration. For sphere images, however, only nonlinear calibration methods exist in the literature, so proposing linear methods for sphere images may be the main contribution of this paper. Our new algorithms have been tested in extensive experiments with respect to noise sensitivity.

15.
This paper proposes a basic structured light system for pose estimation. It consists of a circular laser pattern and a camera rigidly attached to the laser source. We develop a geometric model that allows the pose of the system, at scale, to be estimated efficiently relative to a reference plane onto which the pattern is projected. Three different robust estimation strategies, including two minimal solutions, are also presented within this geometric formulation. Synthetic and real experiments provide a complete quantitative and qualitative evaluation across different scenarios and environments. We also show that the system can be embedded for UAV experiments.

16.
Reconstruction from structured light can be greatly affected by indirect illumination such as interreflections between surfaces in the scene and sub-surface scattering. This paper introduces band-pass white noise patterns designed specifically to reduce the effects of indirect illumination while remaining robust to standard challenges in scanning systems such as scene depth discontinuities, defocus and a low camera-projector pixel ratio. While this approach uses unstructured light patterns that increase the number of projected images required, it is, to our knowledge, the first method able to recover scene disparities in the presence of both indirect illumination and scene discontinuities. Furthermore, the method requires neither calibration (geometric or photometric) nor post-processing such as phase unwrapping or interpolation from sparse correspondences. We show results for several challenging scenes and compare them to correspondences obtained with the phase-shift method and with the recently introduced method by Gupta et al., designed specifically to handle indirect illumination.
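The exact pattern design is in the paper; as a rough, hypothetical illustration of the idea of a band-pass white noise pattern, the sketch below filters white noise in the Fourier domain to keep only a band of radial frequencies (the cut-off values are made up, not taken from the paper).

```python
import numpy as np

def bandpass_noise_pattern(h, w, low, high, seed=0):
    """White noise filtered in the Fourier domain to keep radial frequencies
    in [low, high) cycles/pixel, scaled to an 8-bit projector image."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.fft2(rng.standard_normal((h, w)))
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    band = (radius >= low) & (radius < high)
    pattern = np.fft.ifft2(spectrum * band).real
    pattern -= pattern.min()
    return (255 * pattern / pattern.max()).astype(np.uint8)

pattern = bandpass_noise_pattern(768, 1024, low=0.05, high=0.15)  # illustrative cut-offs
```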

17.
This paper proposes a parameter calibration method for structured light systems based on a spatially coded pattern. Unlike conventional chessboard-based calibration strategies, it uses a coded structured light pattern to achieve high-accuracy calibration of the system. The procedure comprises: (1) a coded feature point detection operator is designed according to the geometric layout of the coded pattern; a topological structure is built on the detected feature points, and the images of the coded geometric elements are extracted using affine transformation and bilinear interpolation; (2) geometric element recognition is cast as a supervised classification problem, and a convolutional neural network trained on a large set of samples accurately recognizes and decodes the coded elements; (3) a projective transformation establishes the correspondence between the camera image plane and the projector image plane, and this correspondence transfers the chessboard corners of the calibration board from the camera image plane to the projector image plane, so that the intrinsic and extrinsic parameters of the camera and the projector are calibrated simultaneously. Calibration results show that the projector reprojection error of the method does not exceed 0.3 pixels; 3D reconstruction experiments show that, compared with conventional calibration methods, the proposed method significantly improves the calibration and 3D reconstruction accuracy of the system.
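Step (3) above hinges on a camera-to-projector plane homography. The sketch below (illustrative only, not the paper's code) estimates such a homography with the standard DLT from matched coded points and then uses it to transfer chessboard corners from the camera image plane to the projector image plane.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform estimate of H such that dst ~ H @ src
    (points as (x, y); at least four correspondences required)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 3)

def transfer(H, pts):
    """Map points through H, e.g. chessboard corners from the camera
    image plane to the projector image plane."""
    p = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

# Hypothetical decoded correspondences (camera pixel -> projector pixel)
cam = [(100, 100), (400, 120), (420, 380), (110, 390)]
prj = [(80, 90), (390, 95), (400, 350), (95, 360)]
H = homography_dlt(cam, prj)
print(transfer(H, [(250, 250)]))  # a chessboard corner transferred to the projector plane
```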

18.
In applications of augmented reality such as virtual studio TV production, multisite video conferencing using a virtual meeting room, and synthetic/natural hybrid coding according to the new ISO/MPEG-4 standard, a synthetic scene is mixed into a natural scene to generate a synthetic/natural hybrid image sequence. For realism, the illumination in both scenes should be identical. In this paper, the illumination of the natural scene is estimated automatically and applied to the synthetic scene. The natural scenes are restricted to scenes with non-occluding, simple, moving, mainly rigid objects. For illumination estimation, these natural objects are automatically segmented in the natural image sequence and three-dimensionally (3-D) modeled using ellipsoid-like models. The 3-D shape, the 3-D motion, and the displaced frame difference between two succeeding images are evaluated to estimate three illumination parameters. The parameters describe a distant point light source and ambient light. Using the estimated illumination parameters, the synthetic scene is rendered and mixed into the natural image sequence. Experimental results with a moving virtual object mixed into real video telephone sequences show that the virtual object appears natural, with the same shading and shadows as the real objects. Furthermore, shading and shadows allow the viewer to understand the motion trajectory of the objects much better.

19.
Occlusions as a guide for planning the next view (Cited by: 5; self-citations: 0; others: 5)
A strategy for acquiring 3-D data of an unknown scene using range images obtained by a light stripe range finder is addressed. The foci of attention are occluded regions, i.e., only the scene at the borders of the occlusions is modeled to compute the next move. Since the system has knowledge of the sensor geometry, it can resolve the appearance of occlusions by analyzing them. The problem of 3-D data acquisition is divided into two subproblems corresponding to two types of occlusion: an occlusion arises either when the reflected laser light does not reach the camera or when the directed laser light does not reach the scene surface. After taking the range image of a scene, the regions with no data due to the first kind of occlusion are extracted. The missing data are acquired by rotating the sensor system in the scanning plane, which is defined by the first scan. After a complete image of the surface illuminated from the first scanning plane has been built, the regions of missing data due to the second kind of occlusion are located. Then, the directions of the next scanning planes for further 3-D data acquisition are computed.

20.
A method for the calibration of a 3-D laser scanner (Cited by: 1; self-citations: 0; others: 1)
The calibration of a three-dimensional digitizer is a very important issue, considering that good quality, reliability, accuracy and high repeatability are the features a good digitizer is expected to have. The aim of this paper is to propose a new method for the calibration of a 3-D laser scanner, mainly for robotic applications. The acquisition system consists of a laser emitter and a webcam with fixed relative positions. In addition, a cylindrical lens is mounted on the laser housing so that it can project a light plane. An optical filter is also used to segment the laser stripe from the rest of the scene. For the calibration procedure, a digital micrometer is used to move a target of known dimensions. The calibration method is based on modeling the geometric relationship between the 3-D coordinates of the laser stripe on the target and its digital coordinates in the image plane. With this method it is possible to calibrate the intrinsic parameters of the video system, the position of the image plane and the laser plane in a given frame, all at the same time.
