Similar Documents
20 similar documents found (search time: 218 ms)
1.
Research on Applying Fisheye Projection to Virtual Reality Scenes   (Cited by: 2; self: 0, others: 2)
To generate wide-field-of-view virtual scenes and a realistic simulated sphere, this paper applies fisheye projection to virtual reality panoramas. It presents an algorithm that generates an equiangular fisheye projection from a latitude-longitude mapped image and implements 3D browsing and zooming; finally, examples demonstrate the effect of a fisheye-projection virtual space.
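As an illustration of the kind of mapping this abstract describes (not the authors' exact algorithm), the sketch below renders an equiangular (equidistant, r = f·θ) fisheye view from an equirectangular latitude-longitude panorama; the function name, the 180° hemispherical field of view, and nearest-neighbour sampling are assumptions.

```python
# Hypothetical sketch: equirectangular (latitude-longitude) panorama to an
# equiangular fisheye image, assuming an equidistant model r = f * theta
# and a 180-degree hemispherical field of view.
import numpy as np

def equirect_to_fisheye(pano, size=512):
    """pano: H x W x 3 equirectangular image; returns a size x size fisheye view."""
    H, W = pano.shape[:2]
    R = size / 2.0                       # fisheye image radius in pixels
    f = R / (np.pi / 2.0)                # equidistant model: r = f * theta
    ys, xs = np.mgrid[0:size, 0:size]
    dx, dy = xs - R + 0.5, ys - R + 0.5
    r = np.hypot(dx, dy)
    theta = r / f                        # angle from the optical axis
    phi = np.arctan2(dy, dx)             # azimuth around the axis
    # Ray direction with the optical axis along +z (looking at lon=0, lat=0).
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    out = pano[v, u]
    out[theta > np.pi / 2] = 0           # outside the hemisphere: black
    return out

if __name__ == "__main__":
    pano = (np.random.rand(512, 1024, 3) * 255).astype(np.uint8)
    print(equirect_to_fisheye(pano, size=512).shape)
```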

2.
A Fisheye Lens Correction Method Based on a Spherical Perspective Projection Constraint   (Cited by: 21; self: 0, others: 21)
英向华  胡占义 《计算机学报》2003,26(12):1702-1708
Fisheye cameras offer a large field of view, but the images they capture are severely distorted. This paper studies a fisheye lens correction method based on the spherical perspective projection constraint, namely that the spherical perspective projection of a space line is a great circle on the sphere. The authors first use a fisheye distortion-correction model containing the correction parameters to map points on the fisheye projection curve of a space line onto the sphere, then fit a great circle by minimizing the spherical distance from the mapped points to the circle. This recovers the correction parameters and thereby rectifies the fisheye image. Experiments on both simulated and real images show that the method yields satisfactory correction results.
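For intuition only, here is a minimal sketch of the inner step this abstract relies on: fitting a great circle to points already mapped onto the unit sphere by minimizing their spherical distance to it. In the paper this fit is coupled with the unknown distortion-correction parameters; the parameterization and the `fit_great_circle` name below are assumptions.

```python
# Fit the great circle that a space line should project to on the unit
# (viewing) sphere, minimizing the spherical distance |arcsin(p . n)| from
# each mapped point p to the circle with unit normal n.
import numpy as np
from scipy.optimize import minimize

def fit_great_circle(points):
    """points: N x 3 vectors on (or near) the unit sphere; returns unit normal n."""
    P = np.asarray(points, dtype=float)
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    # Linear initialization: the n minimizing sum (p . n)^2 is the singular
    # vector of P with the smallest singular value.
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    n0 = vt[-1]

    def cost(angles):
        a, b = angles
        n = np.array([np.cos(a) * np.cos(b), np.cos(a) * np.sin(b), np.sin(a)])
        d = np.arcsin(np.clip(P @ n, -1.0, 1.0))   # spherical distances
        return np.sum(d ** 2)

    a0 = np.arcsin(np.clip(n0[2], -1.0, 1.0))
    b0 = np.arctan2(n0[1], n0[0])
    a, b = minimize(cost, x0=[a0, b0], method="Nelder-Mead").x
    return np.array([np.cos(a) * np.cos(b), np.cos(a) * np.sin(b), np.sin(a)])

if __name__ == "__main__":
    # Noisy points near the great circle whose normal is the z axis.
    t = np.linspace(0, np.pi, 50)
    pts = np.stack([np.cos(t), np.sin(t), 0.02 * np.random.randn(50)], axis=1)
    print(fit_great_circle(pts))   # should be close to (0, 0, +-1)
```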

3.
A fisheye lens is an ultra-wide-angle lens, typically with a focal length of 6-16 mm and an angle of view that can exceed 180°, generally used for panoramic photography. Because fisheye lenses introduce strong barrel distortion (especially full-circular fisheye lenses, whose warping of the scene is severe), many objects appear deformed in the photograph, so the images cannot be used directly for…

4.
A Fisheye Camera Rectification and Calibration Method Extending the Pinhole Imaging Model   (Cited by: 1; self: 0, others: 1)
Fisheye cameras are used ever more widely thanks to their ultra-wide field of view (FOV), which can exceed 180°. Conventional camera rectification and calibration algorithms based on the pinhole imaging model are no longer well suited to such ultra-wide-FOV fisheye imaging systems. To retain the strengths of the pinhole model, this paper proposes a fisheye camera rectification and calibration method that extends it. The method keeps the pinhole model's advantages, namely simple implementation, visually natural results, and convenient calibration, while extending the FOV over which the model applies into the ultra-wide range. The basic idea is as follows: starting from pinhole-model rectification and calibration of roughly the central 90° of the fisheye camera's FOV, a non-equispaced dot-grid template is used, together with line fitting and natural-neighbor interpolation, to extend the FOV covered by the pinhole model. Multiple template images are captured with the fisheye camera from different angles to complete rectification and calibration; the camera is calibrated from the recovered pinhole-model parameters. Distortion-correction tests on real scenes captured by the fisheye camera show that the method corrects the fisheye distortion well and produces rectified images that look natural to the human eye. A single rectified image covers a 130° FOV, and combining template images taken from different angles extends the rectified FOV to 180°.

5.
Advancing with the RAW Format
《数码摄影》2009,(9):140-143
This photo was taken at the observation window of a high-rise hotel in Japan. Looking down through the window, the brilliant city lights and the view of the river bend are captivating. To convey the shooting environment, the photographer used a fisheye lens and shot through the hotel window glass, taking in the window frame and sill at the same time to create an intriguing frame-within-a-frame composition. Because the air was not very clear and the window glass stood in the way, the photo's sharpness is not high; and because night scenes have high contrast, neither the shadow nor the highlight detail is rich. We adjust this photo accordingly.

6.
Panoramic roaming systems have recently appeared on the Internet as a new, interactive way of representing virtual scenes. They reproduce a 3D scene as panoramas and allow the virtual scene to be explored in a suitable viewer, offering good interactivity and realism. This paper explores how to acquire images with a fisheye lens, how to stitch them into a spherical 360° panorama, and how to build a roaming system based on a Java applet.

7.
To address the large distortion of fisheye images and the unintuitive way they present scene information, this paper proposes a scene-roaming method for fisheye images based on viewpoint rectification. The method treats the hemispherical space represented by the fisheye image as the observation target and roams that hemisphere by moving the viewpoint, so that scene information in different regions can be observed directly. A spherical mapping model of the fisheye lens is built on top of a fisheye-image correction algorithm; the viewing region centered on the current viewpoint is rectified in real time through this mapping, and the rectified visible region is displayed in real time as the viewpoint changes, thereby realizing roaming. Experimental results show that the algorithm achieves real-time roaming of the hemispherical space represented by a fisheye image, and the rectified images displayed during roaming satisfy the straight-line constraint.
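A hedged sketch of the roaming idea, assuming a simple equidistant (r = f·θ) fisheye model in place of the paper's calibrated spherical mapping: given a viewing direction inside the hemisphere, a small pinhole perspective view is resampled from the fisheye image. All names and parameters are illustrative.

```python
# Render a pinhole perspective view of the region around a chosen viewing
# direction (yaw, pitch) inside a fisheye hemisphere, assuming an
# equidistant projection r = f * theta.
import numpy as np

def rot_yaw_pitch(yaw, pitch):
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return Ry @ Rx

def roam_view(fisheye, yaw=0.0, pitch=0.0, out_size=400, out_fov=np.deg2rad(60)):
    """fisheye: square image whose 180-degree circular image fills the frame."""
    H, W = fisheye.shape[:2]
    cx, cy, f_fish = W / 2.0, H / 2.0, (W / 2.0) / (np.pi / 2.0)
    f_pin = (out_size / 2.0) / np.tan(out_fov / 2.0)    # virtual pinhole focal
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    rays = np.stack([xs - out_size / 2.0 + 0.5,
                     ys - out_size / 2.0 + 0.5,
                     np.full(xs.shape, f_pin)], axis=-1)
    rays = rays @ rot_yaw_pitch(yaw, pitch).T           # rotate into fisheye frame
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
    theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
    phi = np.arctan2(rays[..., 1], rays[..., 0])
    r = f_fish * theta                                  # equidistant projection
    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, W - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, H - 1)
    view = fisheye[v, u]
    view[theta > np.pi / 2] = 0                         # behind the hemisphere
    return view

if __name__ == "__main__":
    img = (np.random.rand(800, 800, 3) * 255).astype(np.uint8)
    print(roam_view(img, yaw=np.deg2rad(30), pitch=np.deg2rad(-10)).shape)
```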

8.
Fisheye Image Correction Based on Geometric Models   (Cited by: 2; self: 0, others: 2)
In fields such as virtual reality, robot navigation, and visual surveillance, fisheye cameras with a large angle of view are required, and the resulting images, known as fisheye images, suffer from severe deformation and distortion. This study corrects fisheye images with three geometric models, namely planar, cylindrical, and spherical latitude-longitude grids, restores the real scene, and analyzes and compares the correction results.

9.
A Method for Recovering Dense Depth Maps from Distorted Fisheye Stereo Images   (Cited by: 8; self: 0, others: 8)
贾云得  吕宏静  刘万春 《计算机学报》2000,23(12):1332-1336
Stereo vision with ordinary lenses cannot achieve close-range or wide-field stereo perception; fisheye-lens stereo vision can, because the field of view of a fisheye lens can reach 180°. However, the wide field of view of a fisheye lens also introduces severe image distortion. This paper discusses a method that uses a multi-baseline stereo vision system built from three fisheye cameras to recover dense depth maps of close-range, wide-field scenes. To recover dense depth maps with high accuracy, the authors combine local image normalization, fisheye distortion correction, and a similarity-minimization criterion based on affine-transform region matching to solve the correspondence problem between distorted fisheye stereo images. Experimental results on recovering dense depth maps from heavily distorted images of real objects and scenes are given.
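The abstract mentions local image normalization before matching but gives no details; the sketch below shows one common form (zero-mean, unit-variance normalization over a sliding window), offered as an assumption rather than the authors' implementation.

```python
# Normalize each pixel by the mean and standard deviation of its local
# window so that region matching is less sensitive to brightness changes
# between views.
import numpy as np
from scipy.ndimage import uniform_filter

def local_normalize(img, win=11, eps=1e-6):
    """img: 2-D grayscale array; returns a locally zero-mean / unit-variance image."""
    img = img.astype(float)
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img * img, size=win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return (img - mean) / (std + eps)

if __name__ == "__main__":
    g = np.random.rand(120, 160) * 255
    n = local_normalize(g)
    print(n.mean(), n.std())   # roughly zero mean, order-one spread
```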

10.
Turning a Kit Lens into a Fisheye
《数码摄影》2008,(9):144-145
With their broad angle of view and exaggerated distortion, fisheye lenses give photos a strong visual impact. However, their high price and narrow range of use make many photographers keep their distance. Today, with the right shooting approach and the dedicated software PTGui, you can recreate the special effect of a fisheye lens even with nothing more than a cheap kit lens. Software download: www.onlinedown.net/soft/69668.htm

11.
杨玲  成运 《图学学报》2010,31(6):19
To remove the deformation introduced by fisheye lenses, this paper proposes a fisheye-image correction method based on latitude-longitude mapping, derives the mathematical basis for removing the deformation, and distills it into a fast algorithm for correcting equiangular fisheye distortion that needs no calibration data. With the latitude-longitude mapping correction, the warped hemispherical fisheye image can be projected into the rectangular shape of an ordinary photograph; that is, the projection reduces the degree of distortion in the image and is visually adequate for practical use.
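A minimal sketch of the latitude-longitude correction described above, under the assumptions stated in the abstract (equiangular fisheye covering a hemisphere, no calibration data): the fisheye image is resampled onto a square grid indexed by longitude and latitude, which maps it into an ordinary rectangular image and reduces the visible bending. The function name and output size are illustrative.

```python
# Equiangular fisheye image -> latitude-longitude (square) image.
import numpy as np

def fisheye_to_latlon(fisheye, out_size=512):
    """fisheye: square equiangular image covering a 180-degree hemisphere."""
    H, W = fisheye.shape[:2]
    cx, cy, f = W / 2.0, H / 2.0, (W / 2.0) / (np.pi / 2.0)
    # Output grid: longitude and latitude both span [-pi/2, pi/2].
    lat, lon = np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, out_size),
                           np.linspace(-np.pi / 2, np.pi / 2, out_size),
                           indexing="ij")
    d = np.stack([np.cos(lat) * np.sin(lon),     # ray directions, +z = optical axis
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)], axis=-1)
    theta = np.arccos(np.clip(d[..., 2], -1.0, 1.0))
    phi = np.arctan2(d[..., 1], d[..., 0])
    r = f * theta                                # equiangular: r = f * theta
    u = np.clip((cx + r * np.cos(phi)).astype(int), 0, W - 1)
    v = np.clip((cy + r * np.sin(phi)).astype(int), 0, H - 1)
    return fisheye[v, u]

if __name__ == "__main__":
    img = (np.random.rand(600, 600, 3) * 255).astype(np.uint8)
    print(fisheye_to_latlon(img).shape)          # (512, 512, 3)
```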

12.
A Method for Reconstructing 3D Models from Textures in Photographs   (Cited by: 8; self: 0, others: 8)
杨孟洲  石教英 《软件学报》2000,11(4):502-506
Acquiring realistic 3D scene models of the real world has long been a difficult problem in computer graphics. This paper presents an algorithm for reconstructing 3D scene models from photographs of the real world. Based on the consistency of color texture across real-scene photographs taken from sparsely distributed viewpoints, the algorithm builds 3D scene models of photographic realism and can be used to create realistic 3D models of complex real-world objects.

13.
Structure from motion with wide circular field of view cameras   (Cited by: 2; self: 0, others: 2)
This paper presents a method for fully automatic and robust estimation of two-view geometry, autocalibration, and 3D metric reconstruction from point correspondences in images taken by cameras with wide circular field of view. We focus on cameras which have more than 180° field of view and for which the standard perspective camera model is not sufficient, e.g., the cameras equipped with circular fish-eye lenses Nikon FC-E8 (183°), Sigma 8 mm-f4-EX (180°), or with curved conical mirrors. We assume a circular field of view and axially symmetric image projection to autocalibrate the cameras. Many wide field of view cameras can still be modeled by the central projection followed by a nonlinear image mapping. Examples are the above-mentioned fish-eye lenses and properly assembled catadioptric cameras with conical mirrors. We show that epipolar geometry of these cameras can be estimated from a small number of correspondences by solving a polynomial eigenvalue problem. This allows the use of efficient RANSAC robust estimation to find the image projection model, the epipolar geometry, and the selection of true point correspondences from tentative correspondences contaminated by mismatches. Real catadioptric cameras are often slightly noncentral. We show that the proposed autocalibration with approximate central models is usually good enough to get correct point correspondences which can be used with accurate noncentral models in a bundle adjustment to obtain accurate 3D scene reconstruction. Noncentral camera models are dealt with and results are shown for catadioptric cameras with parabolic and spherical mirrors.

14.
Compared with ordinary lenses, fisheye lenses have a much larger field of view and can even capture an entire hemisphere directly. In stereo vision, using fisheye lenses to acquire panoramic images reduces the number of lenses and image-acquisition modules, simplifying the system, speeding up computation, and lowering cost. However, fisheye images also exhibit distortion, which becomes more severe toward the edges. As a result, matching feature points between the relevant images is difficult in stereo systems whose optical axes are orthogonal or at even larger angles, which directly limits the usefulness of such systems. This problem can be solved with an affine-invariant image-matching algorithm: first, MSCR feature regions are extracted from the original images; then the CS-LBP operator is introduced to describe each MSCR region; unique matches are found by comparing feature-weighted chi-square distances; finally, ellipse fitting and connecting-line marking visualize the matching results. Experiments verify the stability and consistency of the method, which can be applied to feature matching between fisheye images with large rotation angles.
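To make the descriptor stage concrete, here is a basic CS-LBP implementation with a 16-bin region histogram and a chi-square matching distance; the MSCR detection, feature weighting, and ellipse fitting used in the paper are omitted, and the threshold and window choices are assumptions.

```python
# Center-symmetric local binary patterns (CS-LBP) plus chi-square distance
# between normalized region histograms.
import numpy as np

def cs_lbp(gray, t=0.01):
    """4-bit CS-LBP codes for the interior pixels of a grayscale image."""
    g = gray.astype(float)
    # Four center-symmetric neighbour pairs of the 8-neighbourhood.
    pairs = [(g[1:-1, 2:], g[1:-1, :-2]),    # east  vs west
             (g[2:, 2:],   g[:-2, :-2]),     # SE    vs NW
             (g[2:, 1:-1], g[:-2, 1:-1]),    # south vs north
             (g[2:, :-2],  g[:-2, 2:])]      # SW    vs NE
    code = np.zeros(g[1:-1, 1:-1].shape, dtype=np.uint8)
    for bit, (a, b) in enumerate(pairs):
        code |= ((a - b) > t).astype(np.uint8) << bit
    return code

def region_descriptor(gray, mask):
    """Normalized 16-bin CS-LBP histogram over the pixels selected by mask."""
    codes = cs_lbp(gray)[mask[1:-1, 1:-1]]
    hist = np.bincount(codes, minlength=16).astype(float)
    return hist / max(hist.sum(), 1.0)

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

if __name__ == "__main__":
    img = np.random.rand(100, 100)
    m1 = np.zeros((100, 100), bool); m1[20:50, 20:50] = True
    m2 = np.zeros((100, 100), bool); m2[25:55, 25:55] = True
    print(chi_square(region_descriptor(img, m1), region_descriptor(img, m2)))
```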

15.
We show that Maxwell's fish-eye lens can make a semi-circular perfect electric conductor look like a circular one. Such an effect can also be achieved (not perfectly) by using negative index metamaterials, but only within a single frequency. Maxwell's fish-eye lens, however, can work for a set of eigenfrequencies. Numerical simulations are performed to verify the effect.

16.
Selecting informative and visually appealing views for 3D indoor scenes is beneficial for the housing, decoration, and entertainment industries. A set of views that exhibit comfort, aesthetics, and functionality of a particular scene can attract customers and facilitate business transactions. However, selecting views for an indoor scene is challenging because the system has to consider not only the need to reveal as much information as possible, but also object arrangements, occlusions, and characteristics. Since there can be many principles utilized to guide the view selection, and various principles to follow under different circumstances, we achieve the goal by imitating popular photos on the Internet. Specifically, we select the view that can optimize the contour similarity of corresponding objects to the photo. Because the selected view can be inadequate if object arrangements in the 3D scene and the photo are different, our system imitates many popular photos and selects a certain number of views. After that, it clusters the selected views and determines the view/cluster centers by the weighted average to finally exhibit the scene. Experimental results demonstrate that the views selected by our method are visually appealing.

17.
Computer vision usually adopts the pinhole camera model, but fisheye or wide-angle lenses with significant distortion produce images that contain both perspective deformation and lens distortion. The traditional solution is to calibrate the camera parameters with a standard grid board, which requires considerable prior information. For accurate calibration, this paper proposes a new method that needs no 3D spatial information: a single ordinary image suffices to calibrate the camera's distortion coefficients and intrinsic parameters and to rectify the distorted image up to a similarity transformation. To correct lens distortion and compute vanishing points, the method exploits the projective invariance of straight lines, namely that projections of collinear points remain collinear and that the projections of a pencil of parallel lines meet in a single point; to correct perspective deformation, it also uses the similarity invariance of lines, namely that orthogonal lines remain orthogonal under a similarity transformation. The calibrated parameters include the distortion coefficients, focal length, principal point, and aspect ratio, and the image is simultaneously rectified to a similarity transformation. Experiments on both laboratory and outdoor images produced accurate and reliable results.
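As a simplified illustration of the collinearity constraint this abstract uses, the sketch below recovers a single radial-distortion coefficient of an assumed one-parameter division model by requiring that points sampled along the distorted image of a straight line become collinear after undistortion; the paper itself estimates a fuller parameter set (distortion, focal length, principal point, aspect ratio), so this is only the core idea.

```python
# Estimate a one-parameter division-model coefficient k in
# x_u = x_d / (1 + k * r_d^2) by enforcing collinearity of undistorted points.
import numpy as np
from scipy.optimize import minimize_scalar

def undistort(pts, k, center):
    d = pts - center
    r2 = np.sum(d * d, axis=1, keepdims=True)
    return center + d / (1.0 + k * r2)

def line_residual(pts):
    """Sum of squared distances of points to their total-least-squares line."""
    p = pts - pts.mean(axis=0)
    return np.linalg.svd(p, compute_uv=False)[-1] ** 2

def estimate_k(curves, center):
    """curves: list of N x 2 arrays sampled along distorted images of straight lines.
    The search bounds assume a small distortion in pixel units."""
    cost = lambda k: sum(line_residual(undistort(c, k, center)) for c in curves)
    return minimize_scalar(cost, bounds=(-2e-6, 2e-6), method="bounded").x

if __name__ == "__main__":
    center = np.array([320.0, 240.0])
    k_true = -4e-7
    xs = np.linspace(-300, 300, 40)
    curves = []
    for a, b in [(0.2, 50.0), (-0.5, -80.0)]:
        du = np.stack([xs, a * xs + b], axis=1)          # undistorted offsets
        ru = np.linalg.norm(du, axis=1, keepdims=True)
        # Invert the division model exactly: r_u = r_d / (1 + k r_d^2).
        rd = (1.0 - np.sqrt(1.0 - 4.0 * k_true * ru ** 2)) / (2.0 * k_true * ru)
        curves.append(center + du / ru * rd)
    print(estimate_k(curves, center), "vs true", k_true)
```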

18.
This article presents a system for the automatic measurement and modelling of sewer pipes. The system recovers the interior shape of a sewer pipe from a video sequence which is acquired by a fish-eye lens camera moving inside the pipe. The approach is based on tracking interest points across successive video frames and posing the general structure-from-motion problem. It is shown that the tracked points can be reliably reconstructed despite the forward motion of the camera. This is achieved by utilizing a fish-eye lens with a wide field of view. The standard techniques for robust estimation of the two- and three-view geometry are modified so that they can be used for calibrated fish-eye lens cameras with a field of view less than 180°. The tubular arrangement of the reconstructed points allows pipe shape estimation by surface fitting. Hence, a method for modelling such surfaces with a locally cylindrical model is proposed. The system is demonstrated with a real sewer video and an error analysis for the recovered structure is presented.
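A small sketch (an assumed formulation, not the paper's implementation) of the surface-fitting step: a cylinder with unknown axis and radius is fitted to reconstructed 3D points by least squares, which is the kind of locally cylindrical model the abstract describes for pipe shape estimation.

```python
# Least-squares cylinder fit: minimize the gap between each point's distance
# to the axis and the cylinder radius.
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, pts):
    x0, y0, z0, theta, phi, r = params
    c = np.array([x0, y0, z0])
    a = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                 # unit axis direction
    d = pts - c
    radial = d - np.outer(d @ a, a)               # component orthogonal to the axis
    return np.linalg.norm(radial, axis=1) - r

def fit_cylinder(pts):
    c0 = pts.mean(axis=0)
    r0 = np.std(np.linalg.norm(pts - c0, axis=1))
    x0 = [c0[0], c0[1], c0[2], 0.1, 0.0, max(r0, 1e-3)]
    return least_squares(cylinder_residuals, x0, args=(pts,)).x

if __name__ == "__main__":
    # Synthetic pipe: radius 0.5 around the z axis, with a little noise.
    t = np.random.rand(500) * 2 * np.pi
    z = np.random.rand(500) * 4.0
    pts = np.stack([0.5 * np.cos(t), 0.5 * np.sin(t), z], axis=1)
    pts += 0.005 * np.random.randn(*pts.shape)
    print(fit_cylinder(pts))   # last entry should be close to 0.5
```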

19.
We present an autonomous mobile robot navigation system using stereo fish-eye lenses for navigation in an indoor structured environment and for generating a model of the imaged scene. The system estimates the three-dimensional (3D) position of significant features in the scene, and by estimating its relative position to the features, navigates through narrow passages and makes turns at corridor ends. Fish-eye lenses are used to provide a large field of view, which images objects close to the robot and helps in making smooth transitions in the direction of motion. Calibration is performed for the lens-camera setup and the distortion is corrected to obtain accurate quantitative measurements. A vision-based algorithm that uses the vanishing points of extracted segments from a scene in a few 3D orientations provides an accurate estimate of the robot orientation. This is used, in addition to 3D recovery via stereo correspondence, to maintain the robot motion in a purely translational path, as well as to remove the effects of any drifts from this path from each acquired image. Horizontal segments are used as a qualitative estimate of change in the motion direction and correspondence of vertical segment provides precise 3D information about objects close to the robot. Assuming detected linear edges in the scene as boundaries of planar surfaces, the 3D model of the scene is generated. The robot system is implemented and tested in a structured environment at our research center. Results from the robot navigation in real environments are presented and discussed. Received: 25 September 1996 / Accepted: 20 October 1996
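The vanishing-point step can be sketched with the standard formulation (assumed here, not taken from the paper): each extracted segment contributes a homogeneous image line, and the vanishing point of a parallel bundle is the null vector, in the least-squares sense, of the stacked line equations. The orientation estimate would then come from the calibrated ray through this point.

```python
# Least-squares vanishing point from roughly parallel image segments.
import numpy as np

def segment_to_line(p1, p2):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(segments):
    """segments: list of (p1, p2) endpoint pairs of roughly parallel edges."""
    L = np.array([segment_to_line(p1, p2) for p1, p2 in segments])
    L /= np.linalg.norm(L[:, :2], axis=1, keepdims=True)   # normalize line vectors
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]                                              # l_i . v ~= 0 for all i
    return v[:2] / v[2]

if __name__ == "__main__":
    # Segments of corridor edges that all converge towards (400, 300).
    vp = np.array([400.0, 300.0])
    rng = np.random.default_rng(0)
    segs = []
    for ang in np.deg2rad([10, 40, 160, 200, 300]):
        d = np.array([np.cos(ang), np.sin(ang)])
        segs.append((vp + 50 * d + rng.normal(0, 0.3, 2),
                     vp + 300 * d + rng.normal(0, 0.3, 2)))
    print(vanishing_point(segs))    # approximately (400, 300)
```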

20.
This paper presents a framework and the associated algorithms to perform 3D scene analysis from a single image with lens distortion. Previous work focuses on either making 3D measurements under the assumption of one or more ideal pinhole cameras or correcting the lens distortion up to a projective transformation with no additional metric analyses. In this work, we bridge the gap between these two areas of work by incorporating metric constraints into lens distortion correction to achieve metric calibration. Lens distortion parameters, especially the lens distortion center, can be precisely recovered with this approach. Subsequent 3D measurements can be made from the corrected image to recover scene structures. In addition, we propose an algorithm based on hybrid backward and forward covariance propagation to yield a quantitative analysis of the confidence of the results. Experimental results show that our approach simultaneously performs image correction and 3D scene analysis.

