Similar Documents
20 similar documents found.
1.
Because 3D surface textures represent an object's texture information better than 2D textures, and vary with the scene illumination and viewing direction, they are widely used in virtual reality, computer games, and related technologies. Photometric stereo has attracted wide attention as an effective technique for acquiring 3D surface texture information. Uniform illumination is a key condition for the successful capture and reconstruction of 3D surface textures with photometric stereo; in practical applications, non-uniform illumination distorts and deforms the texture during capture and reconstruction. This paper studies this kind of distortion and proposes a method to handle it. Experimental results show that the method is simple, feasible, and effective.
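As background for this abstract, the following is a minimal sketch of the classical Lambertian photometric-stereo step that such methods build on: recovering per-pixel normals and albedo from images taken under known distant lights. It does not reproduce the paper's correction for non-uniform illumination; the function and variable names are illustrative.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Recover per-pixel surface normals and albedo from K >= 3 images
    taken under known distant light directions (Lambertian assumption)."""
    K, H, W = images.shape                        # images: (K, H, W)
    L = np.asarray(light_dirs, dtype=np.float64)  # (K, 3) unit light vectors
    I = images.reshape(K, -1).astype(np.float64)  # (K, H*W) intensities
    # Least-squares solution of L @ G = I, where G = albedo * normal.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)     # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = np.divide(G, albedo, out=np.zeros_like(G), where=albedo > 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```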

2.
A binocular Helmholtz stereo vision algorithm based on iterative dynamic programming is proposed and applied to the measurement of specular objects. The algorithm first rectifies the acquired Helmholtz image pair, then determines the endpoints of the scanlines, and finally uses an iterative dynamic-programming scheme to establish correspondences and obtain a disparity map, from which the surface depth of the specular object is recovered. In the experiments, ray tracing is used to generate binocular Helmholtz image pairs containing highlights; the resulting disparity maps show that the method can effectively recover the depth of specular objects.
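This is not the paper's iterative Helmholtz formulation, but a compact illustration of the kind of dynamic-programming scanline matching it is built around, assuming rectified rows and an arbitrary occlusion penalty (the constant OCC below is an assumed value).

```python
import numpy as np

OCC = 0.5  # occlusion penalty per skipped pixel (assumed value)

def dp_scanline_disparity(left_row, right_row):
    """Match one rectified scanline pair by dynamic programming and return
    a disparity per left pixel (-1 where the pixel is left unmatched)."""
    n, m = len(left_row), len(right_row)
    cost = np.full((n + 1, m + 1), np.inf)
    move = np.zeros((n + 1, m + 1), dtype=np.int8)  # 0=match, 1=skip left, 2=skip right
    cost[0, :] = OCC * np.arange(m + 1)
    cost[:, 0] = OCC * np.arange(n + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + abs(float(left_row[i - 1]) - float(right_row[j - 1]))
            skip_l = cost[i - 1, j] + OCC
            skip_r = cost[i, j - 1] + OCC
            choices = [match, skip_l, skip_r]
            move[i, j] = int(np.argmin(choices))
            cost[i, j] = choices[move[i, j]]
    # Backtrack the optimal alignment to read off disparities.
    disp = np.full(n, -1, dtype=int)
    i, j = n, m
    while i > 0 and j > 0:
        if move[i, j] == 0:
            disp[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif move[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return disp
```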

3.
马赓宇  林学訚 《软件学报》2002,13(4):804-811
Motivated by the special requirements and characteristics of the obstacle-detection problem, and inspired by existing methods that compute disparity by matching feature points, a new fast matching method for obstacle detection, the surface-orientation method, is proposed; it describes the orientation of planar surfaces. Unlike methods that determine obstacles by computing depth, the algorithm detects obstacles based on the property that object surfaces are essentially perpendicular to the ground.

4.
3D object detection is an important research direction in computer vision, with wide application in fields such as autonomous driving. Current state-of-the-art work uses end-to-end deep learning and achieves good detection results, but suffers from high algorithmic complexity, heavy computation, and insufficient real-time performance. Analysis shows that some subtasks of 3D object detection are not well suited to deep learning, so a heterogeneous 3D object detection method is proposed that uses both deep learning and traditional algorithms, dividing detection into several stages: 1) a deep learning model extracts each detected object's mask, class, and related information from the input image; 2) based on the mask, a fast clustering method selects the target object's surface points from the LiDAR point cloud; 3) the object's orientation, bounding box, and other attributes are computed from the mask, class, and point cloud, completing the 3D detection. The method has been implemented as a system called HA3D (a heterogeneous approach for 3D object detection). Experiments on KITTI, a 3D detection dataset for cars, show that compared with a representative deep-learning-based 3D detector, the method is 52.2% faster with an acceptable loss of detection accuracy (2.0%), and improves the ratio of precision to computation time by 49%. Overall, the method shows clear advantages.
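A hedged sketch of stage 2 above: projecting LiDAR points into the image and keeping those that fall inside the detected instance mask. The projection model, matrix shapes, and function name are assumptions for illustration; the paper's clustering step is omitted.

```python
import numpy as np

def lidar_points_in_mask(points, K, Rt, mask):
    """Keep the LiDAR points whose projection falls inside an instance mask.
    points: (N, 3) LiDAR coordinates; K: (3, 3) intrinsics; Rt: (3, 4)
    extrinsics mapping LiDAR to camera coordinates; mask: (H, W) binary."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # (N, 4) homogeneous
    cam = Rt @ pts_h.T                                       # (3, N) camera coords
    in_front = cam[2] > 1e-6                                 # points ahead of the camera
    z = np.where(in_front, cam[2], 1.0)                      # avoid division by zero
    uv = (K @ cam)[:2] / z                                   # (2, N) pixel coordinates
    u, v = np.round(uv).astype(int)
    h, w = mask.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    keep = np.zeros(len(points), dtype=bool)
    keep[valid] = mask[v[valid], u[valid]] > 0
    return points[keep]
```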

5.
《传感器与微系统》2019,(11):133-135
To address the lack of supporting algorithms for micro/nano-scale 3D scanning electron microscope (3D SEM) topography measurement, which severely limits its application, a micro/nano-scale 3D measurement method based on disparity-to-depth mapping is proposed. First, a mapping model between disparity and the depth of the sample surface is established; an epipolar rectification algorithm is introduced to guarantee the accuracy of disparity computation; correspondences are then matched by optical-flow estimation to obtain a dense and accurate disparity map, ensuring the quality and precision of the 3D data. Finally, to demonstrate the effectiveness of the proposed D2D-SEM micro/nano-scale 3D surface measurement method, samples are measured and characterized in comparison with a large depth-of-field 3D digital microscope (VHX-6000, KEYENCE) widely used in micro/nano metrology. The results show that the proposed method is effective and reliable.
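For orientation, the standard pinhole disparity-depth relation Z = f * B / d is shown below as a minimal sketch; the paper derives an SEM-specific disparity-depth mapping model, which is not reproduced here.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline):
    """Map a disparity map to depth with the pinhole relation Z = f * B / d.
    disparity: array of disparities in pixels; focal_px: focal length in
    pixels; baseline: camera baseline in metric units."""
    d = np.asarray(disparity, dtype=np.float64)
    safe_d = np.where(d > 0, d, 1.0)                 # guard against division by zero
    return np.where(d > 0, focal_px * baseline / safe_d, np.inf)
```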

6.
Binocular stereo vision is one of the most important components of computer vision research. This paper aims to build a binocular vision system for measuring the distance between the camera and an object. Images are captured with a binocular camera, the cameras are calibrated with the Matlab Calibration Toolbox, the left and right images are rectified, and stereo matching is performed on the rectified images to obtain the disparity map of the measured object and its 3D coordinates. Experiments show that the scheme is feasible...
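A minimal OpenCV sketch of the rectify-match-reproject part of such a pipeline, under the assumption that calibration results (K1, D1, K2, D2, R, T) are already available (the paper obtains them with the Matlab Calibration Toolbox); the SGBM parameters are illustrative, and grayscale inputs are assumed.

```python
import cv2
import numpy as np

def measure_depth(img_l, img_r, K1, D1, K2, D2, R, T):
    """Rectify a calibrated stereo pair, compute a disparity map with SGBM,
    and reproject it to per-pixel 3D coordinates in the left camera frame."""
    size = (img_l.shape[1], img_l.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1l, map2l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map1r, map2r, cv2.INTER_LINEAR)
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = matcher.compute(rect_l, rect_r).astype(np.float32) / 16.0  # SGBM is fixed-point
    points3d = cv2.reprojectImageTo3D(disparity, Q)   # (H, W, 3) coordinates
    return disparity, points3d
```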

7.
Recognition of Curved-Surface Objects Based on Geometric Features
The geometric-feature-based method for recognizing curved-surface objects extracts geometric information from the range image of a scene, such as the Gaussian and mean curvature of the scene surface, curvature histograms, and curvature entropy, represents the scene as an attributed relational graph (ARG), and performs optimized matching against the model ARGs in a model library to recognize the curved object. The method is designed mainly for recognizing man-made curved objects such as machine parts, and its description of surface geometric features works well for second-order (quadric) surfaces. Experiments show that the method can successfully recognize machine parts and other curved objects from range images, with good recognition results.
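As a concrete illustration of the first step, the following sketch computes Gaussian and mean curvature of a range image z(x, y) from its first and second partial derivatives using the standard Monge-patch formulas; smoothing and the ARG matching stage are not reproduced, and the function name is illustrative.

```python
import numpy as np

def surface_curvatures(depth):
    """Gaussian (K) and mean (H) curvature of a range image z(x, y)."""
    zy, zx = np.gradient(depth.astype(np.float64))   # first derivatives
    zxy, zxx = np.gradient(zx)                       # second derivatives
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * denom**1.5)
    return K, H
```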

8.
A Survey of Visual SLAM for Scenes with Dynamic Objects
This paper surveys a current hot topic in robot navigation, autonomous driving, and related fields: visual SLAM (simultaneous localization and mapping) for scenes containing dynamic objects. According to how dynamic SLAM handles dynamic objects during localization and mapping, three research directions are identified: dynamic-robust SLAM with static background reconstruction, tracking and reconstruction of non-rigid dynamic objects, and tracking and reconstruction of moving objects. Each direction is reviewed, with emphasis on dynamic SLAM methods that incorporate deep learning. Finally, future directions for dynamic SLAM are discussed.

9.
This paper proposes a new local stereo matching algorithm. The method first classifies the pixels of the reference image as homogeneous or heterogeneous. For heterogeneous pixels, matching costs are aggregated along N directions, the optimal disparity in each direction is selected by WTA (Winner Take All), and the disparity chosen most often across directions is taken as the pixel's final disparity; for homogeneous pixels, a movable rectangular window is used to aggregate more pixels for matching. Finally, a fast and effective post-processing step removes noise from the resulting disparity map. Experiments show that while remaining efficient, the algorithm achieves high disparity accuracy, especially in disparity-discontinuity regions and textureless regions.
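For reference, a baseline local-matching sketch with absolute-difference costs, fixed square-window aggregation, and winner-take-all selection; the paper's directional aggregation, movable windows, and post-processing are not reproduced, and the parameters are assumed defaults.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(left, right, max_disp=64, win=9):
    """Baseline local stereo: per-disparity absolute differences, box-window
    cost aggregation, then WTA disparity selection per pixel."""
    l, r = left.astype(np.float64), right.astype(np.float64)
    H, W = l.shape
    cost = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        diff = np.abs(l[:, d:] - r[:, :W - d]) if d else np.abs(l - r)
        cost[d, :, d:] = uniform_filter(diff, size=win)  # aggregate over win x win window
    return np.argmin(cost, axis=0)                       # winner-take-all
```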

10.
Purpose: Starting from the observation that a disparity map reflects depth variations in the imaged scene and is therefore "homologous" with the range measurements of a LiDAR system, an automatic registration method between stereo aerial images and LiDAR point clouds based on disparity mutual information is proposed. Method: The method consists of three stages. First, a dense disparity map of the stereo aerial images is generated by semi-global matching (SGM). Second, using the aerial camera's interior parameters and the initial registration parameters (exterior orientation elements), the LiDAR point cloud is rendered through pinhole perspective projection into a simulated grayscale image, the LiDAR depth image, whose spatial resolution, geometric deformation, and frame size approximate those of the stereo aerial images to be registered; mutual information is then used as the similarity measure to estimate the geometric mapping between the aerial disparity map and the LiDAR depth image, and on this basis a coarse correlation between the LiDAR point cloud and the images is achieved. Third, taking the approximate conjugate points obtained from this coarse correlation as observations, weighted by disparity mutual information, photogrammetric space resection is performed to obtain refined exterior orientation elements; a new LiDAR depth image is then generated and the process is repeated until a given iteration criterion is met. Results: A registration experiment with a stereo aerial image pair (about 60% overlap, 7216 x 5428 pixels, about 0.5 m ground resolution) and a LiDAR point cloud (about 1.5 m average point spacing, about 25 cm horizontal accuracy) achieved a registration accuracy close to one pixel. Conclusion: The experiments show that the method is highly automated with moderate registration accuracy, is in principle applicable to stereo aerial images of different scene types whose interior camera parameters are known, and has good application value.
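A small sketch of the similarity measure used above: mutual information between two equally sized images (here, an aerial disparity map and a rendered LiDAR depth image), estimated from a joint histogram. The bin count and function name are assumptions; the paper's geometric mapping estimation and resection steps are not reproduced.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two images of identical size,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```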

11.
Stereo image analysis is based on establishing correspondences between a pair of images by determining similarity measures for potentially corresponding image parts. Such similarity criteria are only strictly valid for surfaces with Lambertian (diffuse) reflectance characteristics. Specular reflections are viewpoint dependent and may thus cause large intensity differences at corresponding image points. In the presence of specular reflections, traditional stereo approaches are often unable to establish correspondences at all, or the inferred disparity values tend to be inaccurate, or the established correspondences do not belong to the same physical surface point. The stereo image analysis framework for non-Lambertian surfaces presented in this contribution combines geometric cues with photometric and polarimetric information into an iterative scheme that establishes stereo correspondences in accordance with the specular reflectance behaviour and at the same time determines the surface gradient field based on the known photometric and polarimetric reflectance properties. The described approach yields a dense 3D reconstruction of the surface which is consistent with all observed geometric and photopolarimetric data. Initially, a sparse 3D point cloud of the surface is computed by traditional block-matching stereo. Subsequently, a dense 3D profile of the surface is determined in the coordinate system of camera 1 based on the shape from photopolarimetric reflectance and depth technique. A synthetic image of the surface is rendered in the coordinate system of camera 2 using the illumination direction and reflectance properties of the surface material. Point correspondences between the rendered image and the observed image of camera 2 are established with the block-matching technique. This procedure yields an increased number of 3D points of higher accuracy, compared to the initial 3D point cloud. The improved 3D point cloud is used to compute a refined dense 3D surface profile. These steps are iterated until convergence of the 3D reconstruction. An experimental evaluation of our method is provided for areas of several square centimetres of forged and cast iron objects with rough surfaces displaying both diffuse and significant specular reflectance components, where traditional stereo image analysis largely fails. A comparison to independently measured ground truth data reveals that the root-mean-square error of the 3D reconstruction results is typically of the order of 30–100 μm at a lateral pixel resolution of 86 μm. For two example surfaces, the number of stereo correspondences established by the specular stereo algorithm is several orders of magnitude higher than the initial number of 3D points. For one example surface, the number of stereo correspondences decreases by a factor of about two, but the 3D point cloud obtained with the specular stereo method is less noisy, contains a negligible number of outliers, and shows significantly more surface detail than the initial 3D point cloud. For poorly known reflectance parameters we observe a graceful degradation of the accuracy of 3D reconstruction.

12.
Several algorithms are suggested for recovering depth and orientation maps of a surface from its image intensities. They combine the advantages of stereo vision and shape-from-shading (SFS) methods. These algorithms generate accurate, unambiguous and dense surface depth and orientation maps. Most of the existing SFS algorithms cannot be directly extended to combine stereo images because the recovery of surface depth and that of orientation are separated in these formulations. We first present an SFS algorithm that couples the generation of depth and orientation maps. This formulation also ensures that the reconstructed surface depth and its orientation are consistent. The SFS algorithm for a single image is then extended to utilize stereo images. The correspondence over stereo images is established simultaneously with the generation of surface depth and orientation. An alternative approach is also suggested for combining stereo and SFS techniques. This approach can be used to combine needle maps which are directly available from other sources such as photometric stereo. Finally we present an algorithm to combine sparse depth measurements with an orientation map to reconstruct a surface. The same algorithm can be combined with the above algorithms for solving the SFS problem with sparse depth measurements. Thus various information sources can be used to accurately reconstruct a surface.

13.
To find the largest traversable region around a mobile robot, an omnidirectional stereo vision system is adopted and a method for obtaining reliable dense 3D depth maps is proposed. The vision system consists of one ordinary camera and two hyperboloidal mirrors. Once the system is calibrated, the 3D coordinates of a point in space can be computed by matching its image points in the upper and lower mirrors. The matching proceeds in three steps: maximum FX matching, feature matching, and ambiguity removal. A suitable energy function is defined and the remaining points are matched by dynamic programming. Experiments show that the system is accurate and of practical value.

14.
This paper introduces a 3D imaging framework that combines high-resolution photometric stereo and low-resolution depth. Our approach targets imaging scenarios based on either macro-lens photography combined with focal stacking or a large-format camera, both of which are able to image objects with more than 600 samples per mm². These imaging techniques allow photometric stereo algorithms to obtain surface normals at resolutions that far surpass corresponding depth values obtained with traditional approaches such as structured light, passive stereo, or depth-from-focus. Our work offers two contributions for 3D imaging based on these scenarios. The first is a multi-resolution, patch-based surface reconstruction scheme that can robustly handle the significant resolution difference between our surface normals and depth samples. The second is a method to improve the initial normal estimation by using all the available focal information for images obtained using a focal stacking technique.

15.
Purpose: Depth acquisition is a key technology for applications such as 3D reconstruction and virtual reality, and monocular depth acquisition is the lowest-cost but technically most challenging form of non-contact 3D measurement. Traditional monocular methods recover depth from cues such as linear perspective, texture gradient, motion parallax, and focus/defocus; they are computationally expensive, demand high camera precision, and are limited in applicable scenes. This paper proposes a simple and fast monocular depth-extraction method based on the change in surface brightness produced by moving a point light source of fixed intensity through the scene. Method: First, the radiance of the object surface under the light source is derived from a surface reflectance model; combining this with photometric stereo theory, the relationship between surface radiance and camera image brightness is derived. With this relationship, experiments are designed to solve for depth from the image-brightness changes caused by the movement of the point light source. Results: The algorithm achieves good reconstruction in both simple and everyday scenes, with errors between estimated and true depth below 10%. Conclusion: The method estimates depth from image-brightness changes induced by moving the light source, avoids complex camera calibration, and has low computational complexity, providing a new way to acquire scene depth.
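A deliberately simplified toy version of the idea, not the paper's radiometric model: assuming a frontal Lambertian surface lit by a point source of fixed power that follows an inverse-square falloff and moves a known distance delta towards the surface along the viewing direction, the brightness ratio of the two images fixes the distance. The function name and the geometric assumptions are illustrative.

```python
import numpy as np

def depth_from_brightness_ratio(I1, I2, delta):
    """Distance from the source's initial position to the surface point,
    from two brightness measurements I1 (before) and I2 (after) the source
    moves a distance `delta` towards the surface: I ~ 1/r^2, so
    sqrt(I2/I1) = r / (r - delta)  =>  r = delta * sqrt(I2) / (sqrt(I2) - sqrt(I1))."""
    s1 = np.sqrt(np.asarray(I1, dtype=np.float64))
    s2 = np.sqrt(np.asarray(I2, dtype=np.float64))
    return delta * s2 / (s2 - s1)
```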

16.
Generating 3D Models of Real Objects Using Photometric Stereo
In computer 3D animation production, building the 3D model of a specific real object is a complex and difficult task. Based on the shape-from-shading theory of vision, this paper uses photometric stereo to compute the surface orientation and depth information of an object from three light-shifted images, and obtains a structural model of the object surface by Bezier curve fitting.
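The Bezier fitting used in the paper is not reproduced here; instead, the sketch below shows one common way to go from photometric-stereo normals to a depth map, Frankot-Chellappa integration of the gradient field in the Fourier domain, as an assumed alternative for the normals-to-depth step.

```python
import numpy as np

def integrate_normals(nx, ny, nz):
    """Recover a depth map (up to an additive constant) from a surface
    normal field via Frankot-Chellappa integration."""
    nz_safe = np.where(np.abs(nz) < 1e-6, 1e-6, nz)   # avoid division by zero
    p, q = -nx / nz_safe, -ny / nz_safe                # surface gradients z_x, z_y
    H, W = p.shape
    wx = np.fft.fftfreq(W) * 2 * np.pi
    wy = np.fft.fftfreq(H) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)                          # frequency grids, shape (H, W)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                                   # avoid division by zero at DC
    Z = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                                       # mean depth is unconstrained
    return np.real(np.fft.ifft2(Z))
```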

17.
Purpose: Binocular vision is a good solution to the object distance estimation problem. Existing binocular distance-estimation methods either have low accuracy or require cumbersome data preparation, so an algorithm is needed that balances accuracy with convenience of data preparation. Method: A network based on the R-CNN (region convolutional neural network) structure is proposed that performs object detection and object distance estimation simultaneously. After the binocular images are fed into the network, features are extracted by the backbone, and a binocular region proposal network produces bounding boxes of the same object in the left and right images; the paired local features inside the boxes are fed into an object-disparity-estimation branch to estimate the object's distance. To obtain the boxes of the same object in both images, the binocular proposal network replaces the original proposal network, and a binocular bounding-box branch is proposed to regress the two boxes jointly; to improve disparity accuracy, a disparity-estimation branch based on group-wise correlation and 3D convolution is proposed, drawing on the structure of binocular disparity-map estimation networks. Results: Validation experiments on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset show an average relative error of about 3.2%, far smaller than that of an algorithm based on binocular disparity-map estimation (11.3%) and close to that of an algorithm based on 3D object detection (about 3.9%). The proposed improvement to the disparity-estimation branch clearly raises accuracy, reducing the average relative error from 5.1% to 3.2%. Similar experiments on a separately collected and annotated pedestrian surveillance dataset give an average relative error of about 4.6%, showing that the method can be applied effectively to surveillance scenes. Conclusion: The proposed binocular distance-estimation network combines the strengths of object detection and binocular disparity estimation and achieves high accuracy. It can be applied to vehicle-mounted cameras and surveillance scenes, and is promising for other settings equipped with binocular cameras.

18.
This paper presents a novel technique for reflectance function (BRDF) estimation, which uses polarisation information and photometric stereo. The first stage of the technique is standard and involves the acquisition of polarisation information (angle and degree of polarisation) using a linear polariser and a digital camera. This yields a field of ambiguous surface normal estimates for an arbitrarily shaped object. A photometric stereo algorithm is then used with three different light source directions to disambiguate the surface normals. Next, the proposed algorithm constructs a 3D histogram of the surface normals and pixel brightnesses. A surface, representing the BRDF, is then fitted to the histogram data using simulated annealing optimisation. The result is a set of Cartesian triples that relate the surface normals to the observed pixel brightnesses. Unlike most previous techniques for BRDF estimation, the technique is image-based and does not require sophisticated equipment or intrusive light sources. Although the technique is restricted to smooth and slightly rough dielectric objects, no prior knowledge about the surface geometry is assumed.

19.
Shape Reconstruction of 3D Bilaterally Symmetric Surfaces
The paper presents a new approach for shape recovery based on integrating geometric and photometric information. We consider 3D bilaterally symmetric objects, that is, objects which are symmetric with respect to a plane (e.g., faces), and their reconstruction from a single image. Both the viewpoint and the illumination are not necessarily frontal. Furthermore, no correspondence between symmetric points is required. The basic idea is that an image taken from a general, non-frontal viewpoint, under non-frontal illumination can be regarded as a pair of images. Each image of the pair is one half of the object, taken from different viewing positions and with different lighting directions. Thus, one-image variants of geometric stereo and of photometric stereo can be used. Unlike the separate invocation of these approaches, which require point correspondence between the two images, we show that integrating the photometric and geometric information suffices to yield a dense correspondence between pairs of symmetric points, and as a result, a dense shape recovery of the object. Furthermore, the unknown lighting and viewing parameters are also recovered in this process. An unknown distant point light source, Lambertian surfaces, unknown constant albedo, and weak perspective projection are assumed. The method has been implemented and tested experimentally on simulated and real data.

20.
We show that using example-based photometric stereo, it is possible to achieve realistic reconstructions of the human face. The method can handle non-Lambertian reflectance and attached shadows after a simple calibration step. We use spherical harmonics to model and de-noise the illumination functions from images of a reference object with known shape, and a fast grid technique to invert those functions and recover the surface normal for each point of the target object. The depth coordinate is obtained by weighted multi-scale integration of these normals, using an integration weight mask obtained automatically from the images themselves. We have applied these techniques to improve the PhotoFace system of Hansen et al. (2010).
