Similar Documents
20 similar documents retrieved (search time: 837 ms)
1.
Pan  Baiyu  Zhang  Liming  Yin  Hanxiong  Lan  Jun  Cao  Feilong 《Multimedia Tools and Applications》2021,80(13):19179-19201

3D movies/videos have become increasingly popular in the market; however, they are usually produced by professionals. This paper presents a new technique for the automatic conversion of 2D to 3D video based on RGB-D sensors, which can be easily conducted by ordinary users. To generate a 3D image, one approach is to combine the original 2D color image and its corresponding depth map together to perform depth image-based rendering (DIBR). An RGB-D sensor is one of the inexpensive ways to capture an image and its corresponding depth map. The quality of the depth map and the DIBR algorithm are crucial to this process. Our approach is twofold. First, the depth maps captured directly by RGB-D sensors are generally of poor quality because there are many regions missing depth information, especially near the edges of objects. This paper proposes a new RGB-D sensor based depth map inpainting method that divides the regions with missing depths into interior holes and border holes. Different schemes are used to inpaint the different types of holes. Second, an improved hole filling approach for DIBR is proposed to synthesize the 3D images by using the corresponding color images and the inpainted depth maps. Extensive experiments were conducted on different evaluation datasets. The results show the effectiveness of our method.
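A minimal sketch of the DIBR warping step described above, assuming a rectified setup in which each pixel is shifted horizontally by a disparity derived from its depth; the baseline, focal length, depth convention, and left-to-right hole propagation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=1000.0, z_near=0.5, z_far=10.0):
    """Render a virtual view by shifting pixels according to depth (hedged sketch)."""
    h, w, _ = color.shape
    # Convert the 8-bit depth map to metric depth (larger value = nearer), then to disparity.
    z = z_near + (1.0 - depth.astype(np.float32) / 255.0) * (z_far - z_near)
    disparity = (baseline * focal / z).astype(np.int32)

    virtual = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xv = x - disparity[y, x]          # horizontal shift toward the virtual camera
            if 0 <= xv < w:
                virtual[y, xv] = color[y, x]
                filled[y, xv] = True

    # Naive hole filling: propagate the nearest filled pixel from the left (background side).
    for y in range(h):
        for x in range(1, w):
            if not filled[y, x]:
                virtual[y, x] = virtual[y, x - 1]
    return virtual
```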


2.
The quality of depth maps affects the quality of generated 3D content. In practice, depth maps often have a lower resolution than the corresponding color images, so Depth map Up-sampling (DU) is needed in various 3D applications. DU can introduce specific artifacts that degrade the quality of the depth maps as well as of the constructed stereoscopic (color plus depth map) images. This paper investigates the effect of DU on 3D perception. The depth maps were up-sampled using seven approaches, and the quality of the stereoscopic images obtained from the up-sampled depth maps was estimated through subjective and objective tests. The objective quality prediction was performed using a depth map quality assessment framework. The method predicts the quality of stereoscopic images by evaluating their corresponding up-sampled depth maps with 2D Image Quality Metrics (IQMs). To improve the quality estimation, the framework selects the 2D IQMs with the highest correlation to the subjective scores. Furthermore, motivated by previous research on combining multiple metrics, a new metric fusion method is proposed. Experimental results show that the combined metric delivers higher performance than single metrics in 3D quality prediction.
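As a rough illustration of the kind of metric fusion described above (not the paper's actual method), the sketch below combines several 2D IQM scores into a single predictor by fitting linear weights against subjective scores; the metric choices and the numbers are placeholders.

```python
import numpy as np

# Placeholder IQM scores for four up-sampled depth maps (columns: e.g. PSNR, SSIM, VIF).
iqm_scores = np.array([
    [32.1, 0.91, 0.74],
    [28.4, 0.85, 0.61],
    [35.0, 0.94, 0.82],
    [30.2, 0.88, 0.68],
])
subjective_mos = np.array([4.1, 3.2, 4.6, 3.7])   # mean opinion scores from a subjective test

# Least-squares fit of fusion weights (with a bias term).
X = np.hstack([iqm_scores, np.ones((iqm_scores.shape[0], 1))])
weights, *_ = np.linalg.lstsq(X, subjective_mos, rcond=None)

def fused_metric(scores):
    """Predict a subjective-like quality score from individual IQM scores."""
    return np.append(scores, 1.0) @ weights

print(fused_metric(np.array([31.0, 0.90, 0.72])))
```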

3.
Objective: Binocular vision is a good solution to the problem of estimating the distance to a target. Existing binocular distance estimation methods either have limited accuracy or require cumbersome data preparation, so an algorithm is needed that balances accuracy and convenience of data preparation. Method: We propose a network based on the R-CNN (region convolutional neural network) architecture that performs object detection and object distance estimation simultaneously. After the stereo image pair is fed into the network, features are extracted by the backbone, and a stereo region proposal network produces bounding boxes of the same object in the left and right images; the local features inside each pair of boxes are then passed to an object disparity estimation branch that estimates the object's distance. To obtain the bounding boxes of the same object in both images, the original region proposal network is replaced with a stereo proposal network, and a stereo bounding-box branch is introduced to regress both boxes jointly. To improve disparity accuracy, a disparity estimation branch based on group-wise correlation and 3D convolution, inspired by stereo disparity-map estimation networks, is proposed. Results: Experiments on the KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) dataset show that, compared with similar algorithms, the proposed method achieves a mean relative error of about 3.2%, far below that of disparity-map-based methods (11.3%) and close to that of 3D-object-detection-based methods (about 3.9%). The proposed disparity-branch improvement also clearly raises accuracy, reducing the mean relative error from 5.1% to 3.2%. Similar experiments on an additionally collected and annotated pedestrian surveillance dataset yield a mean relative error of about 4.6%, indicating that the method can be applied effectively to surveillance scenes. Conclusion: The proposed stereo distance estimation network combines the strengths of object detection and stereo disparity estimation and achieves high accuracy. It can be applied to vehicle-mounted cameras and surveillance scenes, and is promising for other settings equipped with stereo cameras.
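The distance recovered by such a stereo pipeline ultimately rests on the standard disparity-to-depth relation Z = f·B/d; the sketch below illustrates that conversion for an object-level disparity, with focal length and baseline values that roughly follow the KITTI setup but are assumptions here.

```python
def stereo_distance(disparity_px, focal_px=721.5, baseline_m=0.54):
    """Convert an object's pixel disparity to metric distance for a rectified stereo rig.

    focal_px and baseline_m approximate the KITTI rig, but are illustrative assumptions.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: an object with a mean disparity of 20 px lies at roughly 19.5 m.
print(stereo_distance(20.0))
```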

4.
Building upon recent developments in optical flow and stereo matching estimation, we propose a variational framework for the estimation of stereoscopic scene flow, i.e., the motion of points in the three-dimensional world from stereo image sequences. The proposed algorithm takes into account image pairs from two consecutive times and computes both depth and a 3D motion vector associated with each point in the image. In contrast to previous works, we partially decouple the depth estimation from the motion estimation, which has many practical advantages. The variational formulation is quite flexible and can handle both sparse and dense disparity maps. The proposed method is very efficient; with the depth map computed on an FPGA and the scene flow computed on the GPU, the algorithm runs at frame rates of 20 frames per second on QVGA images (320×240 pixels). Furthermore, we present solutions to two important problems in scene flow estimation: violations of intensity consistency between input images, and uncertainty measures for the scene flow result.

5.
Silk Road culture is an important bond linking the Belt and Road initiative, and its preservation is of great significance. However, for historical and geographical reasons, the representative heritage of the Silk Road is scattered or damaged and difficult to present effectively. For the virtual exhibition and digitization of Silk Road culture, this paper proposes and implements a Silk Road cultural heritage platform based on virtual reality technology. Through the restoration of historical sites and image-based 3D reconstruction, it recreates the historical sites, relics, and events related to Guyuan, Ningxia, an important node of the Silk Road…

6.

Depth image based rendering (DIBR) is a popular technique for rendering virtual 3D views in stereoscopic and autostereoscopic displays. The quality of DIBR-synthesized images may decrease due to various factors, e.g., imprecise depth maps, poor rendering techniques, or inaccurate camera parameters. The quality of synthesized images is important as it directly affects the overall user experience. Therefore, the need arises for designing algorithms to estimate the quality of DIBR-synthesized images. The existing 2D image quality assessment metrics are found to be insufficient for 3D view quality estimation because the 3D views not only contain color information but also make use of disparity to achieve a real depth sensation. In this paper, we present a new algorithm for evaluating the quality of DIBR-generated images in the absence of the original references. The human visual system is sensitive to structural information; any degradation in structure or edges affects the visual quality of the image and is easily noticeable to humans. In the proposed metric, we estimate the quality of the synthesized view by capturing the structural and textural distortion in the warped view. The structural and textural information from the input and the synthesized images is estimated and used to calculate the image quality. The performance of the proposed quality metric is evaluated on the IRCCyN IVC DIBR images dataset. Experimental evaluations show that the proposed metric outperforms the existing 2D and 3D image quality metrics by achieving a high correlation with the subjective ratings.
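As a loose illustration of measuring structural distortion in a synthesized view (not the metric proposed above), the sketch below compares gradient magnitudes of the synthesized and reference images; the Sobel-based formulation and the stabilizing constant are assumptions.

```python
import numpy as np
from scipy import ndimage

def gradient_similarity(reference, synthesized, c=1e-3):
    """Mean gradient-magnitude similarity between two grayscale images (1.0 = identical structure)."""
    g_ref = np.hypot(ndimage.sobel(reference, axis=0), ndimage.sobel(reference, axis=1))
    g_syn = np.hypot(ndimage.sobel(synthesized, axis=0), ndimage.sobel(synthesized, axis=1))
    similarity = (2 * g_ref * g_syn + c) / (g_ref ** 2 + g_syn ** 2 + c)
    return float(similarity.mean())
```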


7.

In this paper, we present an approach for identifying actions in depth action videos. First, we process the video to obtain motion history images (MHIs) and static history images (SHIs) for an action video based on the 3D Motion Trail Model (3DMTM). We then characterize the action video by extracting Gradient Local Auto-Correlations (GLAC) features from the SHIs and the MHIs. The two sets of features, i.e., GLAC features from MHIs and GLAC features from SHIs, are concatenated to obtain a representation vector for the action. Finally, we classify all action samples using the l2-regularized Collaborative Representation Classifier (l2-CRC) to recognize different human actions effectively. We evaluate the proposed method on three action datasets: MSR-Action3D, DHA and UTD-MHAD. The experimental results show that the proposed method outperforms other approaches.
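The l2-regularized collaborative representation classifier mentioned above can be read as a ridge regression over the training dictionary, with the label chosen by the smallest class-wise reconstruction residual; the sketch below is a generic implementation under that reading, not the authors' exact code (some CRC variants also normalize the residual by the coefficient norm, omitted here).

```python
import numpy as np

def l2_crc_predict(train_feats, train_labels, test_feat, lam=0.001):
    """Collaborative representation classification with l2 regularization (generic sketch).

    train_feats: (d, n) matrix whose columns are training feature vectors.
    """
    train_labels = np.asarray(train_labels)
    d, n = train_feats.shape
    # Closed-form ridge solution: alpha = (X^T X + lam I)^-1 X^T y
    gram = train_feats.T @ train_feats + lam * np.eye(n)
    alpha = np.linalg.solve(gram, train_feats.T @ test_feat)

    best_label, best_residual = None, np.inf
    for label in np.unique(train_labels):
        mask = train_labels == label
        # Reconstruct the test sample from this class's training columns only.
        residual = np.linalg.norm(test_feat - train_feats[:, mask] @ alpha[mask])
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```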


8.
In this paper, we introduce a novel approach for face depth estimation in a passive stereo vision system. Our approach is based on the rapid generation of facial disparity maps, requiring neither expensive devices nor generic face models. It incorporates face properties into the disparity estimation process to enhance the 3D face reconstruction. We propose a model-based method that is independent of the specific stereo algorithm used. Our method is a two-step process. First, an algorithm based on the Active Shape Model (ASM) is proposed to acquire a disparity model specific to the face concerned. Second, using this model as guidance, the dense disparity is calculated and the depth map is estimated. In addition, an original post-processing algorithm is proposed to detect holes and spikes in the generated depth maps caused by wrong matches and uncertainties. It is based on the smoothness property of the face and a local and global analysis of the image. Experimental results are presented to demonstrate the reconstruction accuracy and the speed of the proposed method.

9.
Hand pose estimation plays an important role in applications such as human–computer interaction, hand-function assessment, virtual reality, and augmented reality. This paper proposes a new hand pose estimation method to address two problems: the hand region usually occupies only a small part of the image, and existing single-view keypoint detectors cannot handle occlusion. The proposed method first extracts the hand region with a semantic segmentation model that incorporates a Bayesian convolutional network. Based on the localized hand, a new model using an attention mechanism and a cascaded guidance strategy is applied to obtain accurate 2D hand keypoints. A deep network that computes keypoint depth with a stereo vision algorithm is then proposed, providing view self-learning during depth estimation. This scheme is based on triangulation, and the RANSAC algorithm is used to calibrate the measurements. Finally, the 3D keypoint detections are refined by multi-task learning and reprojection training, and the 3D hand pose is extracted. Experimental results show that, compared with several representative hand-region detection algorithms, the proposed method improves both the average detection precision and the running time for hand-region detection. Moreover, comparisons of the mean end-point error (EPE_mean) and the area under the PCK curve (AUC) with existing methods show that the proposed method detects keypoints more accurately and thus yields better hand pose estimates.
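In such a pipeline, the depth of each keypoint is typically recovered by triangulating its two 2D detections against the camera projection matrices; the linear (DLT) triangulation sketch below shows the basic computation, with the projection matrices as placeholder inputs.

```python
import numpy as np

def triangulate_point(P_left, P_right, pt_left, pt_right):
    """Linear (DLT) triangulation of one keypoint from two views.

    P_left, P_right: 3x4 camera projection matrices.
    pt_left, pt_right: (x, y) pixel coordinates of the same keypoint in each view.
    """
    A = np.vstack([
        pt_left[0] * P_left[2] - P_left[0],
        pt_left[1] * P_left[2] - P_left[1],
        pt_right[0] * P_right[2] - P_right[0],
        pt_right[1] * P_right[2] - P_right[1],
    ])
    # The 3D point is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]    # de-homogenize
```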

10.

This work presents the design of a real-time system to model visual objects with the use of self-organising networks. The architecture of the system addresses multiple computer vision tasks such as image segmentation, optimal parameter estimation and object representation. We first develop a framework for building non-rigid shapes using the growth mechanism of the self-organising maps, and then we determine an optimal number of nodes, avoiding overfitting or underfitting of the network, based on information-theoretic considerations. We present experimental results for hands and faces, and we quantitatively evaluate the matching capabilities of the proposed method with the topographic product. The proposed method is easily extensible to 3D objects, as it offers similar features for efficient mesh reconstruction.


11.
A vision-based approach for calculating accurate 3D models of objects is presented. Generally, industrial visual inspection systems capable of accurate 3D depth estimation rely on extra hardware tools such as laser scanners or light pattern projectors. These tools improve the accuracy of depth estimation but also make the vision system costly and cumbersome. In the proposed algorithm, the depth and dimensional accuracy of the produced 3D depth model depend on an existing reference model instead of information from extra hardware tools. The proposed algorithm is a simple and cost-effective software-based approach to achieve accurate 3D depth estimation with minimal hardware involvement. The matching process uses the well-known coarse-to-fine strategy, involving the calculation of matching points at the coarsest level with consequent refinement up to the finest level. Vector coefficients of the wavelet transform modulus are used as matching features, where the wavelet transform modulus maxima define shift-invariant high-level features with the phase pointing to the normal of the feature surface. The technique addresses the estimation of optimal corresponding points and the corresponding 2D disparity maps, leading to the creation of an accurate depth perception model.

12.

Among the modern means of 3D geometry creation in the literature are the Multi-View Stereo (MVS) reconstruction methods, which have received much attention from the research community and the multimedia industry. Several methods have shown that it is possible to recover geometry from images alone with reconstruction accuracies paralleling those of excessively expensive laser scanners. The majority of these methods operate on image sets such as online community photo collections and estimate the surface position and orientation by minimizing a matching cost function defined over a small local region. However, these datasets are not only large but also contain challenging scene setups with different photometric effects; therefore, fine-grained details of an object's surface cannot be captured. This paper presents a robust multi-view stereo method based on metaheuristic optimization, namely Particle Swarm Optimization (PSO), in order to find the optimal depth, orientation, and surface roughness. To deal with the various shading and stereo mismatch problems caused by rough surfaces, shadows, and interreflections, we propose a robust matching/energy function which is a combination of two similarity measurements. Finally, our method computes individual depth maps that can be merged into compelling scene reconstructions. The proposed method is evaluated quantitatively using the well-known Middlebury datasets, and the obtained results show a high completeness score and accuracy comparable to those of the current top-performing algorithms.
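A bare-bones particle swarm optimization loop of the kind used above to search for per-pixel surface parameters, here minimizing a placeholder matching cost over a scalar depth; the cost function, depth bounds, and PSO constants are assumptions.

```python
import numpy as np

def pso_minimize(cost, lo, hi, n_particles=30, n_iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize a scalar cost(x) over [lo, hi] with a standard PSO update rule."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)           # particle positions (candidate depths)
    v = np.zeros(n_particles)                      # particle velocities
    pbest, pbest_cost = x.copy(), np.array([cost(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_cost)]

    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(xi) for xi in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)]
    return gbest

# Placeholder matching cost: pretend the true depth of this pixel is 2.3 m.
print(pso_minimize(lambda d: (d - 2.3) ** 2, lo=0.5, hi=10.0))
```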


13.
14.
Objective: Depth images are a common representation of 3D scene information and are widely used in stereo vision. The Kinect depth camera can capture scene depth images in real time, but due to hardware limitations and external interference, the captured depth images suffer from low resolution and inaccurate edges and cannot meet the needs of practical applications. This paper therefore proposes a super-resolution reconstruction algorithm for Kinect depth images guided by color image edges. Method: The depth image is first up-sampled to an initial estimate and the edges of this initial depth image are extracted. Exploiting the similarity between the high-resolution color image and the depth image, a structured-learning-based edge detector is then used to extract the correct depth edges. Finally, the unreliable regions between the erroneous edges of the initial depth map and the correct depth edges are located, and an edge-alignment strategy is used to interpolate and fill these regions. Results: Experiments on the NYU2 dataset compare the proposed algorithm with eight recent depth image super-resolution algorithms, using both the reconstructed depth images and the reconstructed 3D point clouds for verification. The results show that, while increasing the resolution of the depth image, the proposed algorithm effectively corrects the edges of the up-sampled depth image, aligns depth edges with texture edges, and suppresses the edge blurring introduced by up-sampling. The 3D point clouds show that the algorithm accurately separates foreground from background and gives better results than the other algorithms in applications such as 3D reconstruction. Conclusion: The proposed algorithm is generally applicable to Kinect depth image super-resolution. By exploiting the similarity between the color and depth images of the same scene and using texture edges to guide the reconstruction, it achieves good results.
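For reference, the sketch below shows joint bilateral upsampling, a commonly used color-guided depth upsampling baseline (not the edge-guided algorithm described above); the kernel radius, sigmas, and grayscale guidance are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, color_hr, scale, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Upsample a low-resolution depth map guided by a high-resolution grayscale image."""
    h, w = color_hr.shape
    depth_hr = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale          # corresponding low-resolution coordinates
            num, den = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ys, xs = int(round(yl)) + dy, int(round(xl)) + dx
                    if not (0 <= ys < depth_lr.shape[0] and 0 <= xs < depth_lr.shape[1]):
                        continue
                    # Spatial weight on the low-res grid, range weight from the guidance image.
                    w_s = np.exp(-((ys - yl) ** 2 + (xs - xl) ** 2) / (2 * sigma_s ** 2))
                    guide = color_hr[min(int(ys * scale), h - 1), min(int(xs * scale), w - 1)]
                    w_r = np.exp(-((float(guide) - float(color_hr[y, x])) ** 2) / (2 * sigma_r ** 2))
                    num += w_s * w_r * depth_lr[ys, xs]
                    den += w_s * w_r
            depth_hr[y, x] = num / den if den > 0 else depth_lr[int(yl), int(xl)]
    return depth_hr
```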

15.
Head pose estimation plays an essential role in many high-level face analysis tasks. However, accurate and robust pose estimation with existing approaches remains challenging. In this paper, we propose a novel method for accurate three-dimensional (3D) head pose estimation with noisy depth maps and high-resolution color images that are typically produced by popular RGBD cameras such as the Microsoft Kinect. Our method combines the advantages of the high-resolution RGB image with the 3D information of the depth image. For better accuracy and robustness, features are first detected using only the color image, and then the 3D feature points used for matching are obtained by combining depth information. The outliers are then filtered with depth information using rules proposed for depth consistency, normal consistency, and re-projection consistency, which effectively eliminate the influence of depth noise. The pose parameters are then iteratively optimized using the Extended LM (Levenberg-Marquardt) method. Finally, a Kalman filter is used to smooth the parameters. To evaluate our method, we built a database of more than 10K RGBD images with ground-truth poses recorded using motion capture. Both qualitative and quantitative evaluations show that our method produces notably smaller errors than previous methods.
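A minimal Kalman filter of the kind used above to smooth pose parameters over time, applied to a single parameter with a constant-position model; the process/measurement noise values and the scalar-per-parameter simplification are assumptions.

```python
import numpy as np

class ScalarKalmanSmoother:
    """Smooth one pose parameter (e.g. yaw) over time with a constant-position model."""

    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r       # process and measurement noise variances (assumed)
        self.x, self.p = None, 1.0  # state estimate and its variance

    def update(self, z):
        if self.x is None:          # initialize from the first measurement
            self.x = z
            return self.x
        # Predict: state unchanged, uncertainty grows by the process noise.
        self.p += self.q
        # Correct: blend prediction and new measurement by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

smoother = ScalarKalmanSmoother()
noisy_yaw = [10.2, 9.7, 10.5, 11.8, 10.1]
print([round(smoother.update(z), 2) for z in noisy_yaw])
```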

16.
Detecting objects, estimating their pose, and recovering their 3D shape are critical problems in many vision and robotics applications. This paper addresses the above needs using a two-stage approach. In the first stage, we propose a new method called DEHV – Depth-Encoded Hough Voting. DEHV jointly detects objects, infers their categories, estimates their pose, and infers/decodes object depth maps from either a single image (when no depth maps are available in testing) or a single image augmented with a depth map (when this is available in testing). Inspired by the Hough voting scheme introduced in [1], DEHV incorporates depth information into the process of learning distributions of image features (patches) representing an object category. DEHV takes advantage of the interplay between the scale of each object patch in the image and its distance (depth) from the corresponding physical patch attached to the 3D object. Once the depth map is given, a full reconstruction is achieved in a second (3D modelling) stage, where modified or state-of-the-art 3D shape and texture completion techniques are used to recover the complete 3D model. Extensive quantitative and qualitative experimental analysis on existing datasets [2], [3], [4] and a newly proposed 3D table-top object category dataset shows that our DEHV scheme obtains competitive detection and pose estimation results. Finally, the quality of 3D modelling in terms of both shape completion and texture completion is evaluated on a 3D modelling dataset containing both indoor and outdoor object categories. We demonstrate that our overall algorithm can obtain convincing 3D shape reconstruction from just one single uncalibrated image.

17.
Objective: Light field cameras can sample a scene from multiple viewpoints in a single exposure, which gives them a unique advantage in depth estimation. Handling occlusion is one of the main difficulties in light-field depth estimation. Existing methods detect the occlusion state of each view from a 2D scene model, but occlusion is determined by the 3D structure of the sampled scene, so a 2D model alone cannot detect it precisely, and inaccurate occlusion detection degrades the subsequent depth estimation. To address this problem, a light-field depth estimation method guided by a 3D occlusion model is proposed. Method: Foreground–background relations and depth-difference information between objects are added to the 2D model to obtain a 3D model of the scene. Occlusion in all views is then inferred from the light transport paths in this 3D model and recorded in an occlusion map. Guided by the occlusion map, different cost volumes are used for depth estimation in occluded and non-occluded regions. In occluded regions, the occlusion map masks out the occluded views and depth is computed from the photo-consistency of the remaining views; in non-occluded regions, a new defocus grid-matching cost volume is designed based on the local depth continuity, which perceives a wider range of color texture than conventional cost volumes and yields smoother depth maps. To further improve accuracy, a joint optimization framework based on the expectation-maximization (EM) algorithm is designed around the mutual dependence of occlusion detection and depth estimation, in which the occlusion map and the depth map iteratively improve each other. Results: Experiments show that, in most test scenes, the proposed method achieves the best occlusion detection and depth estimation results for single occlusions, multiple occlusions, and low-contrast occlusions, with the mean square error (MSE) reduced by about 19.75% on average compared with the second-best results. Conclusion: Theoretical analysis and experiments show that the 3D occlusion model outperforms the conventional 2D occlusion model for occlusion detection, and the proposed method is better suited to depth estimation in scenes with complex occlusion.

18.
We are currently developing a vision-based system aiming to perform a fully automatic pipeline for in situ photorealistic three-dimensional (3D) modeling of previously unknown, complex and unstructured underground environments. Since navigation sensors are not reliable in such environments, our system embeds only passive (camera) and active (laser) 3D vision sensors. Laser range finders are particularly well suited for generating dense 3D maps by aligning multiple scans acquired from different viewpoints. Nevertheless, current Iterative Closest Point (ICP)-based scan matching techniques rely on heavy human-operator intervention during a post-processing step. Since a human operator cannot access the site, these techniques are not suitable in high-risk underground environments. This paper presents an automatic on-line scan matcher able to cope with the architecture of current 3D laser scanners and to process either intensity or depth data to align scans, providing robustness with respect to the capture device. The proposed implementation emphasizes the portability of our algorithm on either single- or multi-core embedded platforms for on-line mosaicing onboard 3D scanning devices. The proposed approach addresses key issues for in situ 3D modeling in difficult-to-access and unstructured environments and solves the 3D scan matching problem with an environment-independent solution. Several tests performed in two prehistoric caves illustrate the reliability of the proposed method.
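A stripped-down point-to-point ICP iteration of the kind underlying such scan matching (nearest-neighbor association followed by a closed-form SVD rigid alignment); the brute-force correspondence search and the convergence threshold are simplifying assumptions.

```python
import numpy as np

def icp_point_to_point(src, dst, n_iters=30, tol=1e-6):
    """Align 3D point set src (n x 3) to dst (m x 3); returns the aligned copy of src."""
    src = src.copy()
    prev_error = np.inf
    for _ in range(n_iters):
        # 1. Brute-force nearest-neighbor correspondences (a k-d tree is used in practice).
        dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(dists, axis=1)]

        # 2. Closed-form rigid transform (Umeyama/SVD) between the matched sets.
        mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s

        # 3. Apply the transform and check convergence on the mean match distance.
        src = src @ R.T + t
        error = np.mean(np.min(dists, axis=1))
        if abs(prev_error - error) < tol:
            break
        prev_error = error
    return src
```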

19.
Scene depth estimation is a classic problem in computer vision and an important step in applications such as 3D reconstruction and image synthesis. Deep-learning-based monocular depth estimation has developed rapidly, and many network architectures have been proposed. This paper surveys the latest progress in deep-learning-based monocular depth estimation, reviewing the development of supervised and unsupervised methods. Focusing on the optimization ideas behind monocular depth estimation and how they appear in network architectures, supervised methods are grouped into five categories: multi-scale feature fusion, methods combined with conditional random fields (CRFs), ordinal-relation-based methods, methods combining multiple image cues, and others; unsupervised methods are likewise grouped into five categories: stereo-vision-based methods, structure-from-motion (SfM)-based methods, methods combined with adversarial networks, ordinal-relation-based methods, and methods incorporating uncertainty. In addition, the datasets and evaluation metrics commonly used for monocular depth estimation are introduced, and the current state and open challenges of deep-learning-based monocular depth estimation are discussed in terms of accuracy, generalization, application scenarios, and uncertainty in unsupervised networks, providing a fairly comprehensive reference for researchers in related fields.

20.