Similar Documents
 20 similar documents retrieved
1.
Depth information indicates the distance between an object in the scene and the camera, and depth extraction is a key technology in 3D video systems. The emergence of Kinect makes high-resolution depth map capture possible. However, the depth map captured by Kinect cannot be used directly because of holes and noise, and therefore needs to be repaired. We propose a texture-combined inpainting algorithm in this paper. First, the foreground is segmented using the color characteristics of the texture image in order to repair the foreground of the depth map. Second, region growing is used to determine the match region for each hole in the depth map, and the match region is positioned accurately according to the texture information. The match region is then weighted to fill the hole. Finally, a Gaussian filter is used to remove noise from the depth map. Experimental results show that the proposed method effectively repairs the holes in the original depth map and yields an accurate, smooth depth map that can be used to render a virtual image of good quality.
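As a rough illustration of the repair pipeline sketched above (weighted filling of holes from texture-similar neighbors followed by Gaussian denoising), the following minimal Python sketch operates on a raw depth map where zeros mark holes; the window size, weighting rule, and function name are illustrative assumptions, not the authors' exact matching and weighting scheme.

import numpy as np
from scipy.ndimage import gaussian_filter

def fill_depth_holes(depth, gray, win=5, sigma_c=10.0, sigma_s=1.0):
    # depth: HxW array, 0 marks a hole (as in raw Kinect output)
    # gray:  HxW grayscale texture image aligned with the depth map
    out = depth.astype(np.float64).copy()
    h, w = depth.shape
    r = win // 2
    ys, xs = np.where(depth == 0)                       # hole pixels
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - r), min(h, y + r + 1)
        x0, x1 = max(0, x - r), min(w, x + r + 1)
        d = depth[y0:y1, x0:x1]
        c = gray[y0:y1, x0:x1]
        valid = d > 0
        if not valid.any():
            continue                                    # no usable neighbors in this window
        wgt = np.exp(-((c - gray[y, x]) ** 2) / (2 * sigma_c ** 2)) * valid
        if wgt.sum() > 0:
            out[y, x] = (wgt * d).sum() / wgt.sum()     # texture-weighted fill
    return gaussian_filter(out, sigma=sigma_s)          # final noise removal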

2.
In this paper, an efficient depth-image-based rendering (DIBR) method with depth reliability maps (DRM) is proposed to improve the quality of synthesized images. First, a DRM-based occlusion-aware approach is developed to obtain a segmentation mask that explicitly indicates where the information in an intermediate image should preferably be blended. Next, an improved weight model for view creation is introduced to enhance the quality of synthesized images. Finally, a distance- and depth-based sub-pixel weighted (DDSPW) algorithm is presented to solve the visibility and resampling problems. Experimental results demonstrate that the proposed DIBR scheme outperforms three other methods for view synthesis in both subjective visual perception and objective assessment in terms of peak signal-to-noise ratio and the structural similarity index.
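The distance-based blending behind such view creation can be sketched as follows; the inverse-baseline-distance weights and hole handling below are generic DIBR blending conventions and illustrative names, not necessarily the exact DDSPW formulation.

import numpy as np

def blend_views(warp_left, warp_right, hole_l, hole_r, d_left, d_right):
    # warp_left/right : HxWx3 arrays already warped to the virtual viewpoint
    # hole_l/hole_r   : HxW boolean masks, True where a view has no data
    # d_left/d_right  : baseline distances from the virtual camera to each reference view
    w_l = (1.0 / d_left) / (1.0 / d_left + 1.0 / d_right)   # closer reference gets more weight
    w_r = 1.0 - w_l
    out = w_l * warp_left + w_r * warp_right
    only_l = (~hole_l) & hole_r                # keep whichever view is actually visible
    only_r = hole_l & (~hole_r)
    out[only_l] = warp_left[only_l]
    out[only_r] = warp_right[only_r]
    out[hole_l & hole_r] = 0                   # remaining holes are left for inpainting
    return out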

3.
Hole and crack filling is the most important issue in depth-image-based rendering (DIBR) algorithms that generate virtual view images when only one view image and one depth map are available. This paper proposes a priority patch inpainting algorithm for hole filling in DIBR algorithms that generate multiple virtual views. A texture-based interpolation method is applied for crack filling. Then, an inpainting-based algorithm is applied patch by patch for hole filling. A prioritized method for selecting the critical patch is also proposed to reduce computation time. Finally, the proposed method is realized on the compute unified device architecture (CUDA) parallel computing platform running on a graphics processing unit. Simulation results show that the proposed algorithm is 51-fold faster for virtual view synthesis and achieves better virtual view quality than the traditional DIBR algorithm comprising depth preprocessing, warping, and hole filling.

4.
In this paper, we present a new depth upsampler in which the upsampled depth map is computed at each pixel as the average of neighboring pixels, weighted by color- and depth-intensity filters. The proposed method features two parameters: an adaptive smoothing parameter and a control parameter. The adaptive smoothing parameter is determined from the ratio between the depth map and its corresponding color image and is used to control the dynamic range of the color-range filter. The control parameter assigns a larger weighting factor to pixels in the object to which a missing pixel belongs. In a comparison with five existing upsamplers, the proposed method outperforms all five in terms of both objective and subjective quality.
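The color- and depth-weighted averaging described above resembles joint bilateral upsampling; a minimal sketch of that general idea is shown below, with fixed (non-adaptive) smoothing parameters as an illustrative simplification of the paper's adaptive scheme, and with illustrative argument names.

import numpy as np

def joint_bilateral_upsample(depth_lr, color_lr, color_hr, scale, sigma_s=1.0, sigma_c=8.0):
    # depth_lr, color_lr : hxw low-resolution depth and grayscale guidance
    # color_hr           : HxW high-resolution grayscale guidance (H = h*scale, W = w*scale)
    H, W = color_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    r = max(1, int(2 * sigma_s))                        # window radius in low-resolution pixels
    for y in range(H):
        for x in range(W):
            cy, cx = y / scale, x / scale               # position on the low-resolution grid
            y0, y1 = int(max(0, cy - r)), int(min(h, cy + r + 1))
            x0, x1 = int(max(0, cx - r)), int(min(w, cx + r + 1))
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((color_lr[y0:y1, x0:x1] - color_hr[y, x]) ** 2) / (2 * sigma_c ** 2))
            wgt = spatial * rng                         # spatial filter x color-range filter
            out[y, x] = (wgt * depth_lr[y0:y1, x0:x1]).sum() / wgt.sum()
    return out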

5.
In the multiview video plus depth (MVD) format, virtual views are generated from decoded texture videos and the corresponding decoded depth images through depth-image-based rendering (DIBR). 3DV-ATM is a reference model for H.264/AVC-based multiview video coding (MVC) and aims at achieving high coding efficiency for 3D video in MVD format. Depth images are first downsampled and then coded by 3DV-ATM. However, the sharp object boundaries characteristic of depth images do not match well with the transform-coding nature of H.264/AVC in 3DV-ATM. Depth boundaries are often blurred with ringing artifacts in the decoded depth images, which results in noticeable artifacts in synthesized virtual views. This paper presents a low-complexity adaptive depth truncation filter that recovers the sharp object boundaries of depth images using adaptive block repositioning and expansion to increase the accuracy of depth value refinement. The approach is very efficient: it avoids false depth boundary refinement when block boundaries lie around depth edge regions and ensures sufficient information within the processing block for depth layer classification. Experimental results demonstrate that sharp depth edges can be recovered by the proposed filter and that boundary artifacts in the synthesized views can be removed. The proposed method provides improvements of up to 3.25 dB in depth map enhancement and a bitrate reduction of 3.06% for the synthesized views.
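The core of such boundary recovery is reclassifying blurred depth values into foreground and background layers; a minimal block-level sketch of that two-layer snapping is given below, without the paper's adaptive block repositioning and expansion, and with the layer-split rule and min_spread threshold as illustrative assumptions.

import numpy as np

def sharpen_depth_block(block, min_spread=5.0):
    # Snap depth values in a block to one of two layers (foreground/background),
    # removing ringing and blur around an object boundary.
    thr = block.mean()
    fg, bg = block[block >= thr], block[block < thr]
    if fg.size == 0 or bg.size == 0 or fg.mean() - bg.mean() < min_spread:
        return block                                    # flat block: nothing to sharpen
    fg_mean, bg_mean = fg.mean(), bg.mean()
    # each pixel is assigned to the nearer layer mean
    return np.where(np.abs(block - fg_mean) <= np.abs(block - bg_mean), fg_mean, bg_mean)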

6.
This paper presents new hole-filling methods for generating multiview images by depth-image-based rendering (DIBR). Holes appear both in depth images captured by 3D sensors and in the multiview images rendered by DIBR. The holes are often found around the background regions of the images because the background is prone to occlusion by foreground objects. Background-oriented priority and gradient-oriented priority are introduced to determine the order of hole filling after the DIBR process. In addition, to obtain a sample for filling a hole region, we propose fusing depth and color information to obtain a weighted sum of two patches for the depth (or rendered depth) images, and a new distance measure to find the best-matched patch for the rendered color images. The conventional method produces jagged edges and blurring in the final results, whereas the proposed method minimizes them, which is important for high fidelity in stereo imaging. The experimental results show that, by reducing these errors, the proposed methods significantly improve the hole-filling quality of the generated multiview images.

7.
A depth image inpainting algorithm based on depth fusion is proposed. For a single depth image, morphological operations are first applied to optimize the hole regions and remove gaps and random noise in the depth image. Then, for the iterative filtering process, a new depth fusion strategy is proposed to compute depth values; by analyzing the hole regions, the type of each hole region in the depth image is determined and the structuring element for the iterative operation is selected adaptively. Finally, a local depth reconstruction method is used to repair damaged depth values along edges. Experimental results show that the proposed algorithm repairs holes and gaps in the depth image well while preserving the depth distribution of the original depth map, and overcomes shortcomings such as depth distortion and edge blurring during inpainting. Comparative experiments on the standard Middlebury dataset show that the proposed algorithm achieves good results compared with other algorithms.

8.
Recent development of depth acquisition techniques has accelerated the progress of 3D video in the market. Using the acquired depth, frames at arbitrary viewpoints can be generated with the depth-image-based rendering (DIBR) technique in a free-viewpoint video system. Unlike texture video, a depth sequence is mainly used for virtual view generation rather than for viewing. Inspired by this, a depth frame interpolation scheme using texture information is proposed in this paper. The proposed scheme consists of texture-aided motion estimation (TAME) and texture-aided motion compensation (TAMC), which fully exploit the correlation between depth and the accompanying textures. The optimal motion vectors in TAME and the best interpolation weights in TAMC are selected taking into account the geometric mapping relationship between depth and the accompanying texture frames. The proposed scheme not only maintains temporal consistency in the interpolated depth sequence but also improves the quality of virtual frames generated from the interpolated depth. In addition, it can easily be applied to any motion-compensation-based frame interpolation scheme. Experimental results demonstrate that the proposed depth frame interpolation scheme improves the quality of virtual-view texture frames by both subjective and objective criteria compared with existing schemes.

9.
In this paper, a new coding method for multiview depth video is presented. Considering the smooth structure and sharp edges of depth maps, a segmentation-based approach is proposed. This preserves the depth contours and thus introduces fewer artifacts in the depth perception of the video. To reduce the cost associated with partition coding, an approximation of the depth partition is built from the decoded color view segmentation. This approximation is refined by sending complementary information about the relevant differences between the color and depth partitions. To code the depth content of each region, a decomposition into an orthogonal basis is used in this paper, although similar decompositions could also be employed. Experimental results show that the proposed segmentation-based depth coding method outperforms H.264/AVC and H.264/MVC by more than 2 dB at similar bitrates.
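The per-region approximation step can be illustrated with a simple least-squares plane fit standing in for the orthogonal-basis decomposition used in the paper; the plane model and function name are illustrative assumptions.

import numpy as np

def fit_region_plane(depth, mask):
    # Approximate the depth inside one segmented region with a plane z = a*x + b*y + c.
    # depth : HxW depth map, mask : HxW boolean mask of the region.
    ys, xs = np.nonzero(mask)
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(np.float64)
    z = depth[ys, xs].astype(np.float64)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)      # least-squares plane fit
    recon = A @ coeffs                                   # smooth per-region reconstruction for coding
    return coeffs, recon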

10.
This paper presents a novel block-adaptive quantization scheme for efficient bit allocation without side information in depth map coding. Since the type of distortion in a depth map has different effects on the visual artifacts in a synthesized view, the proposed method adaptively assigns the number of bits according to the characteristics of the corresponding texture block. I have studied the details of depth map distortion and the resulting rendered-view distortion, modeled these analytically, and then proposed a new rate and distortion model for depth map coding. Finally, I derived a simple closed-form solution based on the proposed rate and distortion model, which determines the block-adaptive quantization parameter without any side information. Experimental results show that the proposed scheme achieves coding gains of more than 0.6% and 1.4% for quarter- and full-resolution depth maps, respectively, in a multiview-plus-depth 3D system.

11.
The quality of views synthesized by depth-image-based rendering (DIBR) depends strongly on the accuracy of the depth map, especially on the alignment of depth edges with object boundaries in the texture image. In practice, misalignment of sharp depth map edges is the major cause of annoying artifacts in the disoccluded regions of the synthesized views. The conventional smoothing-filter approach blurs the depth map to reduce the disoccluded regions; its drawbacks are degraded 3D perception in the reconstructed 3D videos and destruction of texture in background regions. The conventional edge-preserving filter uses the color image to align depth edges with color edges; unfortunately, the characteristics of color edges and depth edges are very different, which causes annoying boundary artifacts in the synthesized virtual views. A recent reliability-based approach uses reliable warping information from other views to fill the holes, but it is not suitable for view synthesis in video-plus-depth-based DIBR applications. In this paper, a new depth map preprocessing approach is proposed. It uses watershed color segmentation to correct the depth map misalignment, and the depth map object boundaries are then extended to cover the transitional edge regions of the color image. This approach can handle sharp depth map edges lying either inside or outside the object boundaries in the 2D sense. The quality of the disoccluded regions of the synthesized views is significantly improved, and unknown depth values can also be estimated. Experimental results show that the proposed method achieves superior performance for view synthesis by DIBR, especially for generating large-baseline virtual views.
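A minimal sketch of the watershed-driven alignment idea (give each color segment a single representative depth so that depth discontinuities snap to color object boundaries) is shown below using scikit-image; the regular-grid marker placement and the per-segment median are crude illustrative simplifications of the paper's correction and boundary-extension steps.

import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def align_depth_to_color(depth, color_rgb, n_markers=400):
    # Each watershed region of the color image receives the median depth of the
    # pixels it covers, so depth edges follow color edges.
    gradient = sobel(rgb2gray(color_rgb))               # edge strength of the color image
    markers = np.zeros(gradient.shape, dtype=int)       # simple regular grid of seed markers
    ys = np.linspace(0, gradient.shape[0] - 1, int(np.sqrt(n_markers)), dtype=int)
    xs = np.linspace(0, gradient.shape[1] - 1, int(np.sqrt(n_markers)), dtype=int)
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            markers[y, x] = i * len(xs) + j + 1
    labels = watershed(gradient, markers)
    out = depth.astype(np.float64).copy()
    for lab in np.unique(labels):
        region = labels == lab
        out[region] = np.median(depth[region])          # one depth value per color segment
    return out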

12.
To reduce the complexity of depth coding while preserving the quality of the reconstructed virtual views, a fast depth map coding algorithm oriented to virtual view rendering and based on the JNDD model is proposed. A just-noticeable depth difference (JNDD) model is introduced to divide the depth map into vertical edge regions, where rendering distortion is perceptible, and flat regions, where distortion can hardly be noticed by the human eye; two corresponding search strategies are designed for macroblock mode selection during coding. Experimental results show that, compared with the JM coding scheme, the proposed method significantly reduces coding complexity while keeping the virtual view quality and the coding bitrate essentially unchanged, which helps speed up the depth coding module in 3D video systems.
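A minimal sketch of the JNDD-based region classification (vertical-edge regions versus flat regions) is shown below; the block size and the JNDD threshold value are illustrative assumptions rather than the paper's calibrated model, and the mode-search strategies themselves are not sketched.

import numpy as np

def classify_depth_blocks(depth, block=16, jndd=8.0):
    # Split a depth map into edge-sensitive and flat macroblock classes.
    # A block whose maximum horizontal depth difference exceeds the just-noticeable
    # depth difference threshold contains a vertical edge that matters for rendering;
    # other blocks are flat and can use a cheaper mode search.
    h, w = depth.shape
    grad_x = np.abs(np.diff(depth.astype(np.float64), axis=1))   # horizontal differences
    edge_blocks, flat_blocks = [], []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            g = grad_x[by:by + block, bx:bx + block - 1]
            (edge_blocks if g.size and g.max() > jndd else flat_blocks).append((by, bx))
    return edge_blocks, flat_blocks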

13.
Depth completion, which incorporates additional sparse depth information from range sensors, substantially improves the accuracy of monocular depth estimation, especially with deep-learning-based methods. However, these methods can hardly produce satisfactory depth results when the sensor configuration changes at test time, which matters for real-world applications. In this paper, the problem is tackled by a novel two-stage mechanism that decomposes depth completion into two subtasks: relative depth map estimation and scale recovery. The relative depth map is first estimated from a single color image with our scale-invariant loss function. The scale map is then recovered using the additional sparse depth. Experiments on different densities and patterns of the sparse depth input show that our model consistently produces satisfactory depth results. Moreover, our approach achieves state-of-the-art performance on the indoor NYUv2 dataset and performs competitively on the outdoor KITTI dataset, demonstrating the effectiveness of our method.
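The scale-recovery stage can be illustrated with a global least-squares variant (the paper recovers a spatially varying scale map, which is more general); the closed-form scale below follows directly from minimizing the squared error at the sensed pixels, and the function name is illustrative.

import numpy as np

def recover_scale(relative_depth, sparse_depth):
    # relative_depth : HxW prediction from the monocular network (correct up to scale)
    # sparse_depth   : HxW array with metric values at sensed pixels, 0 elsewhere
    # The scale minimizing ||s * relative - sparse||^2 over valid pixels is
    # s = sum(relative * sparse) / sum(relative^2).
    valid = sparse_depth > 0
    r = relative_depth[valid].astype(np.float64)
    m = sparse_depth[valid].astype(np.float64)
    s = (r * m).sum() / (r * r).sum()
    return s * relative_depth                            # metric-scaled dense depth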

14.
An improved whole-frame error concealment method based on depth-image-based rendering (DIBR) is designed for multiview video with depth. An optimal reference-view selection is first proposed. The method then refines three aspects of the DIBR-projected pixels. First, missing one-to-one pixels are concealed with pixels from another view; the illumination differences between views are handled using the motion vector of the projected coordinates and a reverse DIBR procedure. Second, the generation of many-to-one pixels is improved using their depth information. Third, hole pixels are filled using motion vectors estimated efficiently from a weighted function of the neighboring available motion vectors and their distance to the target hole pixel. The experimental results show that, compared with the state-of-the-art method, the combined system of the four proposed techniques is superior and improves performance by up to 5.53 dB.

15.
16.
With the prevalence of face authentication applications, preventing malicious attacks using fake faces such as photos or videos, i.e., face anti-spoofing, has attracted much attention recently. However, while an increasing number of works on face anti-spoofing based on 2D RGB cameras have been reported, most of them cannot handle a variety of attack methods. In this paper we propose a robust representation that jointly models 2D texture information and depth information for face anti-spoofing. The texture feature is learned from 2D facial image regions using a convolutional neural network (CNN), and the depth representation is extracted from images captured by a Kinect. A face in front of the camera is classified as live only if it is categorized as live by both cues. We collected a face anti-spoofing dataset with depth information and report extensive experimental results to validate the robustness of the proposed method.

17.
Time-of-flight (ToF) sensors are popular devices for extracting 3D information from a scene, but they are susceptible to noise and data loss, which creates holes and gaps along object boundaries. The most common approaches to this problem rely on an accompanying color image and give good results; however, not all ToF devices produce color information. Mathematical morphology provides operators that can handle noise in single depth frames. In this paper, a new method for filtering single depth maps when no color image is available is presented, based on a modification of the morphological closing by reconstruction algorithm. The proposed method removes noise while strongly preserving contours, and it is compared, both qualitatively and quantitatively, with other state-of-the-art filters. The proposed method is an improvement of the closing by reconstruction algorithm that can be applied to filter depth maps from ToF devices.
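Greyscale closing by reconstruction, the operator that the proposed filter modifies, can be written compactly with scikit-image; this sketch is the standard operator, not the paper's modification, and the structuring-element size is an illustrative choice.

import numpy as np
from scipy.ndimage import grey_dilation
from skimage.morphology import reconstruction

def closing_by_reconstruction(depth, size=5):
    # Fill small dark holes while keeping object contours in place.
    # The marker is the greyscale dilation of the depth map; reconstruction by
    # erosion then shrinks it back onto the original, so gaps are filled but
    # boundaries are not displaced (unlike a plain morphological closing).
    img = depth.astype(np.float64)
    marker = grey_dilation(img, size=(size, size))       # marker >= mask everywhere
    return reconstruction(marker, img, method='erosion')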

18.
Real-time depth extraction and multiview rendering algorithm based on Kinect
王奎  安平  张艳  程浩  张兆扬 《光电子.激光》(Journal of Optoelectronics·Laser), 2012, (10): 1949-1956
A real-time depth extraction algorithm based on Kinect and a single-texture-plus-depth multiview rendering method are proposed. On the capture side, Kinect is used to acquire the scene texture and depth, and a fast repair algorithm is proposed for the holes in the depth map output by Kinect. On the display side, to handle the large holes produced by single-texture-plus-depth depth-image-based rendering (DIBR), a rendering method based on background estimation and foreground segmentation is adopted. Experimental results show that the proposed method extracts good-quality depth maps in real time and effectively repairs the large holes produced during DIBR rendering, yielding multiple virtual-view images of good quality. With the proposed depth acquisition and rendering algorithms at its core, a depth-based stereoscopic video system is implemented; the final interleaved stereoscopic display of the virtual views shows a good 3D effect, further validating the effectiveness of the proposed algorithms. The system can be used for multiview stereoscopic video recording and playback of real scenes.
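The single-texture-plus-depth DIBR warping at the heart of such a system can be sketched as a horizontal pixel shift by the disparity d = f·B/Z for a parallel camera pair; the 8-bit depth-to-Z conversion, the camera parameters, and the absence of z-buffering below are illustrative simplifications rather than the authors' implementation.

import numpy as np

def dibr_warp(texture, depth, focal, baseline, z_near, z_far):
    # texture : HxW or HxWx3 reference image; depth : HxW 8-bit depth map (255 = near)
    # Convert quantized depth to metric Z, then shift each pixel by d = focal * baseline / Z.
    h, w = depth.shape
    z = 1.0 / (depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal * baseline / z).astype(int)
    virtual = np.zeros_like(texture)
    hole = np.ones((h, w), dtype=bool)                   # True where no pixel maps
    for y in range(h):
        for x in range(w):
            xv = x + disparity[y, x]
            if 0 <= xv < w:
                # no z-buffering here: later pixels simply overwrite earlier ones;
                # a real renderer processes pixels in depth order
                virtual[y, xv] = texture[y, x]
                hole[y, xv] = False
    return virtual, hole                                 # holes are filled in a later stage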

19.
With the emerging development of three-dimensional (3D) technologies, 3D visual saliency modeling is becoming particularly important and challenging. This paper presents a new saliency computational model for stereoscopic 3D images guided by depth perception and visual comfort. The prominent advantage of the proposed model is that it incorporates the influence of depth perception and visual comfort on 3D visual saliency computation. The proposed saliency model is composed of three components: 2D image saliency, depth saliency, and visual-comfort-based saliency. In the model, color saliency, texture saliency, and spatial compactness are computed separately and fused to derive the 2D image saliency, while global disparity contrast is used to compute the depth saliency. In particular, we train a visual comfort prediction function to classify a stereoscopic image pair as high-comfort stereo viewing (HCSV) or low-comfort stereo viewing (LCSV), and devise different computational rules to generate a visual-comfort-based saliency map. The final 3D saliency map is obtained by a linear combination and enhanced by a "saliency-center bias" model. Experimental results show that the proposed 3D saliency model outperforms state-of-the-art models in predicting human eye fixations and visual comfort assessment.
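The final fusion step (linear combination of the three saliency components followed by center-bias enhancement) can be sketched as follows; the combination weights and the Gaussian center-bias width are illustrative values, not the model's trained parameters.

import numpy as np

def fuse_3d_saliency(sal_2d, sal_depth, sal_comfort, weights=(0.5, 0.3, 0.2), sigma=0.3):
    # sal_2d, sal_depth, sal_comfort : HxW maps normalized to [0, 1]
    # weights : illustrative combination weights; sigma : center-bias width as a
    # fraction of the image size
    w1, w2, w3 = weights
    fused = w1 * sal_2d + w2 * sal_depth + w3 * sal_comfort
    h, w = fused.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    center_bias = np.exp(-(((yy - cy) / (sigma * h)) ** 2 +
                           ((xx - cx) / (sigma * w)) ** 2) / 2.0)
    out = fused * center_bias                            # "saliency-center bias" enhancement
    return (out - out.min()) / (out.max() - out.min() + 1e-12)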

20.
This paper addresses depth data recovery in multiview video-plus-depth communications affected by transmission errors and/or packet loss. The novel aspects of the proposed method lie in the use of geometric transforms and warping vectors, which capture complex motion and view-dependent deformations that are not efficiently handled by traditional motion and/or disparity compensation methods. By exploiting the geometric nature of depth information, a region matching approach combined with depth contour reconstruction is devised to achieve accurate interpolation of arbitrary shapes within lost regions of depth maps. The simulation results show that, for packet loss rates up to 20%, the depth maps recovered by the proposed method produce virtual views of better quality than existing methods based on motion information and spatial interpolation. An average PSNR gain of 1.48 dB is obtained in virtual views synthesised from the recovered depth maps.
