Similar Articles
20 similar articles found.
1.
2.
Traditional depth estimation methods typically exploit either variations in intrinsic parameters such as aperture and focus (as in depth from defocus) or variations in extrinsic parameters such as the position and orientation of the camera (as in stereo). When operating off-the-shelf (OTS) cameras in a general setting, these parameters influence the depth of field (DOF) and field of view (FOV). While a finite DOF forces one to deal with defocus blur, a larger FOV necessitates camera motion during image acquisition. As a result, for unfettered operation of an OTS camera, it becomes inevitable to account for pixel motion as well as optical defocus blur in the captured images. We propose a depth estimation framework using calibrated images captured under general camera motion and lens parameter variations. Our formulation seeks to generalize the constrained areas of stereo and shape from defocus (SFD)/focus (SFF) by handling, in tandem, various effects such as focus variation, zoom, parallax and stereo occlusions, all under one roof. One of the associated challenges in such an unrestrained scenario is the problem of removing user-defined foreground occluders in the reference depth map and image (termed inpainting of depth and image). Inpainting is achieved by exploiting the cue from motion parallax to discover (in other images) the correspondence/color information missing in the reference image. Moreover, considering that the observations could be differently blurred, it is important to ensure that the degree of defocus in the missing regions (in the reference image) is coherent with the local neighbours (defocus inpainting).

3.
4.
Owing to recent advances in depth sensors and computer vision algorithms, depth images are often available with co-registered color images. In this paper, we propose a simple but effective method for obtaining an all-in-focus (AIF) color image from a database of color and depth image pairs. Since defocus blur is inherently depth-dependent, the color pixels are first grouped according to their depth values. The defocus blur parameters are then estimated from the amount of defocus blur of the grouped pixels. Given a defocused color image and its estimated blur parameters, the AIF image is produced by adopting the conventional pixel-wise mapping technique. In addition, the availability of the depth image disambiguates objects located in front of the in-focus object from those behind it, and thus facilitates image refocusing. We demonstrate the effectiveness of the proposed algorithm using both synthetic and real color and depth images.
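The depth-grouping step and the depth dependence of the blur described in this abstract can be sketched as follows. The layer count, the lumped calibration constant `k`, and the function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def blur_sigma(depth, focus_depth, k=20.0):
    # Depth-dependent defocus: sigma grows with the deviation of 1/depth
    # from 1/focus_depth (thin-lens-style model; k lumps the aperture and
    # focal-length constants -- an assumed calibration value).
    return k * np.abs(1.0 / np.asarray(depth) - 1.0 / focus_depth)

def group_by_depth(depth_map, n_layers=4):
    # First step of the method: group pixels into depth layers so that one
    # blur parameter can be estimated per layer.
    edges = np.linspace(depth_map.min(), depth_map.max(), n_layers + 1)
    labels = np.clip(np.digitize(depth_map, edges) - 1, 0, n_layers - 1)
    return labels, edges
```

Each layer would then get its own estimated blur parameter, and the per-layer restorations would be composited into the AIF image.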

5.
Single-Image Blind Deconvolution Based on Sparse Representation and Structural Self-Similarity
常振春, 禹晶, 肖创柏, 孙卫东. 《自动化学报》 (Acta Automatica Sinica), 2017, 43(11): 1908-1919
Blind image deconvolution studies how to restore the original sharp image from a blurred observation when the blur kernel is unknown. Since blind deconvolution is an ill-posed problem, existing algorithms all exploit, directly or indirectly, various kinds of prior knowledge. This paper proposes a single-image blind deconvolution algorithm that combines sparse representation with structural self-similarity. The algorithm adds the sparsity prior and the structural self-similarity prior of the image as regularization constraints to the objective function of blind deconvolution. Exploiting the structural self-similarity between different scales of the image, it uses a downsampled version of the observed blurred image as the training set for the sparse-representation dictionary, which ensures that the sharp image is sparse under that dictionary. The blur kernel and the sharp image are finally estimated in an alternating fashion. Experiments on synthetic and real data show that the algorithm accurately estimates the blur kernel, restores sharp image edges, and is highly robust.

6.
Because a 360-degree catadioptric imaging system forms images via a curved reflecting surface, the captured omnidirectional images inevitably suffer from defocus blur, which limits their applications. To address this, the paper first analyzes the relation between incident rays and image blur in the catadioptric system and builds a mathematical model of omnidirectional defocus; it then uses the coded-aperture technique to estimate the amount of blur at every position in the image; finally, it restores the image with a deconvolution algorithm. The method effectively removes the defocus blur and yields a sharp, all-in-focus omnidirectional image, which is valuable for omnidirectional intelligent video surveillance, omnidirectional battlefield perception, and omnidirectional robot vision.

7.
Depth from defocus (DFD) is a technique that restores scene depth based on the amount of defocus blur in the images. DFD usually captures two differently focused images, one near-focused and the other far-focused, and calculates the size of the defocus blur in these images. However, DFD using a regular circular aperture is not sensitive to depth, since the point spread function (PSF) is symmetric and only the radius changes with the depth. In recent years, the coded aperture technique, which uses a special pattern for the aperture to engineer the PSF, has been used to improve the accuracy of DFD estimation. The technique is often used to restore an all-in-focus image and estimate depth in DFD applications. Use of a coded aperture has a disadvantage in terms of image deblurring, since deblurring requires a higher signal-to-noise ratio (SNR) of the captured images. The aperture attenuates incoming light in controlling the PSF and, as a result, decreases the input image SNR. In this paper, we propose a new computational imaging approach for DFD estimation using focus changes during image integration to engineer the PSF. We capture input images with a higher SNR since we can control the PSF with a wide aperture setting unlike with a coded aperture. We confirm the effectiveness of the method through experimental comparisons with conventional DFD and the coded aperture approach.
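The two-image (near-/far-focused) formulation can be illustrated with a thin-lens forward model plus a brute-force depth lookup. The focal length, aperture, focus distances, and the grid-search estimator below are illustrative assumptions, not the paper's calibrated setup:

```python
import numpy as np

def coc_radius(d, d_focus, f=0.05, aperture=0.02):
    # Thin-lens circle-of-confusion radius (meters) for a point at depth d
    # when the camera is focused at d_focus; zero on the focus plane.
    v_img = 1.0 / (1.0 / f - 1.0 / d)            # image distance of the point
    v_sensor = 1.0 / (1.0 / f - 1.0 / d_focus)   # where the sensor sits
    return 0.5 * aperture * abs(v_sensor - v_img) / v_img

def depth_from_two_blurs(r_near, r_far, d_near=1.0, d_far=3.0):
    # Toy DFD estimator: search for the depth whose predicted blur radii
    # under the two focus settings best match the measured pair.
    candidates = np.linspace(0.5, 5.0, 2000)
    errs = [(coc_radius(d, d_near) - r_near) ** 2 +
            (coc_radius(d, d_far) - r_far) ** 2 for d in candidates]
    return candidates[int(np.argmin(errs))]
```

A real DFD system estimates the blur radii from the image pair first; here they are fed in directly to show why two focus settings disambiguate depth where one does not.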

8.
Depth from defocus is a common technique for recovering scene depth information. Traditional algorithms usually require several defocused images to be captured, which severely limits their practical use. This paper proposes a depth-recovery algorithm from a single defocused image based on local blur estimation. Under the assumption of locally uniform blur, a simple and effective two-step procedure recovers the depth information of the input image: 1) a sparse blur map at edge locations is obtained from the ratio of gradients between the input defocused image and a version re-blurred with a known Gaussian kernel; 2) the edge blur values are propagated to the whole image, recovering the complete relative depth information. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to suppress the ambiguities caused by color, texture, and the focal plane. Comparative experiments on various kinds of images show that the algorithm recovers depth information while effectively suppressing these ambiguities.
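The gradient-ratio step in this abstract rests on a closed-form relation: for an ideal step edge blurred by an unknown Gaussian sigma, re-blurring with a known sigma0 scales the peak gradient by r = sqrt(sigma^2 + sigma0^2)/sigma, so sigma = sigma0/sqrt(r^2 - 1). A minimal 1-D sketch (function names are ours, not the paper's):

```python
import numpy as np

def gauss_kernel(sigma):
    # Normalized 1-D Gaussian kernel truncated at ~4 sigma.
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def estimate_edge_sigma(signal, sigma0=1.0):
    # Re-blur with a known Gaussian, take the gradient ratio at the edge
    # (the gradient maximum), and invert the closed-form relation.
    reblurred = np.convolve(signal, gauss_kernel(sigma0), mode="same")
    g1 = np.abs(np.gradient(signal))
    g2 = np.abs(np.gradient(reblurred))
    i = int(np.argmax(g1))
    r = g1[i] / g2[i]
    return sigma0 / np.sqrt(max(r * r - 1.0, 1e-12))
```

The paper's second step would then propagate these sparse edge estimates over the whole image to form a relative depth map.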

9.
彭天奇, 禹晶, 肖创柏. 《自动化学报》 (Acta Automatica Sinica), 2022, 48(10): 2508-2525
Restoring a blurred image when the blur kernel is unknown is called blind deconvolution; it is an ill-posed inverse problem, and most existing algorithms constrain the solution space with various image priors. Because the cross-scale self-similarity of a sharp image is stronger than that of a blurred image, and a downsampled blurred image is more similar to the sharp image, this paper proposes a single-image blind deconvolution algorithm based on a cross-scale low-rank constraint. Exploiting cross-scale self-similarity, similar patches are searched in the downsampled image to form similar-patch groups, and a low-rank constraint is imposed on each group as a whole; this constraint is added as a regularizer to the objective function of blind deconvolution, forcing the edges of the reconstructed image to approach those of the sharp image. The algorithm needs no special treatment of noise: since the low-rank constraint better captures the global structure of the data, the blind deconvolution process is not disturbed by noise. Experiments on blurred and blurred-noisy images verify that the algorithm handles blind restoration with large blur kernels and is robust to noise.

10.
In this paper, we propose a MAP-Markov random field (MRF) based scheme for recovering the depth and the focused image of a scene from two defocused images. The space-variant blur parameter and the focused image of the scene are both modeled as MRFs and their MAP estimates are obtained using simulated annealing. The scheme is amenable to the incorporation of smoothness constraints on the spatial variations of the blur parameter as well as the scene intensity. It also allows for inclusion of line fields to preserve discontinuities. The performance of the proposed scheme is tested on synthetic as well as real data and the estimates of the depth are found to be better than those of the existing window-based depth from defocus technique. The quality of the space-variant restored image of the scene is quite good even under severe space-varying blurring conditions.

11.
Reducing the defocus blur that arises from the finite aperture size and short exposure time is an essential problem in computational photography. It is very challenging because the blur kernel is spatially varying and difficult to estimate by traditional methods. Due to their great breakthroughs in low-level tasks, convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and achieved significant progress. However, previous methods apply the same learned kernel to different regions of the defocus blurred images, making it difficult to handle nonuniform blur. To this end, this study designs a novel blur-aware multi-branch network (BaMBNet), in which different regions are treated differently. In particular, we estimate the blur amounts of different regions by the internal geometric constraint of the dual-pixel (DP) data, which measures the defocus disparity between the left and right views. Based on the assumption that image regions with different blur amounts have different deblurring difficulties, we leverage networks with different capacities to treat different image regions. Moreover, we introduce a meta-learning defocus mask generation algorithm to assign each pixel to a proper branch. In this way, we can expect to maintain the information of the clear regions well while recovering the missing details of the blurred regions. Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms the state-of-the-art (SOTA) methods. For the dual-pixel defocus deblurring (DPD)-blur dataset, the proposed BaMBNet achieves a 1.20 dB gain over the previous SOTA method in terms of peak signal-to-noise ratio (PSNR) and reduces learnable parameters by 85%. The details of the code and dataset are available at https://github.com/junjun-jiang/BaMBNet.

12.
Neural Processing Letters - Blind image deconvolution aims to estimate both a blur kernel and a sharp image from a blurry observation. It is not only a classical problem in image processing, but...

13.
The recovery of depth from defocused images involves calculating the depth of various points in a scene by modeling the effect that the focal parameters of the camera have on images acquired with a small depth of field. In the approach to depth from defocus (DFD), previous methods assume the depth to be constant over fairly large local regions and estimate the depth through inverse filtering by considering the system to be shift-invariant over those local regions. But a subimage analyzed in isolation introduces errors into the depth estimate. In this paper, we propose two new approaches for estimating the depth from defocused images. The first approach models the DFD system as a block shift-variant one and incorporates the interaction of blur among neighboring subimages in an attempt to improve the estimate of the depth. The second approach looks at the depth from defocus problem in the space-frequency representation framework. In particular, the complex spectrogram and the Wigner distribution are shown to be likely candidates for recovering the depth from defocused images. The performances of the proposed methods are tested on both synthetic and real images. The proposed methods yield good results, and the quality of their estimates is compared with that of the existing method.

14.
Depth from defocus is an important topic in computer vision. The degree of blur of a point in a defocused image varies with the object's depth, so defocused images can be used to estimate depth information; the method avoids the correspondence-matching problem of stereo and motion-based vision and thus has good application prospects. This paper studies a depth-estimation algorithm based on a defocus image space: defocused imaging is modeled as a heat-diffusion process, two defocused images are expanded into a defocus space by means of a deformation function, the deformation parameter is estimated, and the object's depth information is then recovered. Experiments verify the effectiveness of the algorithm.
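The heat-diffusion view of defocus used in this abstract can be made concrete in 1-D: running the heat equation to time t is equivalent to Gaussian blur with sigma = sqrt(2t), so sweeping the diffusion time traces out a defocus space. A minimal explicit-scheme sketch (the step sizes and periodic boundary are our assumptions):

```python
import numpy as np

def diffuse(signal, t, dt=0.1):
    # Explicit heat-equation stepping with periodic boundaries; diffusing
    # to time t matches a Gaussian blur of sigma = sqrt(2*t).
    u = signal.astype(float).copy()
    for _ in range(int(round(t / dt))):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # discrete Laplacian
        u += dt * lap                                  # stable for dt <= 0.5
    return u
```

In the paper's setting, the two observed defocused images correspond to two points along this diffusion axis, and the deformation parameter locating them encodes the depth.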

15.
Li Lin, Yu Xiaolei, Liu Zhenlu, Zhao Zhimin, Zhang Ke, Zhou Shanhao. Multimedia Tools and Applications, 2021, 80(21-23): 32149-32169

The dynamic non-uniform blur caused by Radio Frequency Identification (RFID) multi-label motion seriously affects the identification and location of labels. It is an ill-posed inverse problem since both the blur kernel and the sharp image are unknown, and traditional deblurring methods are very time-consuming. In this work, we propose a Multi-scale Recursive Codec Network based on the Authority Parameter (MRCN-AP) to deblur RFID multi-label images in a vision-based RFID multi-label 3D measurement system. The network is composed of a stack of three encoder-decoder subnets at different scales, which restores the blurry image in an end-to-end manner and effectively extracts detail edges at each scale from coarse to fine. The proposed authority parameters reduce the memory footprint of redundant network parameters and increase the speed of the deblurring network. We also contribute new large-scale RFID multi-label blur-sharp image pairs captured by a dual CCD camera, and implement the proposed model on this extended dataset. We show that our method reduces runtime by at least 0.55 s and increases Peak Signal to Noise Ratio (PSNR) by 2.43 dB. Moreover, the MRCN-AP deblurring network yields better visual quality for RFID multi-label images, which benefits subsequent positioning and optimization.


16.
We present user-controllable and plausible defocus blur for a stochastic rasterizer. We modify circle of confusion coefficients per vertex to express more general defocus blur, and show how the method can be applied to limit the foreground blur, extend the in-focus range, simulate tilt-shift photography, and specify per-object defocus blur. Furthermore, with two simplifying assumptions, we show that existing triangle coverage tests and tile culling tests can be used with very modest modifications. Our solution is temporally stable and handles simultaneous motion blur and depth of field.
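Under a thin-lens model, the signed circle of confusion is linear in inverse depth, coc(z) = a/z + b, which is what makes per-vertex evaluation (and editing) of the coefficients cheap in a rasterizer. Below is a sketch of the coefficients and of one effect the abstract lists, limiting foreground blur; the names and the simple clamping rule are our illustrative assumptions, not the paper's exact formulation:

```python
def coc_coefficients(aperture, focal_len, focus_dist):
    # Thin-lens signed circle of confusion: coc(z) = a/z + b, negative in
    # front of the focus plane, zero on it, positive behind it.
    s = aperture * focal_len / (focus_dist - focal_len)
    return -s * focus_dist, s   # (a, b)

def signed_coc(z, a, b):
    # Evaluated per vertex at view depth z.
    return a / z + b

def clamped_coc(z, a, b, max_near=0.01):
    # Limit how large the foreground (near-field, negative) blur may grow,
    # one of the per-vertex edits the paper enables.
    c = signed_coc(z, a, b)
    return max(c, -max_near) if c < 0 else c
```

Because the expression stays linear in 1/z, such edits amount to swapping coefficients per vertex rather than changing the rasterizer's coverage tests.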

17.
In image deblurring, salient edge structures play an important role in blur-kernel estimation. This paper proposes a blur-kernel estimation algorithm based on a deep encoder-decoder. First, the encoder-decoder is trained on a constructed dataset so that it adaptively extracts the salient edge structure of a blurred image. Next, the blur kernel is estimated from the salient edge structure and the blurred image using L2-norm regularization. Finally, the sharp image is estimated using a hyper-Laplacian prior and the estimated blur kernel. Unlike traditional methods, the proposed approach needs no multi-scale iteration framework. Experimental results show that it obtains good salient edge structures and sharp images while reducing computation time.

18.
Restoration of Mixed-Blur Images with a Partially Known Blur Kernel
Images captured in real environments are often degraded by several blur processes at once, which lowers imaging quality. This paper therefore proposes an image degradation model and a corresponding restoration algorithm. Unlike the traditional cascade formulation, the algorithm assumes the blur kernel is a weighted sum of defocus and motion blur. With the defocus component given, a generalized Laplacian distribution is used as the statistical model of the motion blur, and the blur kernel is estimated within the expectation-maximization (EM) framework; the estimated kernel is then used to restore the image. Experimental results show that the algorithm identifies the blur kernel fairly accurately and improves restoration quality.
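The paper's degradation model, a weighted sum of a defocus kernel and a motion kernel rather than their cascade (convolution), can be sketched directly; the kernel sizes, the horizontal motion direction, and the weight below are illustrative assumptions:

```python
import numpy as np

def gaussian_kernel2d(sigma, radius):
    # Normalized 2-D Gaussian, a common stand-in for defocus blur.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return k / k.sum()

def motion_kernel(length, radius):
    # Horizontal linear motion blur (illustrative direction).
    k = np.zeros((2 * radius + 1, 2 * radius + 1))
    k[radius, radius - length // 2: radius + length // 2 + 1] = 1.0
    return k / k.sum()

def mixed_kernel(sigma, length, w, radius=7):
    # Weighted-sum degradation model: w * defocus + (1 - w) * motion,
    # as opposed to convolving the two kernels in cascade.
    return w * gaussian_kernel2d(sigma, radius) + (1 - w) * motion_kernel(length, radius)
```

Both components are normalized, so the mixture also integrates to one and remains a valid blur kernel for any weight in [0, 1].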

19.
Objective: Defocus blur detection aims to distinguish sharp from blurred pixels in an image; it is widely used in many fields and is an important research direction in computer vision. For images with complex scenes, existing methods suffer from limited accuracy and incomplete boundaries in the detection results. This paper proposes a coarse-to-fine multi-scale defocus blur detection network that improves detection accuracy by fusing multi-layer convolutional features of the image at different scales. Method: The image is rescaled to several scales; a convolutional neural network extracts multi-layer convolutional features from the image at each scale, and convolutional layers fuse the corresponding features across scales. Convolutional long short-term memory (Conv-LSTM) layers integrate the blur features from top to bottom across scales while producing a blur detection map at each scale, progressively passing deep semantic information down to the shallow layers. In this process, deep and shallow features are combined, and the shallow features refine the detection result of the next deeper layer; a convolutional layer finally fuses the multi-scale detection results. A multi-layer supervision strategy during training ensures that each Conv-LSTM layer reaches its optimum. Results: The model was trained and tested on two public blur-detection datasets, DUT (Dalian University of Technology) and CUHK (The Chinese University of Hong Kong), and compared with ten algorithms including the current best methods BTBCRL (bottom-top-bottom network with cascaded defocus blur detection map residual learning), DeFusionNet (defocus blur detection network via recurrently fusing and refining multi-scale deep features), and DHDE (multi-scale deep and hand-crafted features for defocus estimation). On DUT, the model reduces MAE (mean absolute error) by 38.8% and raises F0.3 by 5.4% relative to DeFusionNet; on CUHK, it reduces MAE by 36.7% and raises F0.3 by 9.7% relative to the LBP (local binary pattern) algorithm. These comparisons fully validate the effectiveness of the proposed model. Conclusion: By fusing features across image scales and using Conv-LSTM layers to integrate deep semantic information and shallow detail information top-down, the proposed coarse-to-fine multi-scale method obtains more accurate defocus blur detection results across varied image scenes.

20.
This paper describes a fast rendering algorithm for verification of spectacle lens design. Our method simulates refraction corrections of astigmatism as well as myopia or presbyopia. Refraction and defocus are the main issues in the simulation. For refraction, our proposed method uses per-vertex basis ray tracing which warps the environment map and produces a real-time refracted image which is subjectively as good as ray tracing. Conventional defocus simulation was previously done by distribution ray tracing and a real-time solution was impossible. We introduce the concept of a blur field, which we use to displace every vertex according to its position. The blurring information is precomputed as a set of field values distributed to voxels which are formed by evenly subdividing the perspective projected space. The field values can be determined by tracing a wavefront from each voxel through the lens and the eye, and by evaluating the spread of light at the retina considering the best human accommodation effort. The blur field is stored as texture data and referred to by the vertex shader that displaces each vertex. With an interactive frame rate, blending the multiple rendering results produces a blurred image comparable to distribution ray tracing output.  相似文献   
