Similar Documents
20 similar documents found.
1.
A new model (called multi-component blurring, or MCB) to account for image blurring effects due to depth discontinuities is presented. We show that blurring processes operating in the vicinity of large depth discontinuities can give rise to emergent image details, quite distinguishable but nevertheless unexplained by previously available blurring models. In other words, the maximum principle for scale space [1] does not hold. It is argued that blurring in high-relief 3D scenes should be more accurately modelled as a multi-component process. We present results from extensive and carefully designed experiments, with many images of real scenes taken by a CCD camera with typical parameters. These results have consistently supported our new blurring model. Due care was taken to ensure that the image phenomena observed are mainly due to defocusing and not due to mutual illumination [2], specularity [3], objects' finer structures, coherent diffraction, or incidental image noise [4]. We also hypothesize on the role of blurring in human depth-from-blur perception, based on correlation with recent results from human blur perception [5].
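The scale-space maximum principle invoked above can be checked numerically. The sketch below (illustrative only, not the MCB model itself) blurs a 1-D signal with a Gaussian and confirms that, unlike the multi-component case the abstract describes, ordinary single-component smoothing does not create new local extrema:

```python
import numpy as np

def count_local_maxima(f):
    """Count strict interior local maxima of a 1-D signal."""
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

def gaussian_blur_1d(f, sigma):
    """Blur a 1-D signal with a truncated, normalized Gaussian kernel."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(f, k, mode="same")

rng = np.random.default_rng(0)
signal = rng.standard_normal(512)
blurred = gaussian_blur_1d(signal, sigma=3.0)
# Single-component Gaussian smoothing in 1-D never creates new extrema,
# so the count can only drop.
print(count_local_maxima(signal), count_local_maxima(blurred))
```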

2.
In auto-stereoscopic multi-view displays, blurring occurs because views are incompletely separated at non-zero depths. How this blur affects a 3D image was investigated using a commercial multi-view 3D display. The 3D input signal consisted of a square pattern and gratings of various widths and gray levels G1 and G2; various combinations of G1 and G2 were used to investigate the dependence of blur on their values. Depth caused blurring, which decreased contrast modulation. Hence, the 3D resolution determined from contrast modulation was affected by depth and worsened with increasing depth; 3D resolution may therefore be used to define the depth range within which image degradation due to blurring is acceptable. Blur edge width values measured at the boundaries between gray levels G1 and G2 were found to be similar irrespective of the G1 and G2 values at the same depth, because the blur is caused by the incomplete separation of views, which is independent of G1 and G2. Hence, the blurriness of the observed 3D image is determined only by the depth. The 3D resolution and blur edge width may be useful for characterizing the performance of auto-stereoscopic multi-view 3D displays.

3.
彭天奇, 禹晶, 肖创柏. 《自动化学报》(Acta Automatica Sinica), 2022, 48(10): 2508-2525
Restoring a blurred image when the blur kernel is unknown is known as blind deconvolution, an ill-posed inverse problem; most existing blind deconvolution algorithms constrain the solution space with various image priors. Because the cross-scale self-similarity of a sharp image is stronger than that of a blurred image, and a downsampled blurred image is more similar to the sharp image, this paper proposes a single-image blind deconvolution algorithm based on a cross-scale low-rank constraint. Exploiting cross-scale self-similarity, similar patches are searched in the downsampled image to form groups of similar patches, and a low-rank constraint is imposed on each group as a whole; this constraint is added to the blind-deconvolution objective as a regularization term, forcing the edges of the reconstructed image toward those of the sharp image. The algorithm requires no special treatment of noise: because the low-rank constraint captures the global structure of the data, the blind deconvolution process is shielded from noise interference. Experiments on blurred and blurred-noisy images verify that the algorithm handles blind restoration with large blur kernels and is robust to noise.

4.
A method is presented for detecting blurred edges in images and for estimating the following edge parameters: position, orientation, amplitude, mean value, and edge slope. The method is based on a local image decomposition technique called a polynomial transform. The information that is made explicit by the polynomial transform is well suited to detect image features, such as edges, and to estimate feature parameters. By using the relationship between the polynomial coefficients of a blurred feature and those of the a priori assumed (unblurred) feature in the scene, the parameters of the blurred feature can be estimated. The performance of the proposed edge parameter estimation method in the presence of image noise has been analyzed. An algorithm is presented for estimating the spread of a position-invariant Gaussian blurring kernel, using estimates at different edge locations over the image. First a single-scale algorithm is developed in which one polynomial transform is used. A critical parameter of the single-scale algorithm is the window size, which has to be chosen a priori. Since the reliability of the estimate for the spread of the blurring kernel depends on the ratio of this spread to the window size, it is difficult to choose a window of appropriate size a priori. The problem is overcome by a multiscale blur estimation algorithm where several polynomial transforms at different scales are applied, and the appropriate scale for analysis is chosen a posteriori. By applying the blur estimation algorithm to natural and synthetic images with different amounts of blur and noise, it is shown that the algorithm gives reliable estimates for the spread of the blurring kernel even at low signal-to-noise ratios.

5.
Existing frequency-domain methods of parameter identification for uniform linear motion blur (ULMB) images usually deal with special scenarios: blur-kernel directions that are horizontal or vertical, or degraded images of square dimensions. This prevents those identification methods from being applied to real images, especially for estimating undersized or oversized blur kernels. To overcome these limitations of blur-kernel identification, discrete Fourier transform (DFT)-based blur-kernel estimation methods are proposed in this paper. We analyze in depth the Fourier frequency response of generalized ULMB kernels, examine in detail its phase form and properties, and put forward the concept of the quasi-cepstrum. On this basis, methods of estimating ULMB-kernel parameters using the amplitude spectrum and the quasi-cepstrum are presented, respectively. The quasi-cepstrum-based approach increases the identifiable blur-kernel length up to a maximum of half the diagonal length of the image. Meanwhile, for images with undersized ULMB, an improved method based on the quasi-cepstrum is presented, which improves the identification quality of undersized ULMB kernels. The quasi-cepstrum-based approach extends DFT theory, previously confined to simulation experiments, to the estimation of real ULMB images. Compared with the amplitude-spectrum-based method, the quasi-cepstrum-based approach is more convenient and robust, with lower identification errors and better noise immunity.
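The principle behind cepstral blur-length identification can be sketched in 1-D: the log-magnitude spectrum of a uniform blur kernel is periodic, so its cepstrum carries a strong negative peak at the kernel length. This is a textbook real-cepstrum sketch, not the paper's quasi-cepstrum; the sizes N and L are arbitrary choices:

```python
import numpy as np

# The log-magnitude spectrum of a uniform linear motion-blur (ULMB) kernel of
# length L is periodic with period N/L, so its cepstrum shows a pronounced
# negative peak at quefrency L. Locating that peak recovers the kernel length.
N, L = 256, 21
h = np.zeros(N)
h[:L] = 1.0 / L                          # ULMB kernel along one axis
log_mag = np.log(np.abs(np.fft.fft(h)) + 1e-12)
cepstrum = np.fft.ifft(log_mag).real
quefrency = np.arange(N)
band = slice(4, N // 2)                  # skip the low-quefrency region
L_est = int(quefrency[band][np.argmin(cepstrum[band])])
print(L_est)                             # near the true length L
```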

6.
吴章平, 刘本永. 《计算机应用》(Journal of Computer Applications), 2016, 36(4): 1111-1114
To restore defocus-blurred images, a method for estimating the defocus blur parameter is proposed that combines the average gray-level gradient with particle swarm optimization (PSO). First, the PSO algorithm randomly generates a swarm of point spread functions with different blur radii; each is used with Wiener filtering to process the blurred image, yielding a set of restored images whose average gray-level gradients are computed. Then, exploiting the fact that image sharpness increases with the average gray-level gradient, the average gray-level gradient of the restored image serves as the PSO fitness value, and the blur radius of the particle that maximizes the fitness is taken as the final estimate. Experimental results show that, compared with spectrum-based and cepstrum-based estimation, the proposed algorithm estimates the blur parameter more accurately, especially for large blur radii.
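The PSO search loop itself can be sketched generically. In the sketch below a toy quadratic fitness stands in for the average gray-level gradient of the Wiener-restored image, and all swarm constants (inertia 0.7, acceleration 1.5) are conventional defaults, not values from the paper:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) over a 1-D parameter (the blur
# radius). The toy fitness peaks at radius 5.0; in the paper the fitness is
# the average gray-level gradient of the Wiener-filtered restoration.
def fitness(r):
    return -(r - 5.0) ** 2

rng = np.random.default_rng(1)
n, iters = 20, 60
pos = rng.uniform(0.0, 20.0, n)          # candidate blur radii
vel = np.zeros(n)
pbest = pos.copy()                       # per-particle best position
gbest = pos[np.argmax(fitness(pos))]     # swarm-wide best position

for _ in range(iters):
    r1, r2 = rng.random(n), rng.random(n)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    better = fitness(pos) > fitness(pbest)
    pbest = np.where(better, pos, pbest)
    gbest = pbest[np.argmax(fitness(pbest))]

print(gbest)                             # converges near 5.0
```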

7.
This paper presents an algorithm for a dense computation of the difference in blur between two images. The two images are acquired by varying the intrinsic parameters of the camera. The image formation system is assumed to be passive. Estimation of depth from the blur difference is straightforward. The algorithm is based on a local image decomposition technique using the Hermite polynomial basis. We show that any coefficient of the Hermite polynomial computed using the more blurred image is a function of the partial derivatives of the other image and the blur difference. Hence, the blur difference is computed by resolving a system of equations. The resulting estimation is dense and involves simple local operations carried out in the spatial domain. The mathematical developments underlying estimation of the blur in both 1D and 2D images are presented. The behavior of the algorithm is studied for constant images, step edges, line edges, and junctions. The selection of its parameters is discussed. The proposed algorithm is tested using synthetic and real images. The results obtained are accurate and dense. They are compared with those obtained using an existing algorithm.

8.
In this paper, we propose a MAP-Markov random field (MRF) based scheme for recovering the depth and the focused image of a scene from two defocused images. The space-variant blur parameter and the focused image of the scene are both modeled as MRFs and their MAP estimates are obtained using simulated annealing. The scheme is amenable to the incorporation of smoothness constraints on the spatial variations of the blur parameter as well as the scene intensity. It also allows for inclusion of line fields to preserve discontinuities. The performance of the proposed scheme is tested on synthetic as well as real data and the estimates of the depth are found to be better than that of the existing window-based depth from defocus technique. The quality of the space-variant restored image of the scene is quite good even under severe space-varying blurring conditions.
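MAP estimation by simulated annealing can be sketched on a toy 1-D field: a quadratic data term plus a smoothness prior, annealed over discrete labels. This is a minimal illustration (no line fields, 1-D instead of 2-D, invented constants), not the paper's full scheme:

```python
import numpy as np

# Toy MAP-MRF estimation by simulated annealing: recover a piecewise-constant
# blur-parameter field from noisy observations, with a quadratic data term
# and a smoothness prior on neighboring sites (no line fields in this sketch).
rng = np.random.default_rng(2)
true_field = np.repeat([1.0, 3.0], 32)           # two depth layers
obs = true_field + 0.3 * rng.standard_normal(64)
labels = np.linspace(0.0, 4.0, 9)                # discrete blur-parameter labels

def energy(f):
    data = np.sum((f - obs) ** 2)                # data (likelihood) term
    smooth = 0.5 * np.sum(np.abs(np.diff(f)))    # smoothness prior
    return data + smooth

f = labels[rng.integers(0, len(labels), 64)]     # random initialization
E0 = energy(f)                                   # energy before annealing
E = E0
T = 2.0
for sweep in range(200):
    for i in range(64):
        cand = f.copy()
        cand[i] = labels[rng.integers(0, len(labels))]
        dE = energy(cand) - E
        if dE < 0 or rng.random() < np.exp(-dE / T):
            f, E = cand, E + dE
    T *= 0.97                                    # geometric cooling schedule

print(E)                                         # far below the initial energy
```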

9.
Based on the fact that a good restored image tends toward a sharp image rather than a blurred one, an effective blind image deblurring method based on multiple priors is proposed. Current state-of-the-art deblurring methods restore certain scenes unsatisfactorily, leaving residual blur with poorly rendered contours and details. To address this, multiple priors, including the dark-channel prior, an intensity-image prior, and a gradient-image prior, are combined and weighted so that more prior information about contours and details is available during restoration. These priors are placed in a MAP framework, the blur kernel is estimated by iteration, and a non-blind restoration method then recovers the original image. Across diverse natural scenes, the proposed method improves both contours and details compared with current advanced methods.

10.
Edge and Depth from Focus
This paper proposes a novel method to obtain reliable edge and depth information by integrating a set of multi-focus images, i.e., a sequence of images taken by systematically varying a camera parameter (focus). In previous work on depth measurement using focusing or defocusing, the accuracy depends on the size and location of the local windows in which the amount of blur is measured. In contrast, no windowing is needed in our method; the blur is evaluated from the intensity change along corresponding pixels in the multi-focus images. Such a blur analysis enables us not only to detect edge points without spatial differentiation but also to estimate depth with high accuracy. In addition, the analysis is stable because the proposed method involves integral computations such as summation and least-squares model fitting. This paper first discusses the fundamental properties of multi-focus images based on a step-edge model. Then, two algorithms are presented: edge detection using an accumulated defocus image, which represents the spatial distribution of blur, and depth estimation using a spatio-focal image, which represents the intensity distribution along the focus axis. The experimental results demonstrate highly precise measurement: 0.5 pixel position fluctuation in edge detection and 0.2% error at 2.4 m in depth estimation.
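The underlying depth-from-focus principle (the in-focus frame maximizes a sharpness measure) can be sketched with a synthetic 1-D focus stack; the blur model and constants here are invented for illustration, and the paper's window-free accumulation is replaced by a simple global sharpness score:

```python
import numpy as np

# Depth from focus in 1-D: synthesize a focus stack by blurring a textured
# signal with a spread proportional to |focus - true_focus|, then pick the
# focus index that maximizes a sharpness measure (sum of squared gradients).
rng = np.random.default_rng(3)
texture = rng.standard_normal(256)
true_focus = 7                                   # index of the in-focus frame

def blur(f, sigma):
    if sigma < 1e-6:
        return f.copy()
    r = max(1, int(4 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(f, k / k.sum(), mode="same")

stack = [blur(texture, 0.8 * abs(i - true_focus)) for i in range(15)]
sharpness = [np.sum(np.gradient(s) ** 2) for s in stack]
best = int(np.argmax(sharpness))
print(best)                                      # equals true_focus
```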

11.
The recovery of depth from defocused images involves calculating the depth of various points in a scene by modeling the effect that the focal parameters of the camera have on images acquired with a small depth of field. In the existing methods on depth from defocus (DFD), two defocused images of a scene are obtained by capturing the scene with different sets of camera parameters. Although the DFD technique is computationally simple, the accuracy is somewhat limited compared to the stereo algorithms. Further, an arbitrary selection of the camera settings can result in observed images whose relative blurring is insufficient to yield a good estimate of the depth. In this paper, we address the DFD problem as a maximum likelihood (ML) based blur identification problem. We carry out performance analysis of the ML estimator and study the effect of the degree of relative blurring on the accuracy of the estimate of the depth. We propose a criterion for optimal selection of camera parameters to obtain an improved estimate of the depth. The optimality criterion is based on the Cramer-Rao bound of the variance of the error in the estimate of blur. A number of simulations as well as experimental results on real images are presented to substantiate our claims.

12.
Image blur refers to the loss of sharpness and detail during image capture or transmission, caused by lens or camera motion, lighting conditions, and other factors, which degrades image quality and usability. Image deblurring aims to remove this degradation by building computational models that measure the blur in an image and automatically predict the deblurred sharp image. Research on deblurring not only benefits other computer vision tasks but also supports everyday applications such as security surveillance. This survey: 1) reviews the development of the field and analyzes influential algorithms in both blind and non-blind image deblurring; 2) discusses common causes of image blur and quality-assessment methods for deblurred images; 3) describes the basic ideas of traditional methods and deep-learning methods, surveying the literature on both non-blind and blind deblurring, where the deep-learning approaches include methods based on convolutional neural networks, recurrent neural networks, generative adversarial networks, and Transformers; 4) briefly introduces the common datasets of the field and compares the performance of representative deblurring algorithms; 5) examines the challenges facing image deblurring and offers an outlook on future research directions.

13.
Depth from defocus is a common approach to recovering scene depth information. Traditional defocus-based ranging algorithms usually require several defocused images, which severely limits practical use. This paper proposes a depth-recovery algorithm for a single defocused image based on local blur estimation. Under the assumption of local blur consistency, a simple and effective two-step procedure recovers the depth of the input image: 1) a sparse blur map at edge locations is obtained from the ratio of gradients between the input defocused image and a version re-blurred with a known Gaussian kernel; 2) the blur values at edge locations are propagated to the whole image, recovering the complete relative depth. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to suppress ambiguities caused by color, texture, and the focal plane. Comparative experiments on various types of images show that the algorithm recovers depth while effectively suppressing these ambiguities.

14.
15.
As a carrier of human communication, an image contains many information elements. During acquisition, camera shake, object displacement, and similar causes produce motion blur that prevents the image from conveying its information correctly. Motion deblurring restores such degraded images and is a current focus of computer vision and image processing. This survey classifies motion deblurring methods into two categories according to whether the blur kernel is known, explains the concept of motion deblurring in detail, reviews recent methods and the state of research, summarizes the strengths and weaknesses of each approach and compares experimental results of classical methods, analyzes key issues such as blur-kernel estimation and blur datasets with suggestions for future directions, and concludes with an outlook on trends in motion deblurring.

16.
Real-time focus range sensor
Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth from defocus offers an effective solution to fast and dense range estimation. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems including recovery of textureless surfaces, precise blur estimation, and magnification variations caused by defocusing. Both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to maximize accuracy and spatial resolution in computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing, and computational elements of the depth from defocus system. Defocus invariant magnification is achieved by the use of an additional aperture in the imaging optics. A prototype focus range sensor has been developed that has a workspace of 1 cubic foot and produces up to 512×480 depth estimates at 30 Hz with an average RMS error of 0.2%. Several experimental results are included to demonstrate the performance of the sensor.

17.
Owing to recent advances in depth sensors and computer vision algorithms, depth images are often available with co-registered color images. In this paper, we propose a simple but effective method for obtaining an all-in-focus (AIF) color image from a database of color and depth image pairs. Since the defocus blur is inherently depth-dependent, the color pixels are first grouped according to their depth values. The defocus blur parameters are then estimated using the amount of the defocus blur of the grouped pixels. Given a defocused color image and its estimated blur parameters, the AIF image is produced by adopting the conventional pixel-wise mapping technique. In addition, the availability of the depth image disambiguates the objects located far or near from the in-focus object and thus facilitates image refocusing. We demonstrate the effectiveness of the proposed algorithm using both synthetic and real color and depth images.

18.
A novel technique for three-dimensional depth recovery based on two coaxial defocused images of an object with added pattern illumination is presented. The approach integrates object segmentation with depth estimation. First, segmentation is performed by a multiresolution-based approach to isolate object regions from the background in the presence of blur and pattern illumination. The segmentation has three sub-procedures: image pyramid formation, linkage adaptation, and unsupervised clustering. These maximise the object recognition capability while ensuring accurate position information. For depth estimation, lower-resolution information with a strong correlation to depth is fed into a three-layered neural network as input feature vectors and processed using a back-propagation algorithm. The resulting depth model of object recovery is then used with higher-resolution data to obtain high-accuracy depth measurements. Experimental results are presented that show low error rates and the robustness of the model with respect to pattern variation and inaccuracy in optical settings.

19.
Objective: Defocus blur detection, which distinguishes sharp from blurred pixels in an image, is widely applied and is an important research direction in computer vision. For images with complex scenes, existing methods suffer from limited accuracy and incomplete boundaries in the detection results. This paper proposes a coarse-to-fine multi-scale defocus blur detection network that improves detection accuracy by fusing multi-layer convolutional features of the image at different scales. Method: The image is rescaled to several scales; a convolutional neural network extracts multi-layer features from the image at each scale, and convolutional layers fuse the corresponding-layer features across scales. Convolutional long short-term memory (Conv-LSTM) layers integrate the blur features top-down across scales while generating a blur detection map at each scale, progressively passing deep semantic information to the shallow layers; in this process, deep and shallow features are combined so that shallow features refine the detection results of the deeper layer, and a convolutional layer fuses the multi-scale results into the final output. A multi-layer supervision strategy during training ensures that each Conv-LSTM layer reaches its optimum. Results: The model was trained and tested on two public blur detection datasets, DUT (Dalian University of Technology) and CUHK (The Chinese University of Hong Kong), and compared with ten algorithms, including the current best detectors BTBCRL (bottom-top-bottom network with cascaded defocus blur detection map residual learning), DeFusionNet (defocus blur detection network via recurrently fusing and refining multi-scale deep features), and DHDE (multi-scale deep and hand-crafted features for defocus estimation). On DUT, relative to DeFusionNet, MAE (mean absolute error) decreased by 38.8% and F0.3 increased by 5.4%; on CUHK, relative to the LBP (local binary pattern) algorithm, MAE decreased by 36.7% and F0.3 increased by 9.7%. These comparisons fully validate the effectiveness of the proposed model. Conclusion: By fusing features across image scales and using Conv-LSTM layers to integrate deep semantic and shallow detail information top-down, the proposed coarse-to-fine multi-scale method yields more accurate defocus blur detection across varied image scenes.
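For contrast with the learned multi-scale features described above, a hand-crafted baseline can be sketched: score each pixel by local gradient energy and threshold. All constants (window radius, median threshold, image sizes) are illustrative choices, not from the paper:

```python
import numpy as np

# A non-learned baseline for defocus blur detection: score each pixel by local
# gradient energy and threshold. Deep multi-scale models (as in the paper)
# replace this hand-crafted measure with learned features.
def box_mean(a, r):
    """Local mean over a (2r+1)^2 window via separable convolution."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    a = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, a)

rng = np.random.default_rng(4)
img = rng.standard_normal((64, 64))
img[:, 32:] = box_mean(img[:, 32:], 3)        # defocus-blur the right half

gy, gx = np.gradient(img)
score = box_mean(gx**2 + gy**2, 3)            # local gradient energy
mask = score > np.median(score)               # True = predicted sharp
print(mask[:, :32].mean(), mask[:, 32:].mean())
```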

20.
To improve the sharpness of restored defocus-blurred images, a blind restoration algorithm for defocus blur based on spectrum preprocessing and an improved Hough transform is proposed. First, the spectrum preprocessing strategy for the blurred image is improved, reducing the influence of noise on the detection of the dark circles at the spectral zeros. Then the Hough circle detection algorithm is improved, lowering computational complexity while increasing the accuracy of blur-radius estimation. Finally, an image restoration model with mixed-property regularization iteratively restores the blurred image, sharpening its edge details. Experiments show that the proposed blur-radius estimation has smaller average error than other methods, the improved spectrum preprocessing better supports detection of the dark zero circles, the improved Hough circle detection estimates the blur radius more accurately, and the algorithm restores defocused images captured by a small UAV with a camera known to be out of focus particularly well. For defocused image restoration, theoretical analysis and experiments confirm the robustness of the improved blur-radius estimation and the good restoration quality of the proposed algorithm.
