Found 20 similar documents; search time: 296 ms
1.
Yuille A, Coughlan JM, Konishi S. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 2003, 20(1):24-31
We address the visual ambiguities that arise in estimating object and scene structure from a set of images when the viewpoint and lighting are unknown. We obtain a novel viewpoint-lighting ambiguity called the KGBR that corresponds to a group of three-dimensional affine transformations on the object or scene geometry combined with transformations on the object or scene albedo. Our analysis assumes orthographic projection with an affine camera model. We include photometric cues, such as shadowing and shading, that we model using Lambertian reflectance functions with shadows (cast and attached) and multiple light sources (but no interreflections). We relate the KGBR to affine ambiguities in estimating shape and to the generalized bas-relief (GBR) ambiguity.
2.
Quantitative analysis of error bounds in the recovery of depth from defocused images
Rajagopalan AN, Chaudhuri S, Chellappa R. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 2000, 17(10):1722-1731
Depth from defocus involves estimating the relative blur between a pair of defocused images of a scene captured with different lens settings. When a priori information about the scene is available, it is possible to estimate the depth even from a single image. However, experimental studies indicate that the depth estimate improves with multiple observations. We provide a mathematical underpinning to this evidence by deriving and comparing the theoretical bounds for the error in the estimate of blur corresponding to the case of a single image and for a pair of defocused images. A new theorem is proposed that proves that the Cramér-Rao bound on the variance of the error in the estimate of blur decreases with an increase in the number of observations. The difference in the bounds turns out to be a function of the relative blurring between the observations. Hence one can indeed get better estimates of depth from multiple defocused images compared with those using only a single image, provided that these images are differently blurred. Results on synthetic as well as real data are given to further validate the claim.
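The relative blur this abstract refers to can be illustrated with a toy 1-D estimator. The Gaussian-PSF model, the frequency mask, and all parameter values below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    # Blur via frequency-domain multiplication with a Gaussian OTF.
    n = len(signal)
    f = np.fft.fftfreq(n)
    otf = np.exp(-2.0 * (np.pi * f * sigma) ** 2)
    return np.fft.ifft(np.fft.fft(signal) * otf).real

def estimate_relative_blur(img1, img2, eps=1e-8):
    # For Gaussian blurs s1 < s2, the power-spectrum ratio satisfies
    # log(|I2|^2 / |I1|^2) = -4 pi^2 (s2^2 - s1^2) f^2,
    # so a least-squares fit of the log-ratio against f^2 recovers the
    # relative blur sqrt(s2^2 - s1^2).
    n = len(img1)
    f = np.fft.fftfreq(n)
    p1 = np.abs(np.fft.fft(img1)) ** 2 + eps
    p2 = np.abs(np.fft.fft(img2)) ** 2 + eps
    mask = (np.abs(f) > 0) & (np.abs(f) < 0.2)   # skip DC and high freqs
    ratio = np.log(p2[mask] / p1[mask])
    slope = np.sum(ratio * f[mask] ** 2) / np.sum(f[mask] ** 4)
    return np.sqrt(max(-slope / (4.0 * np.pi ** 2), 0.0))
```

In line with the paper's point, adding noise would make a single such estimate unreliable, while averaging estimates from several differently blurred pairs tightens it.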
3.
It has been shown many times that using different versions of a scene perturbed with different blurs improved the quality of a restored image compared with using a single blurred image. We focus on large defocus blurs, and we first consider a case in which two different blurring kernels are used. We analyze with numerical simulations the influence of the relative diameter of both kernels on the quality of restoration. We then quantitatively evaluate how the two-kernel approach improves the robustness of restoration to a difference between the kernels used in designing the algorithm and the actual kernels that have perturbed the image. We finally show that using three different kernels may not improve the restoration performance compared with the two-kernel approach but still improves the robustness to kernel estimation.
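The two-kernel advantage can be sketched as a joint Wiener-style deconvolution in 1-D; the circular-convolution model and the noise-to-signal constant are assumptions for illustration, not the simulation setup of the paper:

```python
import numpy as np

def blur_1d(x, h):
    # Circular 1-D blur: multiply spectra (kernel zero-padded to len(x)).
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))).real

def multi_frame_wiener(observations, kernels, nsr=1e-3):
    # Joint Wiener-style deconvolution of several observations of the
    # same scene, each blurred by a different known kernel.  Frequencies
    # suppressed by one kernel survive in the other, which is the root
    # of the two-kernel advantage discussed above.
    n = len(observations[0])
    num = np.zeros(n, dtype=complex)
    den = np.full(n, nsr, dtype=float)
    for y, h in zip(observations, kernels):
        H = np.fft.fft(h, n)
        num += np.conj(H) * np.fft.fft(y)
        den += np.abs(H) ** 2
    return np.fft.ifft(num / den).real
```

With two boxcar kernels of different lengths, the spectral nulls of one fall where the other still passes energy, so the joint estimate beats either single-frame restoration.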
4.
This paper studies the restoration of a single image blurred by radial motion with space-variant line spread functions (SVLSFs). First, models of SVLSFs are derived, including the focus of expansion or contraction with constant velocity, acceleration, and deceleration. Then, a rectangular-to-polar lattice transformation is discussed to simplify the restoration process. Finally, we demonstrate the restoration of such motion-blurred images, generated by computer or taken from real scenery, to validate the proposed methods.
5.
Mukaigawa Y, Ishii Y, Shakunaga T. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 2007, 24(10):3326-3334
We propose a method for analyzing photometric factors, such as diffuse reflection, specular reflection, attached shadow, and cast shadow. For analyzing real images, we utilize the photometric linearization method, which was originally proposed for image synthesis. First, we show that each pixel can be photometrically classified by a simple comparison of the pixel intensity. Our classification algorithm requires neither 3D shape information nor color information of the scene. Then, we show that the accuracy of the photometric linearization can be improved by introducing a new classification-based criterion to the linearization process. Experimental results show that photometric factors can be correctly classified without any special devices. A further experiment shows that the proposed method is effective for photometric stereo.
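The comparison-based classification can be caricatured per pixel as follows; the fixed tolerance and the exact rule are illustrative assumptions (the paper's criterion compares against the photometrically linearized images rather than a threshold):

```python
def classify_pixel(observed, lambertian, tol=0.1):
    # Classify one pixel by comparing its observed intensity with the
    # Lambertian (linearized) prediction max(0, n . s):
    #   n.s <= 0            -> attached shadow (surface faces away)
    #   observed << n.s     -> cast shadow (light blocked by geometry)
    #   observed >> n.s     -> specular reflection
    #   otherwise           -> diffuse reflection
    if lambertian <= 0:
        return "attached_shadow"
    if observed < (1 - tol) * lambertian:
        return "cast_shadow"
    if observed > (1 + tol) * lambertian:
        return "specular"
    return "diffuse"
```

Note that the rule uses only intensities, matching the abstract's claim that neither 3D shape nor color information is required.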
6.
Journal of Modern Optics, 2013, 60(9):1231-1236
The unconstrained single deblurring filter for coherent optical restoration of a blurred image is produced in a modified Rayleigh interferometer with the blurred point spread function (PSF) h(x, y) and the doubly blurred PSF h₂(x, y) = h(x, y) ∗ h(x, y), where ∗ denotes convolution. Linear-motion-blurred images and defocus-blurred images are corrected with the present holographic filter, and it is shown that the restored images are significantly improved.
7.
A. H. Lettington, M. P. Rollason, S. Tzimopoulou, E. Boukouvala. Journal of Modern Optics, 2013, 60(5):931-938
It is known that the distribution of intensity gradients along separate horizontal and vertical directions in an image of a general scene often has a sharp peak with a long tail. This property can be described by a Lorentzian probability function and is the basis of an efficient nonlinear one-dimensional restoration algorithm, which can also superresolve a two-dimensional separable image. This paper discusses the gradient distribution in a general two-dimensional image and shows that the distribution of the maximum gradient at any picture point is also Lorentzian. This has been used to develop an iterative two-dimensional restoration algorithm. It starts by evaluating the likelihood of the intensity gradients within a Wiener-filtered image. Then a nonlinear correction term is introduced which increases this likelihood under a mean-square-error criterion. The method is applied to synthetic images and to a 94 GHz passive millimetre-wave image. This new two-dimensional method is shown to be superior to the previous one-dimensional algorithm, which had to be applied separately along two orthogonal directions.
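The gradient-likelihood quantity that such an iteration would increase can be sketched directly; dropping the normalising constants is an assumption for illustration:

```python
import numpy as np

def lorentzian_neg_log_likelihood(gradients, scale):
    # Negative log-likelihood of intensity gradients under a Lorentzian
    # (Cauchy) density p(g) proportional to 1 / (1 + (g / scale)^2);
    # additive constants are dropped.  Lower values mean the gradient
    # field is more plausible under the sharp-peak, long-tail prior.
    g = np.asarray(gradients, dtype=float)
    return float(np.sum(np.log1p((g / scale) ** 2)))
```

A restoration loop in the spirit described above would start from a Wiener-filtered image and accept correction terms that decrease this quantity; that loop is omitted here.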
8.
9.
The Lorentzian nonlinear restoration algorithm is based on the empirical observation that the gradients in a general image have a Lorentzian probability density. In this paper an attempt is made to justify this observation by modelling the scene as a series of randomly sized features, each with a random change in intensity across it. The high spatial frequency content of an image exists largely at edges and sharp features and is often lost due to aberrations in the optics of imaging systems. Nonlinear image restoration may be used to recover these frequencies, the most effective methods being constrained ones. In many nonlinear restoration techniques the amount of high-spatial-frequency content introduced into the restored image is uncontrolled. This problem has been overcome through the use of the Lorentzian algorithm, which imposes a statistical constraint on the distribution of gradients within the restored image. Images are presented to demonstrate the effectiveness of this method.
10.
Harnessing defocus blur to recover high-resolution information in shape-from-focus technique
Traditional shape-from-focus (SFF) uses focus as the singular cue to derive the shape profile of a 3D object from a sequence of images. However, the stack of low-resolution (LR) observations is space-variantly blurred because of the finite depth of field of the camera. The authors propose to exploit the defocus information in the stack of LR images to obtain a super-resolved image as well as a high-resolution (HR) depth map of the underlying 3D object. Appropriate observation models are used to describe the image formation process in SFF. Local spatial dependencies of the intensities of pixels and their depth values are accounted for by modelling the HR image and the HR structure as independent Markov random fields. Taking as input the LR images from the stack and the LR depth map, the authors first obtain the super-resolved image of the 3D specimen and use it subsequently to reconstruct a HR depth profile of the object.
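The baseline SFF step that this work builds on can be sketched as a per-pixel argmax of a focus measure over the stack; the squared-Laplacian measure and the wrap-around boundary handling are common choices assumed here, and the paper's MRF super-resolution stage is omitted:

```python
import numpy as np

def depth_from_focus(stack):
    # Classic shape-from-focus: for each pixel, pick the slice of the
    # focal stack that maximises a local focus measure (here, the
    # squared discrete Laplacian).  The returned index map is the
    # low-resolution depth estimate that the paper then refines.
    focus = []
    for img in stack:
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        focus.append(lap ** 2)
    return np.argmax(np.stack(focus), axis=0)
```

A sharply textured slice yields a large Laplacian response, so in-focus planes win the per-pixel argmax.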
11.
Many images are not sharp and clear, owing to causes such as noise interference, and are then said to be blurred. Image de-blurring is fundamental to making pictures sharp and useful. Normally, along with the input blurred image, the point spread function (PSF) of the original image is required for restoration and de-blurring. In this paper, we introduce a technique for image restoration by the Richardson–Lucy algorithm in which an optimised PSF is generated by a genetic algorithm (GA). Use of an optimised PSF means that the proposed technique does not need the original image for de-blurring, which can be greatly beneficial in real-time scenarios. The dataset used for evaluation consists of real 3D images, and the evaluation metrics are peak signal-to-noise ratio (PSNR), second-derivative-like measure of enhancement (SDME) and mean squared error (MSE). The technique is compared with existing techniques such as the de-convolution method, the regularisation filter, the Wiener filter and the Richardson–Lucy algorithm. From the results, we observe that the proposed technique achieves higher PSNR and SDME values and lower MSE values than the other techniques: on average, a PSNR of 70.94, an SDME of 71.46 and an MSE of 0.0063. These values show the superior performance of the proposed technique.
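The Richardson–Lucy update itself is classical and can be sketched in 1-D; the circular FFT-based convolution and the iteration count are assumptions, and the GA search that the paper uses to produce the PSF is omitted (the PSF is simply given):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=50):
    # Classical Richardson-Lucy deconvolution: a multiplicative update
    # x <- x * (psf-correlate(observed / (psf-convolve(x)))), which
    # preserves non-negativity and converges toward the maximum-
    # likelihood estimate under Poisson noise.
    n = len(observed)
    H = np.fft.fft(psf, n)
    conv = lambda v, F: np.fft.ifft(np.fft.fft(v) * F).real
    estimate = np.full(n, float(observed.mean()))
    for _ in range(n_iter):
        blurred = np.maximum(conv(estimate, H), 1e-12)  # avoid divide-by-zero
        estimate = estimate * conv(observed / blurred, np.conj(H))
    return estimate
```

The paper's contribution sits outside this loop: the GA evaluates candidate PSFs by the quality of the resulting restorations, so the true image is never needed.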
12.
Samia Riaz, Muhammad Waqas Anwar, Irfan Riaz, Hyun-Woo Kim, Yunyoung Nam, Muhammad Attique Khan. Computers, Materials & Continua, 2022, 70(1):1-14
Captured outdoor images and videos may appear blurred because of haze, fog, and bad weather. Water droplets or dust particles in the atmosphere scatter light, severely limiting scene discernibility and degrading the quality of the captured image. Image dehazing has gained much popularity because of its usability in a wide variety of applications. Various algorithms have been proposed for this ill-posed problem; they give quite promising results in some cases but introduce undesirable artifacts and noise in haze patches in adverse cases, and some take unrealistic processing times at high image resolutions. In this paper, to achieve fast, real-time, halo-free single-image dehazing, we propose a simple but effective image restoration technique using multiple patches. It addresses the shortcomings of the dark channel prior (DCP) and improves its speed and efficiency for high-resolution images. A coarse transmission map is estimated by taking the minimum over patches of different sizes, and a cascaded fast guided filter is then used to refine the transmission map. We introduce an efficient scaling technique for transmission-map estimation, which incurs very little performance degradation on high-resolution images. For performance evaluation, quantitative, qualitative and computational-time comparisons have been performed, which show quite faithful results in speed, quality, and reliability of handling bright surfaces.
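The DCP quantities that the method refines can be sketched directly; the 3×3 patch, omega = 0.95, and a scalar airlight are illustrative assumptions (the paper's contribution is the multi-size patches, scaling, and guided-filter refinement, not this baseline):

```python
import numpy as np

def dark_channel(image, patch=3):
    # Dark channel: minimum over colour channels, then over a local
    # square patch.  In haze-free regions it is close to zero.
    h, w, _ = image.shape
    per_pixel_min = image.min(axis=2)
    padded = np.pad(per_pixel_min, patch // 2, mode="edge")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(image, airlight, patch=3, omega=0.95):
    # Coarse DCP transmission map: t = 1 - omega * dark_channel(I / A).
    # omega < 1 keeps a trace of haze for depth perception.
    return 1.0 - omega * dark_channel(image / airlight, patch)
```

A fully hazy region (intensity equal to the airlight) gets transmission 1 - omega, i.e. almost no scene radiance survives the haze there.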
13.
The analytically continued Fourier transform of a two-dimensional image vanishes on a two-dimensional surface embedded in a four-dimensional space. This surface uniquely characterizes the image and is known as a zero sheet. An algorithm is described that employs the zero-sheet concept to blindly deconvolve an ensemble of differently blurred images. To overcome the difficulty of operating within a four-dimensional space, we calculate projections of the zero sheets, known as zero tracks. The zero tracks of each member of the ensemble are superimposed on a plane. The zero tracks that pertain to the original image are similar for every blurred and contaminated image; by contrast, those associated with the blurring vary widely across the ensemble. A method of selecting the appropriate zero tracks in order to reconstruct an estimate of the original image is presented. Preliminary results for small positive images suggest that this deconvolution technique may be successful even when the level of contamination is significant.
14.
General restoration filter for vibrated-image restoration
Mechanical vibrations are often the principal cause of image degradation. Low temporal-frequency mechanical vibrations involve random image degradation that depends on the instant of exposure. Exact restoration requires the calculation of a specific filter unique to each vibrated image. To calculate the restoration filter for each image, one needs the specific optical transfer function unique to the motion in the image. Therefore the instant of exposure and the motion function have to be measured or estimated by some other means. We develop a restoration filter for individual images blurred randomly by low-frequency mechanical vibrations. The filter is independent of the instant of exposure. The filter is designed to give its best performance averaged over a complete ensemble of vibrated images. Although when applying the new filter to any vibrated image the restoration achieved is slightly poorer than that achieved with an exact filter unique to the specific motion function, the new filter has the advantage of simplicity.
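The idea of a single filter optimised on average over an ensemble of vibration OTFs, rather than matched to any one exposure instant, can be sketched in a Wiener-style form; this exact formula is an illustrative stand-in, not the paper's derivation:

```python
import numpy as np

def ensemble_restoration_filter(otfs, nsr=1e-3):
    # One restoration filter for a whole ensemble of vibration OTFs
    # (one OTF per possible exposure instant).  Averaging numerator
    # and denominator over the ensemble trades a little accuracy on
    # any single realisation for independence from the (unknown)
    # instant of exposure.
    otfs = np.asarray(otfs, dtype=complex)
    num = np.conj(otfs).mean(axis=0)
    den = (np.abs(otfs) ** 2).mean(axis=0) + nsr
    return num / den
```

Applying this one filter to every vibrated frame avoids measuring the motion function per image, at the cost the abstract describes: slightly poorer restoration than an exact per-image filter.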
15.
Image restoration aims to recover the original scene from its degraded version. This paper presents a new method for image restoration in which an evaluation function combining a scaled residual with space-variant regularization is established and minimized using a Hopfield network to obtain a restored image from a noise-corrupted and blurred image. Simulation results demonstrate that the proposed evaluation function leads to a more efficient restoration process offering fast convergence and improved restored-image quality. © 2003 Wiley Periodicals, Inc. Int J Imaging Syst Technol 12, 247–253, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/ima.10034
16.
A method for recovering blurred images taken through turbulent media, such as the atmosphere, is presented. In this method the amplitude and phase of the object's image are determined separately by special data-processing techniques, and finally the original image can be recovered. The principle and algorithm are described, and the techniques for determining amplitude and phase are introduced using the results of computer simulations as well as light-propagation experiments. As a demonstration of the method's utility, images recovered from data taken through a telescope of 1.5 m diameter are shown, along with the results of a computer simulation with atmospheric turbulence. The results suggest that the presented method is well suited to the retrieval of blurred images.
17.
18.
Bringier B, Helbert D, Khoudeir M. Journal of the Optical Society of America. A, Optics, Image Science, and Vision, 2008, 25(3):566-574
Textured-surface analysis is essential for many applications. We present a three-dimensional recovery approach for real textured surfaces based on photometric stereo, with the aim of measuring textured surfaces with a high degree of accuracy. For this, we use a color digital sensor and the principles of color photometric stereo. The method uses a single color image, instead of a sequence of gray-scale images, to recover the three-dimensional surface; it can thus be integrated into dynamic systems where there is significant relative motion between the object and the camera. To evaluate its performance, we compare it on real textured surfaces to traditional photometric stereo using three images, and show that similar results are possible with just one color image.
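The core of single-shot color photometric stereo can be sketched for one pixel: each colour channel acts as one light source, so one RGB measurement gives three Lambertian equations. The light-direction matrix below is an assumed calibration, and real systems must also handle channel cross-talk, which is ignored here:

```python
import numpy as np

def normal_from_color_pixel(rgb, light_dirs):
    # Solve the 3x3 Lambertian system I_c = rho * (n . l_c), one
    # equation per colour channel, for the albedo-scaled normal.
    L = np.asarray(light_dirs, dtype=float)  # 3x3: one unit direction per row
    b = np.linalg.solve(L, np.asarray(rgb, dtype=float))  # b = rho * n
    rho = np.linalg.norm(b)
    return b / rho, rho
```

This is the reason a single color image can replace the three gray-scale images of traditional photometric stereo, as the abstract claims.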
19.
20.
A computational integral imaging reconstruction technique can reconstruct a set of plane images of three-dimensional (3-D) objects along the output plane, in which only the plane object images (POIs) reconstructed on the planes where the objects were actually positioned are highly focused, whereas the POIs reconstructed away from these planes are unfocused and blurred. These blurred POIs act as additional noise in the object images reconstructed on other output planes, so the resolution of the reconstructed object images is considerably degraded. In this paper, a novel approach is proposed to effectively reduce the blur occurring in the focused POIs by employing a blur metric. From the estimated blur metric of each reconstructed POI, the output planes where the objects were located can be detected. In addition, with the estimated blur metric, focused POIs can be adaptively eroded by a simple gray-level erosion operation, which reduces the regional expansion caused by the blur. The gray values of the eroded POIs are then remapped by referencing the original POIs. Experiments showed an average increase of 1.95 dB in peak signal-to-noise ratio for the remapped POIs compared with the originally reconstructed POIs, and the original forms of the object images in the remapped POIs were preserved even after the erosion operation. This feasibility test suggests that the proposed scheme could be applied to robust detection and recognition of 3-D objects in a scene.
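The gray-level erosion step mentioned above can be sketched with a square structuring element; the 3×3 element and edge padding are assumptions, and the blur-metric-driven adaptivity and remapping stages are omitted:

```python
import numpy as np

def gray_erosion(img, size=3):
    # Gray-level erosion: replace each pixel by the minimum over its
    # local neighbourhood.  Bright regions shrink, counteracting the
    # regional expansion that blur introduces around object borders.
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out
```

In the scheme described above, the eroded POI is not used directly; its gray values are remapped from the original POI so that object shapes survive the erosion.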