Similar Documents (20 results)
1.
The Focal Stack Transform integrates a 4D lightfield over a set of appropriately chosen 2D planes. The result of such an integration is an image focused at a particular depth in 3D space; the set of such images is the Focal Stack of the lightfield. This paper studies the existence of an inverse for this transform. Such an inverse could be used to obtain a 4D lightfield from a set of images focused at several depths of the scene. We show that this inversion cannot be obtained for a general lightfield and introduce a subset of lightfields for which it can be computed exactly. We then study the numerical properties of the inversion process for general lightfields and evaluate several regularization approaches to stabilize the transform. Experimental results are provided for focal stacks obtained from several plenoptic cameras. From a practical point of view, the results show how this inversion procedure can be used to recover, compress, and denoise the original 4D lightfield.
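For concreteness, here is a minimal numpy sketch of the forward transform described above, i.e. shift-and-add refocusing: each focal-stack slice integrates the 4D lightfield over its angular coordinates after a depth-dependent shear. The array layout, the slope parameter and the function names are illustrative assumptions, not the paper's notation.

import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus_slice(lightfield, slope):
    """lightfield: (U, V, S, T) array; slope: pixel shift per unit angular offset."""
    U, V, S, T = lightfield.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((S, T), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # Shear each sub-aperture view proportionally to its angular offset,
            # then accumulate; the sum over (u, v) is the integral over the aperture.
            acc += nd_shift(lightfield[u, v].astype(np.float64),
                            (slope * (u - uc), slope * (v - vc)), order=1)
    return acc / (U * V)

def focal_stack(lightfield, slopes):
    """Stack of refocused images, one per candidate depth (slope)."""
    return np.stack([refocus_slice(lightfield, s) for s in slopes], axis=0)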

2.
Thus far, research on print-cam robust watermarking has focused on new methods for embedding and extracting the watermark, while the capturing process itself has been neglected. In this paper, we propose a solution for the situation where the watermarked image is captured at a wide angle and the depth of field of the camera is not deep enough to capture the whole scene in focus, resulting in unfocused areas. The proposed solution relies on a subfield of computational photography, namely all-in-focus imaging. All-in-focus images are generated by fusing multiple images of the same scene taken at different focus distances, so that the object being photographed is fully in focus. Traditionally, the images to be fused are selected by hand from the focal stack, or the whole stack is used to build the all-in-focus image. In mobile phone applications, however, computational resources are limited: using the full focal stack would result in long processing times, and manual selection of images would not be practical. We therefore also propose a method for optimizing the size of the focal stack and automatically selecting appropriate images for fusion. It is shown that a watermark can still be recovered accurately from the reconstructed all-in-focus image.

3.
The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio-angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.
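As a point of reference, the following is a rough sketch of the basic full-resolution rendering that the paper improves upon, under the assumption that the raw capture has been cut into a regular grid of square microimages: a fixed-size central patch is cropped from every microimage and the patches are tiled into the output. Using one fixed patch size across the whole image is precisely what produces the artifacts discussed above.

import numpy as np

def render_basic(microimages, patch):
    """microimages: (Ny, Nx, m, m) array of m-by-m microimages; patch: crop size."""
    Ny, Nx, m, _ = microimages.shape
    lo = (m - patch) // 2
    tiles = microimages[:, :, lo:lo + patch, lo:lo + patch]
    # Rearrange (Ny, Nx, p, p) -> (Ny*p, Nx*p): tile the crops side by side.
    return tiles.transpose(0, 2, 1, 3).reshape(Ny * patch, Nx * patch)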

4.
Digital images are normally taken by focusing on an object, which leaves background regions defocused. A popular approach to producing an all-in-focus image without defocused regions is to capture several input images at varying focus settings and then fuse them using offline image-processing software. This paper describes an all-in-focus imaging method that can operate on digital cameras. The proposed method consists of an automatic focus-bracketing algorithm that determines at which focus settings to capture images, and an image-fusion algorithm that computes a high-quality all-in-focus image. While most previous methods use a focus measure calculated independently for each input image, the proposed method calculates a relative focus measure between a pair of input images. A well-focused region in an image shows better contrast, sharpness, and detail than the corresponding region that is defocused in another image. Based on the observation that the average-filtered version of a well-focused region correlates more strongly with the corresponding defocused region in another image than the original well-focused version does, a new focus measure is proposed. Experimental results on various sample image sequences show the superiority of the proposed measure in terms of both objective and subjective evaluation, and the proposed method allows the user to capture all-in-focus images directly on a digital camera without offline image-processing software.
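The relative focus cue can be illustrated with a short sketch (window extraction, averaging-filter size and the decision rule are assumptions): region A is judged sharper than the corresponding region B when an average-filtered copy of A correlates better with B than the original A does, i.e. blurring A moves it toward B.

import numpy as np
from scipy.ndimage import uniform_filter

def a_sharper_than_b(region_a, region_b, size=5):
    """Return True if region_a appears better focused than region_b."""
    blurred_a = uniform_filter(region_a.astype(np.float64), size=size)
    corr = lambda x, y: np.corrcoef(x.ravel(), y.ravel())[0, 1]
    # Blurring the sharper region should bring it closer to the defocused one.
    return corr(blurred_a, region_b) > corr(region_a, region_b)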

5.
《Advanced Robotics》2013,27(8):781-798
In this paper, an observational sensor system for tele-micro-operation is proposed, built from a dynamic focusing lens and a smart vision sensor and based on the 'depth from focus' criterion. Recently, micro-operations such as micro-surgery and DNA manipulation have gained in importance. However, the small depth of focus of the microscope results in poor observability: if the focus is on the object, the actuator cannot be seen with the microscope, and if the focus is on the actuator, the object cannot be observed. In this sense, the 'all-in-focus image', which keeps the texture in focus over the whole image, is useful for observing micro-environments with a microscope. One drawback of the all-in-focus image is that it carries no information about the depth of objects. It is also important to obtain the depth map and show the three-dimensional (3D) micro virtual environment in real time so that micro objects can be actuated intuitively. This paper first reviews the 'depth from focus' criterion used to obtain the all-in-focus image and the 3D micro environment simultaneously. After evaluating the validity of this criterion with off-line simulation, a real-time virtual reality (VR) micro camera system is proposed to build the micro VR environment using the 'depth from focus' criterion. The system consists of a dynamic focusing lens, which can change its focal distance at high frequency, and a smart vision system capable of capturing and processing image data at high speed with a SIMD architecture.
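A compact 'depth from focus' sketch in the spirit described above, with an assumed gradient-based focus measure and window size: sweep the focal distance, score per-pixel sharpness in each frame, and use the per-pixel argmax both as a depth index (mapped through the known focal distances) and as the source slice for an all-in-focus image.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def depth_from_focus(stack, focal_distances, win=7):
    """stack: (N, H, W) focal stack; focal_distances: length-N focus settings."""
    grads = np.stack([uniform_filter(sobel(f, 0) ** 2 + sobel(f, 1) ** 2, size=win)
                      for f in stack.astype(np.float64)])
    best = np.argmax(grads, axis=0)                       # sharpest slice per pixel
    depth_map = np.asarray(focal_distances)[best]         # depth of best-focused slice
    all_in_focus = np.take_along_axis(stack, best[None], axis=0)[0]
    return depth_map, all_in_focus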

6.
Multi-focus image fusion is an effective technique for integrating the relevant information from a set of images of the same scene into a single comprehensive image. The fused image is more informative than any of the source images. In this paper, a novel fusion scheme based on image cartoon-texture decomposition is proposed. Multi-focus source images are decomposed into cartoon content and texture content by an improved iterative re-weighted decomposition algorithm, which converges rapidly and naturally approximates the morphological structure components. Appropriate fusion rules are constructed to fuse the cartoon content and the texture content separately. Finally, the fused cartoon and texture components are combined to obtain the all-in-focus image. This fusion process preserves morphological structure information from the source images and introduces few artifacts and little additional noise. Our experimental results clearly show that the proposed algorithm outperforms many state-of-the-art methods in terms of both visual and quantitative evaluations.
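For illustration only, a hedged outline of the final combination step with placeholder fusion rules (the paper's iterative re-weighted decomposition and its actual rules are not reproduced): cartoon components are averaged, texture components are fused by keeping the coefficient of larger magnitude, and the two fused parts are summed.

import numpy as np

def fuse_cartoon_texture(cartoons, textures):
    """cartoons, textures: lists of (H, W) components, one pair per source image."""
    fused_cartoon = np.mean(cartoons, axis=0)             # placeholder rule: average
    stacked = np.stack(textures)
    pick = np.argmax(np.abs(stacked), axis=0)             # placeholder rule: max-abs
    fused_texture = np.take_along_axis(stacked, pick[None], axis=0)[0]
    return fused_cartoon + fused_texture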

7.
In this work, we propose a method that integrates depth and fisheye cameras to obtain a wide 3D scene reconstruction with scale in a single shot. The motivation for this integration is to overcome the narrow field of view of consumer RGB-D cameras and the lack of depth and scale information in fisheye cameras. The hybrid camera system we use is easy to build and calibrate, and consumer devices with a similar configuration are already available on the market. With this system, a portion of the scene with a shared field of view provides color and depth simultaneously. In the rest of the color image we estimate depth by recovering the structural information of the scene. Our method finds and ranks corners in the scene by combining line extraction in the color image with the depth information. These corners are used to generate plausible layout hypotheses, which have real-world scale thanks to the use of depth. The wide-angle camera captures more information from the environment (e.g. the ceiling), which helps to overcome severe occlusions. After an automatic evaluation of the hypotheses, we obtain a scaled 3D model that expands the original depth information with the wide scene reconstruction. Our experiments with real images from both home-made and commercial systems show that the method achieves a high success ratio in different scenarios, and that the hybrid camera system outperforms the single color camera set-up while additionally providing scale in a single shot.

8.
Depth from defocus (DFD) is a technique that recovers scene depth from the amount of defocus blur in images. DFD usually captures two differently focused images, one near-focused and the other far-focused, and estimates the size of the defocus blur in these images. However, DFD using a regular circular aperture is not very sensitive to depth, since the point spread function (PSF) is symmetric and only its radius changes with depth. In recent years, the coded aperture technique, which uses a special aperture pattern to engineer the PSF, has been used to improve the accuracy of DFD estimation; it is often used to restore an all-in-focus image and estimate depth in DFD applications. A coded aperture has a disadvantage for image deblurring, since deblurring requires a high signal-to-noise ratio (SNR) in the captured images, while the aperture attenuates incoming light in controlling the PSF and thus decreases the input image SNR. In this paper, we propose a new computational imaging approach to DFD estimation that engineers the PSF by changing the focus during image integration. The input images are captured with a higher SNR because the PSF can be controlled with a wide aperture setting, unlike with a coded aperture. We confirm the effectiveness of the method through experimental comparisons with conventional DFD and the coded aperture approach.
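The forward model that DFD inverts can be summarized with the thin-lens relation between scene depth and blur-circle size; the short sketch below is generic and is not the paper's focus-sweep PSF model.

def blur_circle_diameter(depth, focal_length, f_number, sensor_dist):
    """All quantities in metres; sensor_dist is the lens-to-sensor distance."""
    aperture = focal_length / f_number
    in_focus_sensor_dist = 1.0 / (1.0 / focal_length - 1.0 / depth)  # thin-lens equation
    return aperture * abs(sensor_dist - in_focus_sensor_dist) / in_focus_sensor_dist

if __name__ == "__main__":
    f = 0.050                                              # 50 mm, f/2 lens
    v = 1.0 / (1.0 / f - 1.0 / 2.0)                        # sensor distance when focused at 2 m
    print(blur_circle_diameter(depth=4.0, focal_length=f, f_number=2.0, sensor_dist=v))
    # ~3.2e-04 m: a point at 4 m blurs to roughly a 0.3 mm circle on the sensor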

9.
张旭东  李成云  汪义志  熊伟 《控制与决策》2018,33(12):2122-2130
A light field camera captures 4D light field data of a scene in a single shot, and the multi-view property of the light field can be exploited to extract depth information from the full light field image. However, existing depth estimation methods rarely consider occlusion in the scene; when occlusion is present, the accuracy of the extracted depth drops noticeably. To address this, a new multi-cue fusion method for light field depth extraction is proposed to obtain high-accuracy depth information. First, depth is estimated separately with an adaptive defocus algorithm and an adaptive correspondence algorithm. The two depth estimates are then fused with weights given by a peak-ratio confidence measure. Finally, the fused depth map is filtered with a mutual-structure joint filter that enforces structural consistency, yielding a high-accuracy depth map. Experimental results on synthetic and real datasets show that, compared with other state-of-the-art algorithms, the proposed method produces depth maps with higher accuracy, less noise, and better-preserved image edges.
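A hedged sketch of the confidence-weighted fusion step only (the adaptive cue extraction and the mutual-structure joint filtering are not reproduced): the two depth maps are blended per pixel with their peak-ratio confidences as weights.

import numpy as np

def fuse_depths(depth_defocus, depth_corresp, conf_defocus, conf_corresp, eps=1e-8):
    """Per-pixel confidence-weighted blend of two depth maps of equal shape."""
    w1, w2 = conf_defocus, conf_corresp
    return (w1 * depth_defocus + w2 * depth_corresp) / (w1 + w2 + eps)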

10.
We describe a novel multiplexing approach to achieve tradeoffs in spatial, angular and temporal resolution in photography. We explore the problem of mapping useful subsets of a time-varying 4D lightfield into a single snapshot. Our design is based on a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high-spatial-resolution image, a refocusable image stack, or a video for different parts of the scene in post-processing. A lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype that provides flexible post-capture abilities not possible with either a single-shot lightfield camera or a multi-frame video camera. We show several novel results, including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.

11.
Depth from defocus: A spatial domain approach
A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. STM uses image defocus information and is based on a new Spatial-Domain Convolution/Deconvolution Transform. The method requires only two images taken with different camera parameters such as lens position, focal length, or aperture diameter. Both images can be arbitrarily blurred, and neither of them needs to be a focused image; STM is therefore very fast in comparison with depth-from-focus methods, which search for the lens position or focal length of best focus. The method involves simple local operations and can easily be implemented in parallel to obtain the depth map of a scene. STM has been implemented on an actual camera system named SPARCS. Experiments on the performance of STM on real-world planar objects are presented. The results indicate that the accuracy of STM compares well with depth-from-focus methods and is useful in practical applications. The utility of the method is demonstrated for rapid autofocusing of electronic cameras.

12.
Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras: in a single snapshot, they enable digital refocusing and 3D reconstruction. We show that they obtain a larger depth of field while maintaining the ability to reconstruct detail at high resolution. In fact, all depths are approximately in focus except for a thin slab where the blur size is bounded; their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture-preserving priors to reconstruct both scene depth and its super-resolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems.

13.
Multimedia Tools and Applications - Multi-focus image fusion, which aims to combine multi-focus images of a scene to construct an all-in-focus image, has become a major topic in image processing....

14.
Depth from defocus is a widely used approach to recovering scene depth. Traditional depth-from-defocus algorithms usually require several defocused images to be captured, which is a serious constraint in practice. This paper proposes a depth-recovery algorithm for a single defocused image based on local blur estimation. Under the assumption of local blur consistency, a simple and effective two-step procedure recovers the depth information of the input image: 1) a sparse blur map at edge locations is obtained from the gradient ratio between the input defocused image and a copy re-blurred with a known Gaussian kernel; 2) the blur values at the edges are propagated to the whole image, recovering the complete relative depth. To obtain accurate scene depth, geometric constraints and a sky-region extraction strategy are added to remove the ambiguities caused by color, texture, and the focal plane. Comparative experiments on various types of images show that the algorithm recovers depth while effectively suppressing these ambiguities.
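Step 1 above can be sketched compactly under an ideal step-edge assumption: re-blur the input with a known Gaussian sigma0; at an edge, the ratio R of original to re-blurred gradient magnitude gives the unknown blur as sigma = sigma0 / sqrt(R^2 - 1). The propagation of step 2 is not shown, and the threshold and filter choices are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def sparse_blur_map(img, sigma0=1.0, edge_thresh=0.05):
    """Estimate defocus blur (sigma) at strong edges of a single grayscale image."""
    img = img.astype(np.float64)
    reblur = gaussian_filter(img, sigma0)
    g1 = np.hypot(sobel(img, 0), sobel(img, 1))            # original gradient magnitude
    g2 = np.hypot(sobel(reblur, 0), sobel(reblur, 1))      # re-blurred gradient magnitude
    ratio = g1 / np.maximum(g2, 1e-8)
    sigma = sigma0 / np.sqrt(np.maximum(ratio ** 2 - 1.0, 1e-8))
    sigma[g1 < edge_thresh * g1.max()] = 0.0               # keep estimates at edges only
    return sigma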

15.
The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which reduces the DOF. Moreover, today's cameras have DOFs that correspond to a single slab perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF, where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur; deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs: for instance, near and far objects can be imaged sharply while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
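The deconvolution step mentioned above can be illustrated with plain Wiener filtering using a single, assumed depth-independent blur kernel; the paper's actual kernel and moving-detector hardware are of course not modelled here.

import numpy as np

def wiener_deconvolve(blurred, kernel, snr=100.0):
    """Recover a sharp image from a blurred one given a single blur kernel."""
    H = np.fft.fft2(kernel, s=blurred.shape)                # kernel spectrum
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)           # Wiener filter
    return np.real(np.fft.ifft2(W * G))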

16.
The classic approach to structure from motion entails a clear separation between motion estimation and structure estimation and between two-dimensional (2D) and three-dimensional (3D) information. For the recovery of the rigid transformation between different views only 2D image measurements are used. To have enough information available, most existing techniques are based on the intermediate computation of optical flow which, however, poses a problem at the locations of depth discontinuities. If we knew where depth discontinuities were, we could (using a multitude of approaches based on smoothness constraints) accurately estimate flow values for image patches corresponding to smooth scene patches; but to know the discontinuities requires solving the structure from motion problem first. This paper introduces a novel approach to structure from motion which addresses the processes of smoothing, 3D motion and structure estimation in a synergistic manner. It provides an algorithm for estimating the transformation between two views obtained by either a calibrated or uncalibrated camera. The results of the estimation are then utilized to perform a reconstruction of the scene from a short sequence of images. The technique is based on constraints on image derivatives which involve the 3D motion and shape of the scene, leading to a geometric and statistical estimation problem. The interaction between 3D motion and shape allows us to estimate the 3D motion while at the same time segmenting the scene. If we use a wrong 3D motion estimate to compute depth, we obtain a distorted version of the depth function. The distortion, however, is such that the worse the motion estimate, the more likely we are to obtain depth estimates that vary locally more than the correct ones. Since local variability of depth is due either to the existence of a discontinuity or to a wrong 3D motion estimate, being able to differentiate between these two cases provides the correct motion, which yields the least varying estimated depth as well as the image locations of scene discontinuities. We analyze the new constraints, show their relationship to the minimization of the epipolar constraint, and present experimental results using real image sequences that indicate the robustness of the method.

17.
Objective: Image saliency detection methods suffer from poorly suppressed backgrounds, incomplete detected objects, blurred edges, and block artifacts in scenes where the foreground and background are similar in color and texture or the background is cluttered. Light field images support refocusing and provide focusness cues that effectively separate foreground from background regions, which can improve the accuracy of saliency detection. This paper therefore proposes a light field saliency detection method based on focusness and a propagation mechanism. Method: A Gaussian filter is used to measure the focusness of the focal stack images and to identify the foreground image and the background image. The focusness and spatial position of the background image are used to construct foreground/background probability functions, which guide the light field features during saliency detection and improve the accuracy of the saliency map. In addition, the spatial consistency of neighboring superpixels is exploited: a K-nearest-neighbor (K-NN) graph-based saliency propagation mechanism further refines the saliency map so that the entire salient region is highlighted uniformly, producing a more precise result. Results: In experiments on a light field benchmark dataset, compared with three mainstream traditional light field saliency detection methods and two deep learning methods, the saliency maps generated by the proposed method suppress background regions effectively, highlight the whole salient object uniformly, and have sharper edges, agreeing better with human visual perception. The precision reaches 85.16%, higher than that of the compared methods, and the F-measure and mean absolute error (MAE) are 72.79% and 13.49%, respectively, better than the traditional light field saliency detection methods. Conclusion: The proposed light field saliency model based on focusness and propagation highlights salient regions uniformly and suppresses the background better in scenes with similar foreground and background or with cluttered backgrounds.
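A minimal sketch of the K-NN graph propagation step only, under simple assumptions (superpixels as feature vectors, Gaussian affinities, diffusion with restart); the focusness measurement and the probability functions described above are not reproduced.

import numpy as np

def knn_propagate(features, saliency, k=8, sigma=0.2, alpha=0.5, iters=20):
    """features: (N, D) superpixel descriptors; saliency: (N,) initial scores."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]                # K nearest neighbours (skip self)
    rows = np.repeat(np.arange(len(features)), k)
    W[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                                  # symmetrise the graph
    P = W / np.maximum(W.sum(1, keepdims=True), 1e-8)       # row-normalise
    s = saliency.astype(np.float64).copy()
    for _ in range(iters):
        s = alpha * P @ s + (1 - alpha) * saliency          # diffuse with restart
    return s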

18.
3D integral (panoramic) imaging is an imaging technique that can record and display a true 3D scene. The technique records the spatial scene with a microlens array, so the depth of any point in space can be obtained directly from a single exposure. This paper studies how to obtain the spatial information of an object directly from the integral image. The method first extracts sub-views from the integral image: each view is synthesized from the pixels at the same local position under every microlens. Each view therefore records the original scene as a parallel projection along one particular direction. By analyzing the optical image formation of the integral image, a depth equation is then derived that relates the depth of an object point to the disparity between its positions in the corresponding views, so the depth of any point in space can be obtained from its disparity between corresponding views. Finally, the feasibility of the method is verified by measuring the thickness of a matchbox from an integral image. The results can be used in the data processing of integral images themselves and may also provide a theoretical basis for developing new depth-measurement tools.

19.
To calibrate cameras accurately, lens distortion models have to be included in the calibration procedure. Usually, the lens distortion models used in camera calibration depend on radial functions of image pixel coordinates. Such models are well known, simple, and can be estimated using image information alone. However, these models do not take into account an important physical constraint of the lens distortion phenomenon, namely that the amount of lens distortion induced at an image point depends on the depth of the scene point with respect to the camera projection plane. In this paper we propose a new, accurate, depth-dependent lens distortion model. To validate this approach, we apply the new lens distortion model to camera calibration in planar-view scenarios (that is, 3D scenarios where the objects of interest lie on a plane). We present promising experimental results on planar pattern images and on sport-event scenarios. Although we emphasize the feasibility of the method for planar-view scenarios, the proposed model is valid in general and can be used in any scenario where the point depth can be estimated.
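Purely as an illustration of how depth can enter the distortion mapping, the sketch below uses a standard one-term radial model whose coefficient is allowed to vary with scene depth, e.g. k1(d) = a + b/d; the paper's actual depth-dependent parameterisation is not specified here and may differ.

import numpy as np

def distort(points_norm, depth, a, b):
    """points_norm: (N, 2) normalised image coords; depth: (N,) scene depths."""
    r2 = (points_norm ** 2).sum(axis=1)
    k1 = a + b / depth                                      # depth-dependent coefficient (assumed form)
    return points_norm * (1.0 + k1 * r2)[:, None]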

20.
This paper presents an accurate saliency detection algorithm customized for 3D images that contain abundant depth cues. First, a depth feature is calculated based on the positions of the sharp regions within the focal stack. Then, a coarse saliency map is computed by subtracting the background region from the all-focus image according to the depth feature. Finally, the contrast information in the coarse saliency map is used to obtain the final result. Experiments on a light field dataset demonstrate that our approach favorably outperforms five state-of-the-art methods in terms of precision, recall and F-measure. Moreover, the depth feature proves to be a valuable complement to existing visual saliency analysis when the background regions are complex or similar to the salient object regions.
