Similar Documents
20 similar documents retrieved (search time: 171 ms)
1.
Objective: A light-field camera captures 4D light-field data of a scene in a single shot, from which a focal stack can be rendered and depth extracted by applying a focus-measure function. However, different focus measures have different response characteristics and none suits every scene; moreover, the depth maps produced by most existing methods suffer from large defocus errors and poor robustness. To address this problem, a new depth-extraction method based on light-field focus measures is proposed to obtain high-accuracy depth information. Method: A windowed gradient mean-square-deviation (variance) focus measure is designed to extract depth from the focal stack; the all-in-focus color image and a defocus function are used to mark defocused regions, and a neighborhood-search algorithm corrects the defocus errors. Finally, a Markov random field (MRF) fuses the corrected depth map obtained with the Laplacian operator and the depth map obtained with the gradient-variance measure, yielding a high-accuracy depth image. Results: On the Lytro dataset and self-collected test data, the proposed method extracts depth with less noise than other state-of-the-art algorithms: accuracy improves by about 9.29% on average and the mean squared error decreases by about 0.056 on average. Conclusion: The depth maps extracted by the proposed method contain less grain noise; guided by color information, defocus errors are effectively corrected, and the method performs well in scenes with many smooth regions.
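A minimal sketch of the kind of windowed gradient-variance focus measure and per-pixel depth selection described above (the defocus correction and MRF fusion steps are omitted; function names, window size, and the exact form of the measure are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_variance_focus(img, win=9):
    """Windowed variance of the gradient magnitude (illustrative stand-in for
    a windowed gradient mean-square-deviation focus measure)."""
    gy, gx = np.gradient(img.astype(np.float64))
    grad = np.hypot(gx, gy)
    local_mean = uniform_filter(grad, size=win)
    local_mean_sq = uniform_filter(grad ** 2, size=win)
    return local_mean_sq - local_mean ** 2

def depth_from_focal_stack(stack, win=9):
    """Per pixel, pick the focal slice with the strongest focus response
    (defocus-error correction and MRF fusion are not reproduced here)."""
    responses = np.stack([gradient_variance_focus(s, win) for s in stack])
    return np.argmax(responses, axis=0)   # index of the best-focused slice
```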

2.
Many applications require images with high resolution and an extended depth of field. Directly changing the depth of field in optical systems results in losing resolution and information from the captured scene. Different methods have been proposed for extending the depth of field. Traditional techniques manipulate the optical system by reducing the pupil aperture, at the cost of image resolution; other methods use optical arrays with computing-intensive digital post-processing. This work proposes a pre-processing optical system and a cost-effective post-processing digital treatment based on an optimized Kalman filter to extend the depth of field in images. Results demonstrate that the proposed pre-processing and post-processing techniques provide images with high resolution and extended depth of field for different focalization errors, without requiring optical-system calibration. Assessment of the resulting images with the universal image quality index shows that the technique is superior.

3.
Because optical systems are physically limited by a small depth of field, photographs of objects with a large depth range are only partially in focus. The Kolmogorov complexity measure is used to evaluate the local sharpness of objects in an image sequence captured at different focal planes; the in-focus parts of the sequence are extracted and merged into a single all-in-focus image, thereby extending the depth of field and producing a sharp, fully focused image. Experiments show that applying a complexity measure to depth-of-field extension has broad application prospects.
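Kolmogorov complexity is not computable, so practical implementations typically substitute a compression-based surrogate; a minimal block-wise fusion sketch under that assumption (block size, the zlib surrogate, and the selection rule are illustrative choices, not the paper's method):

```python
import zlib
import numpy as np

def block_complexity(block):
    """Approximate complexity of an image block by its zlib-compressed size,
    a common computable surrogate for Kolmogorov complexity."""
    return len(zlib.compress(np.ascontiguousarray(block).tobytes(), 9))

def fuse_by_complexity(stack, block=16):
    """For each block position, copy the block from the focal slice whose
    compressed size (a proxy for local detail / sharpness) is largest."""
    h, w = stack[0].shape
    fused = np.zeros_like(stack[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            patches = [s[y:y + block, x:x + block] for s in stack]
            best = int(np.argmax([block_complexity(p) for p in patches]))
            fused[y:y + block, x:x + block] = patches[best]
    return fused
```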

4.
This article summarizes ‘generalized response surface methodology’ (GRSM), extending Box and Wilson’s ‘response surface methodology’ (RSM). GRSM allows multiple random responses, selecting one response as goal and the other responses as constrained variables. Both GRSM and RSM estimate local gradients to search for the optimum. These gradients are based on local first-order polynomial approximations. GRSM combines these gradients with Mathematical Programming findings to estimate a better search direction than the steepest ascent direction used by RSM. Moreover, these gradients are used in a bootstrap procedure for testing whether the estimated solution is indeed optimal. The focus of this paper is the optimization of simulated (not real) systems.
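A toy sketch of the RSM-style step the abstract refers to: estimating a local gradient by fitting a first-order polynomial to noisy simulation responses and moving along it. GRSM's constrained, mathematical-programming-based search direction and bootstrap optimality test are not reproduced, and all names, sample counts, and step sizes here are assumptions:

```python
import numpy as np

def estimate_gradient(x0, simulate, delta=0.1, n=20, rng=None):
    """Fit a first-order polynomial to simulated responses sampled around x0
    and return its slope as the local gradient estimate."""
    rng = np.random.default_rng() if rng is None else rng
    x0 = np.asarray(x0, float)
    X = x0 + rng.uniform(-delta, delta, size=(n, x0.size))
    y = np.array([simulate(x) for x in X])
    A = np.hstack([np.ones((n, 1)), X - x0])      # intercept + linear terms
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]                               # estimated gradient at x0

def steepest_ascent(x0, simulate, steps=10, step_size=0.5):
    """Classical RSM search: repeatedly move along the estimated gradient."""
    x = np.asarray(x0, float)
    for _ in range(steps):
        x = x + step_size * estimate_gradient(x, simulate)
    return x
```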

5.
Objective: In macro photography, the limited depth of field of a macro lens makes it difficult to capture an image in which the entire subject is in focus from a single photograph. To obtain a fully sharp picture, multiple macro photographs with different focal settings must be captured and fused. Method: Traditional macro-photo fusion methods generally assume that the images to be fused are already registered and do not consider automatic acquisition of the macro images. We therefore propose a multi-focus image acquisition and fusion system for macro photography that consists of three parts. The first is a macro-image capture rig that can photograph an object at different focus distances with high precision. The second is a registration component based on invariant features that automatically registers and aligns the macro images captured at multiple focus settings. The third is an image-pyramid-based multi-focus fusion component that fuses the aligned macro photographs so the composite image has a larger depth of field. This component extends pyramid-based fusion with a filtering-based weight-computation strategy; combining this weighting with the image pyramid yields a multi-resolution multi-focus fusion method. Results: Several sets of experimental data were collected with the capture rig to verify the hardware and software designs, and the system was evaluated both subjectively and objectively. Subjectively, the macro images synthesized by the system not only have sufficient depth of field but also clearly render fine details of the object at high resolution. Objectively, quantitative comparison with images synthesized by other methods shows that the system is best under all three evaluation criteria: standard deviation, information entropy, and average gradient. Conclusion: The experimental results show that the system is flexible and efficient; it can automatically acquire, register, and fuse multiple macro images with different focuses, and the quality of its fused images is comparable to that of other methods.
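A compact sketch of multi-resolution multi-focus fusion in the spirit of the third component: per-image focus weights obtained by filtering (here a Gaussian-smoothed Laplacian response, an assumption, since the abstract does not specify the weighting) are blended across Laplacian-pyramid levels. Inputs are assumed to be already registered BGR images of identical size:

```python
import cv2
import numpy as np

def focus_weight(gray, blur=11):
    """Filtering-based focus weight: smoothed absolute Laplacian response."""
    lap = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)
    return cv2.GaussianBlur(np.abs(lap), (blur, blur), 0)

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    return pyr

def laplacian_pyramid(img, levels):
    g = gaussian_pyramid(img, levels)
    lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1]) for i in range(levels)]
    return lap + [g[-1]]                      # coarsest level keeps the residual

def pyramid_fuse(images, levels=4):
    """Blend the Laplacian pyramids of registered color images using Gaussian
    pyramids of their focus weights (multi-resolution multi-focus fusion)."""
    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) for im in images]
    weights = [focus_weight(g) + 1e-6 for g in grays]
    img_pyrs = [laplacian_pyramid(im.astype(np.float32), levels) for im in images]
    w_pyrs = [gaussian_pyramid(w, levels) for w in weights]

    fused_pyr = []
    for lvl in range(levels + 1):
        num = sum(w_pyrs[k][lvl][..., None] * img_pyrs[k][lvl] for k in range(len(images)))
        den = sum(w_pyrs[k][lvl][..., None] for k in range(len(images)))
        fused_pyr.append(num / den)

    out = fused_pyr[-1]
    for lvl in range(levels - 1, -1, -1):     # collapse the fused pyramid
        out = cv2.pyrUp(out, dstsize=fused_pyr[lvl].shape[1::-1]) + fused_pyr[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)
```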

6.
Motion segmentation using occlusions
We examine the key role of occlusions in finding independently moving objects instantaneously in a video obtained by a moving camera with a restricted field of view. In this problem, the image motion is caused by the combined effect of camera motion (egomotion), structure (depth), and the independent motion of scene entities. For a camera with a restricted field of view undergoing a small motion between frames, there exists, in general, a set of 3D camera motions compatible with the observed flow field even if only a small amount of noise is present, leading to ambiguous 3D motion estimates. If separable sets of solutions exist, motion-based clustering can detect one category of moving objects. Even if a single inseparable set of solutions is found, we show that occlusion information can be used to find ordinal depth, which is critical in identifying a new class of moving objects. In order to find ordinal depth, occlusions must not only be known, but they must also be filled (grouped) with optical flow from neighboring regions. We present a novel algorithm for filling occlusions and deducing ordinal depth under general circumstances. Finally, we describe another category of moving objects which is detected using cardinal comparisons between structure from motion and structure estimates from another source (e.g., stereo).

7.
The growing popularity of social media in recent years has resulted in the creation of an enormous amount of user-generated content. A significant portion of this information is useful and has proven to be a great source of knowledge. However, since much of this information has been contributed by strangers with little or no apparent reputation to speak of, there is no easy way to detect whether the content is trustworthy. Search engines are the gateways to knowledge but search relevance cannot guarantee that the content in the search results is trustworthy. A casual observer might not be able to differentiate between trustworthy and untrustworthy content. This work is focused on the problem of quantifying the value of such shared content with respect to its trustworthiness. In particular, the focus is on shared health content as the negative impact of acting on untrustworthy content is high in this domain. Health content from two social media applications, Wikipedia and Daily Strength, is used for this study. Sociological notions of trust are used to motivate the search for a solution. A two-step unsupervised, feature-driven approach is proposed for this purpose: a feature identification step in which relevant information categories are specified and suitable features are identified, and a quantification step for which various unsupervised scoring models are proposed. Results indicate that this approach is effective and can be adapted to disparate social media applications with ease.

8.
Bitmask Soft Shadows
Recently, several real-time soft shadow algorithms have been introduced which all compute a single shadow map and use its texels to obtain a discrete scene representation. The resulting micropatches are backprojected onto the light source and the light areas occluded by them get accumulated to estimate overall light occlusion. This approach ignores patch overlaps, however, which can lead to objectionable artifacts. In this paper, we propose to determine the visibility of the light source with a bit field where each bit tracks the visibility of a sample point on the light source. This approach not only avoids overlapping-related artifacts but offers a solution to the important occluder fusion problem. Hence, it also becomes possible to correctly incorporate information from multiple depth maps. In addition, a new interpretation of the shadow map data is suggested which often provides superior visual results. Finally, we show how the search area for potential occluders can be reduced substantially.
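The core idea, tracking the visibility of each light-source sample with one bit so that overlapping occluders cannot be double-counted, can be illustrated with a tiny CPU-side sketch; the real algorithm runs per fragment against shadow-map micropatches, and the data layout here is purely illustrative:

```python
def light_visibility(occluder_masks, num_samples=64):
    """Bit-field light visibility: one bit per sample point on the area light.
    Fusing occluders is a bitwise OR, so overlapping micropatches are never
    counted twice (toy illustration of the bitmask idea)."""
    occluded_bits = 0
    for mask in occluder_masks:          # each mask: int whose set bits are blocked samples
        occluded_bits |= mask
    blocked = bin(occluded_bits).count("1")
    return 1.0 - blocked / num_samples   # fraction of the light source still visible

# Two overlapping patches blocking samples {0..3} and {2..5} of a 6-sample light:
# light_visibility([0b001111, 0b111100], num_samples=6) -> 0.0
```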

9.
Three-dimensional (3D) shape reconstruction is a fundamental problem in machine vision applications. Shape From Focus (SFF) is a passive optical method for 3D shape recovery that uses the degree of focus as a cue to estimate 3D shape. In this approach, a single focus measure operator is usually applied to measure the focus quality of each pixel in the image sequence. However, a single focus measure has limited ability to estimate depth maps accurately for the diverse types of real objects. To address this problem, we develop an Optimal Composite Depth (OCD) function through genetic programming (GP) for accurate depth estimation. The OCD function is constructed by optimally combining the primary information extracted using one or more focus measures; the genetically developed composite function is then used to compute the optimal depth map of objects. The performance of the developed nonlinear function is investigated on both synthetic and real-world image sequences. Experimental results demonstrate that the proposed estimator computes more accurate depth maps than existing SFF methods, and that the heterogeneous function is more effective than a homogeneous one.

10.
Optical flow carries valuable information about the nature and depth of surfaces and the relative motion between observer and objects. In the extraction of this information, the focus of expansion plays a vital role. In contrast to the current approaches, this paper presents a method for the direct computation of the focus of expansion using an optimization approach. The optical flow can then be computed using the focus of expansion.
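For a purely translating camera, each flow vector lies on the line joining its pixel to the focus of expansion, so the FOE can be computed directly by least squares; a minimal sketch of that standard formulation (an illustration of direct FOE computation, not necessarily the optimization used in the paper):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares FOE: minimize the summed squared perpendicular distance
    of the FOE to the lines through each pixel along its flow vector.

    points, flows: (N, 2) arrays of pixel positions and optical-flow vectors."""
    normals = np.stack([-flows[:, 1], flows[:, 0]], axis=1)   # perpendicular to each flow
    b = np.einsum('ij,ij->i', normals, points)                # n_i . p_i
    foe, *_ = np.linalg.lstsq(normals, b, rcond=None)
    return foe                                                # (x, y) of the FOE
```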

11.
The continuous advancement of imaging sensors necessitates the development of efficient image fusion techniques. Multi-focus image fusion extracts the in-focus information from the source images to construct a single composite image with increased depth of field. Traditionally, the information in multi-focus images is divided into two categories: in-focus and out-of-focus data. Instead of using a binary focus map, in this work we calculate the degree of focus of each source image using fuzzy logic, and the fused image is generated as a weighted sum of this information. An initial tri-state focus map is built for each input image using spatial frequency and a proposed focus measure named the alternate sum-modified Laplacian. For cases where these measures indicate different source images as containing the focused pixel, or have equal strength, another focus measure based on the sum of gradients is employed to calculate the degree of focus in a fuzzy inference system. Finally, the fused image is computed from the weights determined by the degree-of-focus map of each image. The proposed algorithm is designed to fuse two source images; multiple inputs can be fused by combining a source image with the fusion output of the previous group. The proposed method is compared with several transform- and pixel-domain techniques in terms of both subjective visual assessment and objective quantitative evaluation. Experimental results demonstrate that it is competitive with, and in some cases outperforms, the compared methods.
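Two of the focus measures named above have well-known standard forms; a sketch of the classic sum-modified Laplacian and of local spatial frequency (the paper's "alternate" SML variant and its fuzzy inference stage are not reproduced, and the window size is an assumption):

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

def sum_modified_laplacian(img, win=5):
    """Classic sum-modified-Laplacian (SML) focus measure."""
    f = img.astype(np.float64)
    kx = np.array([[0., 0., 0.], [-1., 2., -1.], [0., 0., 0.]])
    ml = np.abs(convolve(f, kx)) + np.abs(convolve(f, kx.T))   # modified Laplacian
    return uniform_filter(ml, size=win) * win * win             # windowed sum

def spatial_frequency(img, win=5):
    """Local spatial frequency: RMS of horizontal and vertical first differences."""
    f = img.astype(np.float64)
    rf = np.zeros_like(f); rf[:, 1:] = (f[:, 1:] - f[:, :-1]) ** 2
    cf = np.zeros_like(f); cf[1:, :] = (f[1:, :] - f[:-1, :]) ** 2
    return np.sqrt(uniform_filter(rf + cf, size=win))
```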

12.
Fusing images with different focuses using support vector machines
Many vision-related processing tasks, such as edge detection, image segmentation and stereo matching, can be performed more easily when all objects in the scene are in good focus. However, in practice, this may not be always feasible as optical lenses, especially those with long focal lengths, only have a limited depth of field. One common approach to recover an everywhere-in-focus image is to use wavelet-based image fusion. First, several source images with different focuses of the same scene are taken and processed with the discrete wavelet transform (DWT). Among these wavelet decompositions, the wavelet coefficient with the largest magnitude is selected at each pixel location. Finally, the fused image can be recovered by performing the inverse DWT. In this paper, we improve this fusion procedure by applying the discrete wavelet frame transform (DWFT) and the support vector machines (SVM). Unlike DWT, DWFT yields a translation-invariant signal representation. Using features extracted from the DWFT coefficients, a SVM is trained to select the source image that has the best focus at each pixel location, and the corresponding DWFT coefficients are then incorporated into the composite wavelet representation. Experimental results show that the proposed method outperforms the traditional approach both visually and quantitatively.
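A sketch of the baseline wavelet fusion the abstract describes, keeping the larger-magnitude DWT coefficient at each location and inverting; the paper's DWFT + SVM extension is not reproduced, the wavelet and level count are arbitrary choices, and inputs are assumed to be registered grayscale arrays of equal size:

```python
import numpy as np
import pywt

def dwt_max_fusion(img_a, img_b, wavelet='db2', levels=3):
    """Max-magnitude coefficient selection over a multi-level 2D DWT."""
    ca = pywt.wavedec2(np.asarray(img_a, dtype=np.float64), wavelet, level=levels)
    cb = pywt.wavedec2(np.asarray(img_b, dtype=np.float64), wavelet, level=levels)

    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]   # approximation band
    for det_a, det_b in zip(ca[1:], cb[1:]):                           # (H, V, D) detail bands
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(det_a, det_b)))
    return pywt.waverec2(fused, wavelet)                               # fused image
```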

13.
Analyzing the polarimetric properties of reflected light is a potential source of shape information. However, it is well-known that polarimetric information contains fundamental shape ambiguities, leading to an underconstrained problem of recovering 3D geometry. To address this problem, we use additional geometric information, from coarse depth maps, to constrain the shape information from polarization cues. Our main contribution is a framework that combines surface normals from polarization (hereafter polarization normals) with an aligned depth map. The additional geometric constraints are used to mitigate physics-based artifacts, such as azimuthal ambiguity, refractive distortion and fronto-parallel signal degradation. We believe our work may have practical implications for optical engineering, demonstrating a new option for state-of-the-art 3D reconstruction.

14.
Objective: Because some optical lenses have a limited focusing range, it is difficult to image every object in a scene sharply in a single photograph, whereas fusing multiple source images of the same scene can produce an image that is sharp throughout. To improve the quality of the fused image, a new image fusion algorithm based on the non-subsampled quaternion shearlet transform (NSQST) is proposed. Method: The source images are first decomposed by NSQST into low-frequency and high-frequency subband coefficients. For the low-frequency subbands, an improved sparse representation (ISR) fusion rule is proposed; for the high-frequency subbands, a fusion rule combining improved spatial frequency, edge energy, and local-region similarity matching is proposed. The fused image is finally obtained by the inverse NSQST. Results: Compared with five other fusion methods, the proposed method achieves better objective metrics and visual quality; relative to the NSCT-SR algorithm, its four objective metrics improve by 3.6%, 2.9%, 1.5%, and 5.2%; 3.7%, 3.2%, 3.2%, and 3.0%; and 6.2%, 3.8%, 3.4%, and 8.6%, respectively. Conclusion: Fusion experiments on multi-focus images show that the method can be further applied to fields such as target recognition and medical diagnosis.

15.
Image processing and machine vision have become important research fields owing to numerous applications in almost every area of science. Performance in these fields depends critically on the quality of the input images. Most imaging devices use optical lenses to capture a scene, but because of their limited depth of field, objects at different distances from the focal point are captured with different sharpness and detail, so important details may be lost in some regions. Multi-focus image fusion is an effective technique for coping with this problem, and its main challenge is the selection of an appropriate focus measure. In this paper, we propose a novel focus measure based on the surface area of regions surrounded by the intersection points of the input source images, and we prove its ability to distinguish focused regions from blurred ones. In our fusion algorithm, the intersection points of the input images are calculated and used to segment the images; the surface area of each segment then serves as the measure for identifying focused regions. This yields an initial selection map for fusion, which is refined by morphological operations. Comparisons with several competing methods demonstrate the effectiveness of the proposed approach.

16.
In a pinhole integral imaging (PII)-based hologram, each pinhole is regarded as a point light source emitting rays in different directions, simulating light propagation from the reconstructed 3D information to the computer-generated hologram (CGH). The PII-based hologram operates in a mode similar to resolution-priority integral imaging (RPII), so its depth of field (DOF) is inevitably limited. In this paper, we capture the scene twice at close range to enhance the DOF of this hologram. Capturing twice at close range ensures that virtual objects at different depths are not far from their respective image reference planes and that the detailed information of the rear object is preserved. In addition, a simple ray-tracing method is used to resolve occlusion. Numerical and optical reconstructions verify that the proposed method reconstructs clearer 3D images than single capturing and presents a natural occlusion effect; the DOF is thus enhanced.

17.
The great flexibility of a view camera allows the acquisition of high-quality images that would not be possible any other way. Bringing a given object into focus is, however, a long and tedious task, even though the underlying optical laws are known. A fundamental parameter is the aperture of the lens entrance pupil because it directly affects the depth of field: the smaller the aperture, the larger the depth of field. However, too small an aperture destroys the sharpness of the image because of diffraction at the pupil edges. Hence, the desired optimal configuration of the camera is one in which the object is in focus with the greatest possible lens aperture. In this paper, we show that when the object is a convex polyhedron, an elegant solution to this problem can be found. It takes the form of a constrained optimization problem, for which theoretical and numerical results are given. The optimization algorithm has been implemented on the prototype of a robotised view camera.

18.
Time series of optical satellite images acquired at high spatial resolution are a potentially useful source of information for monitoring agricultural practices. However, the information extracted from this source is often hampered by missing acquisitions or uncertain radiometric values. This paper presents a novel approach that addresses this issue by combining time series of satellite images with information from crop growth modeling and expert knowledge. Within a fuzzy framework, a decision support system that combines multi-source information was designed to automatically detect the sugarcane harvest at field scale. The formalism deals with the imprecision of the data and the approximate nature of expert reasoning. System performance was analyzed using a time series of SPOT-5 images. The results were in substantial agreement with ground-truth data: overall accuracy reached 97.80%, with stability values exceeding 89.21% for all decisions, and the contribution of fuzzy sets to overall accuracy reached 15.08%. The approach outlined in this paper is very promising and could be useful for other agricultural applications.

19.
Among the tasks in software testing, test data generation is particularly difficult and costly. In recent years, several approaches that use metaheuristic search techniques to obtain the test inputs automatically have been proposed. Although work in this field is very active, little attention has been paid to the selection of an appropriate search space; the present work addresses this issue. More precisely, two approaches that employ an Estimation of Distribution Algorithm as the metaheuristic technique are described. In both cases, different regions are considered in the search for the test inputs. Moreover, to start from a region near the one containing the optimum, the definition of the initial search space incorporates static information extracted from the source code of the software under test. If this information is not enough to complete the definition, a grid search method is used. According to the results of the experiments conducted, this is a promising option for enhancing the test data generation process.

20.
王程  张骏  高隽 《中国图象图形学报》2020,25(12):2630-2646
Objective: A light-field camera captures both the spatial and the angular information of the rays in a scene in a single exposure, which makes depth estimation possible. However, specular highlights in light-field images make depth estimation difficult. To improve the reliability of depth estimation in the presence of highlights, this paper proposes a highlight-robust depth-estimation method based on multi-view context information from light-field images. Method: Exploiting the multi-view nature of the light-field sub-aperture images, multi-view input branches are created to extract feature information from images at different viewpoints. Dilated convolutions enlarge the network's receptive field so that wider image context is captured, allowing the depth of highlight regions to be recovered from the depth of non-highlight regions lying on the same depth plane. In addition, a new multi-scale feature fusion method is designed that concatenates dilated-convolution features with multiple dilation rates and ordinary convolution features with multiple kernel sizes, further improving the accuracy and smoothness of the estimates. Results: Experiments compare the method with four state-of-the-art methods on three datasets. The results show good overall depth-estimation performance: on the 4D light field benchmark synthetic dataset, relative to the second-best model, the mean square error (MSE) decreases by 20.24%, the bad pixel (BP) rate decreases by 2.62%, and the peak signal-to-noise ratio (PSNR) increases by 4.96%. Qualitative analysis on the CVIA (computer vision and image analysis) Konstanz specular synthetic dataset and on real scenes captured with a Lytro Illum camera verifies the effectiveness and reliability of the algorithm, and ablation experiments show that the multi-scale feature fusion improves depth estimation in highlight regions. Conclusion: The proposed model estimates depth effectively; in particular, depth in highlight regions is recovered accurately, object boundaries are smooth, and image details are well preserved.
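An illustrative PyTorch block in the spirit of the multi-scale fusion described above: dilated convolutions with several dilation rates concatenated with ordinary convolutions of several kernel sizes. Channel counts, dilation rates, and kernel sizes are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Concatenate dilated-conv branches (varied dilation) with plain-conv
    branches (varied kernel size), then merge with a 1x1 convolution."""
    def __init__(self, in_ch=32, branch_ch=16):
        super().__init__()
        self.dilated = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.plain = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv2d(branch_ch * 6, in_ch, 1)   # merge the concatenated branches
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(conv(x)) for conv in self.dilated] + \
                [self.act(conv(x)) for conv in self.plain]
        return self.act(self.fuse(torch.cat(feats, dim=1)))

# Example: fused = MultiScaleFusion()(torch.randn(1, 32, 64, 64))
```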
