Similar Documents
20 similar documents found
1.
Joint blind restoration and surface recovery in photometric stereo   (Cited by: 1; self-citations: 0; citations by others: 1)
We address the problem of simultaneously estimating scene structure and restoring images from blurred photometric measurements. In photometric stereo, the structure of an object is determined by using a particular reflectance model (the image irradiance equation) without considering the blurring effect. We show that, given arbitrarily blurred observations of a static scene captured with a stationary camera under different illuminant directions, we can still recover the structure, represented by the surface gradients and the albedo, and also perform blind image restoration. The surface gradients and the albedo are modeled as separate Markov random fields, and a suitable regularization scheme is used to estimate the different fields as well as the blur parameter. Experimental results are presented for both real and synthetic images.
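For background, the blur-free Lambertian core of photometric stereo (the image irradiance equation mentioned above) reduces to a per-pixel least-squares solve; the snippet below is a minimal illustrative baseline under that standard model, not the authors' joint MRF/blind-restoration scheme, and all function and variable names are ours.

```python
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """Classical (blur-free) Lambertian photometric stereo:
    recover per-pixel albedo and surface gradients from K images
    taken under known, distant illumination directions.
    images:     list of K grayscale arrays, each (H, W)
    light_dirs: (K, 3) array of unit illumination vectors"""
    H, W = images[0].shape
    I = np.stack([im.reshape(-1) for im in images])      # (K, H*W) intensities
    L = np.asarray(light_dirs, dtype=float)              # (K, 3)
    # Solve I = L @ g per pixel, where g = albedo * normal
    G, *_ = np.linalg.lstsq(L, I, rcond=None)            # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    n = G / np.maximum(albedo, 1e-8)
    nz = np.clip(n[2], 1e-8, None)                       # assume surface faces the camera
    p, q = -n[0] / nz, -n[1] / nz                        # the gradient fields modeled as MRFs
    return albedo.reshape(H, W), p.reshape(H, W), q.reshape(H, W)
```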

2.
A stereo-vision surface reconstruction technique is applied to images of a three-dimensional curved surface: edge extraction, image matching, and computation of the spatial positions of the matched points yield the 3D coordinates of points on the surface, from which the 3D surface shape is reconstructed. The relationships among stereo-system parameters such as camera resolution, measurement range, and camera separation are also discussed. Experiments measuring the surface shape and volume of a stockpile show that the method recovers the shape of a 3D surface accurately, quickly, and conveniently.
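For reference, the textbook rectified-stereo relations that tie together the parameters discussed above (focal length f, baseline B, disparity d, and disparity measurement error δd, which depends on camera resolution) are:

```latex
Z = \frac{f\,B}{d}, \qquad
\delta Z \;\approx\; \left|\frac{\partial Z}{\partial d}\right|\,\delta d
       \;=\; \frac{Z^{2}}{f\,B}\,\delta d ,
```

so depth uncertainty grows quadratically with range and shrinks with a longer baseline or finer disparity resolution; this is the standard relation, not necessarily the paper's exact analysis.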

3.
Correspondence     
Abstract

According to the Nyquist sampling theorem, a large number of sampled images and small intervals between capturing cameras are required to render high-quality virtual views without aliasing, which is hard to realize in practice. Achieving a balance between multi-view data capture and the quality of the rendered view therefore remains an open problem. To address it, we analysed the spectral bounds of the scene and designed a reconstruction filter. A suitable number of sample images for rendering and a three-dimensional surface describing the relation between multi-view data capture and rendered-view quality were derived. Experimental results on both a modelled scene and a real scene show that only about 20% of the sample images required by Nyquist sampling are needed, while the quality of the rendered view remains higher than that of a Nyquist-sampled comparison.

4.
We propose a motion estimation system that uses stereo image pairs as the input data. To perform experimental work, we also obtained a sequence of outdoor stereo images taken by two metric cameras. The system consists of four main stages: (1) determination of point correspondences on the stereo images, (2) correction of distortions in image coordinates, (3) derivation of 3D point coordinates from 2D correspondences, and (4) estimation of motion parameters based on 3D point correspondences. For the first stage, we use a four-way matching algorithm to obtain matched points on two stereo image pairs at two consecutive time instants (ti and ti+1). Since the input data are stereo images taken by cameras, they contain two types of distortion: (i) film distortion and (ii) lens distortion. These distortions must be corrected before any further processing of the matched points, so we use (i) a bilinear transform for film distortion correction and (ii) lens formulas for lens distortion correction. After correcting the distortions, the results are 2D coordinates of each matched point, from which 3D coordinates can be derived. However, due to data noise, the calculated 3D coordinates do not usually represent a consistent rigid structure suitable for motion estimation; therefore, we suggest a procedure to select good 3D point sets as the input for motion estimation. The procedure exploits two constraints: rigidity between different time instants and uniform point distribution across the object in the image. For the last stage, we use an algorithm to estimate the motion parameters. We also wish to know the effect of quantization error on the results, so an error analysis based on quantization error is performed on the estimated motion parameters. To test the system, eight sets of stereo image pairs were extracted from an outdoor stereo image sequence and used as the input data. The experimental results indicate that the proposed system provides reasonable estimates of the motion parameters.
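Stage (3), deriving 3D coordinates from 2D correspondences, is commonly done by linear (DLT) triangulation once the camera projection matrices are known; the sketch below is a generic version under that assumption, not the authors' implementation, and the names are illustrative.

```python
import numpy as np

def triangulate_point(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one 3D point from a stereo correspondence.
    P_left, P_right: 3x4 camera projection matrices
    x_left, x_right: (u, v) pixel coordinates of the matched point"""
    def rows(P, uv):
        u, v = uv
        return [u * P[2] - P[0],          # u * (p3 . X) - (p1 . X) = 0
                v * P[2] - P[1]]          # v * (p3 . X) - (p2 . X) = 0
    A = np.array(rows(P_left, x_left) + rows(P_right, x_right))
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                            # null vector of A (homogeneous point)
    return X[:3] / X[3]
```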

5.
A stereo image matching method is developed for the solution of the problem of reconstruction of surfaces with corrosion defects. Special attention is given to the problem of improvement of mathematical models of an objective function and methods for its optimization in the presence of shaded regions in the images of a metal surface, which are observed in the investigation of pit-like defects. The proposed method is used for the quantitative analysis of the depth and shape of local pitting corrosion.

6.
Low dynamic range (LDR) images captured by consumer cameras have a limited luminance range. As the conventional method for generating high dynamic range (HDR) images involves merging multiple-exposure LDR images of the same scene (assuming a stationary scene), we introduce a learning-based model for single-image HDR reconstruction. An input LDR image is sequentially segmented into local region maps based on the cumulative histogram of the input brightness distribution. Using the local region maps, SParam-Net estimates the parameters of an inverse tone mapping function to generate a pseudo-HDR image. The segmented region maps are processed as input sequences to a long short-term memory network. Finally, a fast super-resolution convolutional neural network is used for HDR image reconstruction. The proposed method was trained and tested on datasets including HDR-Real, LDR-HDR-pair, and HDR-Eye. The experimental results show that HDR images can be generated more reliably than with contemporary end-to-end approaches.
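One plausible reading of the histogram-based segmentation step is a split of the input by brightness quantiles; the sketch below only illustrates that reading (the function name, the number of regions, and the exact rule are assumptions, not the paper's code).

```python
import numpy as np

def brightness_region_maps(ldr, num_regions=4):
    """Split an LDR luminance image into region maps by quantiles of its
    cumulative brightness histogram. Illustrative only; the paper's exact
    segmentation rule may differ.
    ldr: (H, W) luminance in [0, 1]; returns a list of boolean masks."""
    edges = np.quantile(ldr, np.linspace(0.0, 1.0, num_regions + 1))
    edges[-1] += 1e-6                     # make the top bin include the maximum
    return [(ldr >= lo) & (ldr < hi) for lo, hi in zip(edges[:-1], edges[1:])]
```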

7.
This paper presents an automatic alignment procedure for a four-source photometric stereo (PS) technique for reconstructing the depth map in the scanning electron microscope (SEM). PS, which is based on the so-called reflectance map, uses several images of a surface to estimate the surface depth at each image point; the Lambertian reflectivity function is the simplest such model. In the SEM, the backscattered electron emission, one of the most important signals, is nearly Lambertian, and, conveniently, SEM images are intrinsically grayscale maps. The possibility of performing electron PS in the SEM is therefore assumed, taking advantage of one of the most attractive features of the technique: it returns true numerical 3-D models rather than a mere illusion of depth from ordinary pictures.

8.
Depth from defocus involves estimating the relative blur between a pair of defocused images of a scene captured with different lens settings. When a priori information about the scene is available, it is possible to estimate the depth even from a single image. However, experimental studies indicate that the depth estimate improves with multiple observations. We provide a mathematical underpinning to this evidence by deriving and comparing the theoretical bounds for the error in the estimate of blur corresponding to the case of a single image and for a pair of defocused images. A new theorem is proposed that proves that the Cramér-Rao bound on the variance of the error in the estimate of blur decreases with an increase in the number of observations. The difference in the bounds turns out to be a function of the relative blurring between the observations. Hence one can indeed get better estimates of depth from multiple defocused images compared with those using only a single image, provided that these images are differently blurred. Results on synthetic as well as real data are given to further validate the claim.
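The theorem builds on the standard Cramér-Rao inequality; in generic form, with σ the blur parameter and p(y; σ) the observation likelihood, it reads (the paper's specific bound for one versus two defocused images is not reproduced here):

```latex
\operatorname{Var}(\hat{\sigma}) \;\ge\; \frac{1}{\mathcal{I}(\sigma)},
\qquad
\mathcal{I}(\sigma) \;=\; \mathbb{E}\!\left[\left(\frac{\partial}{\partial\sigma}
\ln p(\mathbf{y};\sigma)\right)^{\!2}\right].
```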

9.
We propose a 3D video system that uses environmental stereo cameras to display a target object from an arbitrary viewpoint. This system is composed of the following stages: image acquisition, foreground segmentation, depth field estimation, 3D modeling from depth and shape information, and arbitrary view rendering. To create 3D models from captured 2D image pairs, a real-time segmentation algorithm, a fast depth reconstruction algorithm, and a simple and efficient shape reconstruction method were developed. For viewpoint generation, the 3D surface model is rotated toward the desired position and orientation, and the texture data extracted from the original camera is projected onto this surface. Finally, a real-time system that demonstrates the use of the aforementioned algorithms was implemented. The generated 3D object can easily be manipulated, e.g., rotated or translated, to render images from different viewpoints. This provides stable renderings of the target space in near real time, making the scene easier for viewers to understand. © 2008 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 17, 367–378, 2007

10.
ABSTRACT

In this paper, a stereo calibration method for a binocular ultra-wide-angle long-wave infrared camera is proposed on the basis of an equivalent small-field-of-view camera. Extrinsic parameters are calibrated using the corrected images from the left and right cameras, which can be viewed as images taken by a small-field-of-view camera. The calibration procedure consists of three steps: monocular calibration, distortion correction, and extrinsic parameter calibration. To evaluate the accuracy of the method, the stereo vision of the camera is modelled and a 3D reconstruction approach is presented. A series of experiments covering intrinsic parameters, extrinsic parameters, and 3D reconstruction is conducted to validate the proposed method. The results show that the baseline length error decreases to 0.67%, and the relative error of the 3D reconstruction of corners is smaller than 8.11%. Compared with the common stereo calibration method, this improves calibration accuracy.
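A conventional OpenCV analogue of the three-step procedure (monocular calibration, distortion correction, extrinsic calibration with intrinsics fixed) is sketched below; it is a generic pinhole-model baseline, not the ultra-wide-angle method proposed in the paper, and the function name is ours.

```python
import cv2

def three_step_stereo_calibration(obj_pts, img_pts_l, img_pts_r, image_size):
    """Generic three-step stereo calibration with OpenCV.
    obj_pts: list of (N, 3) calibration-board points per view
    img_pts_l, img_pts_r: lists of (N, 2) detected corners per view"""
    # Step 1: monocular intrinsic calibration of each camera
    _, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)
    # Step 2: distortion correction would apply cv2.undistort to the images;
    # here the distortion coefficients are simply passed on to step 3.
    # Step 3: extrinsic calibration (R, T) with the intrinsics held fixed
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, K_l, d_l, K_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return K_l, d_l, K_r, d_r, R, T
```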

11.
In this paper we present parallel implementations of two vision tasks: stereo matching and image matching. Linear features are used as matching primitives. These implementations are performed on a fixed-size mesh array and achieve processor-time optimal performance. For stereo matching, we propose an O(Nn^3/P^2) time algorithm on a P × P processor mesh array, where N is the number of line segments in one image, n is the number of line segments in a window determined by the object size, and P ≤ n. The sequential algorithm takes O(Nn^3) time. For image matching, a partitioned parallel implementation is developed. O[((nm/P^2) + P)nm] time performance is achieved on a P × P processor mesh array, where P^2 ≤ nm. This leads to a processor-time optimal solution for P ≤ (nm)^(1/3). This research was supported in part by NSF under grant IRI-9145810 and in part by DARPA and AFOSR contracts F-49260-89-C-0126 and F-49620-90-C-0078.
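As a quick sanity check of the stated bounds, the processor-time products are (assuming, as the stated optimality range suggests, that the sequential image-matching work is O((nm)^2)):

```latex
P^{2}\cdot O\!\left(\tfrac{Nn^{3}}{P^{2}}\right)=O(Nn^{3}),
\qquad
P^{2}\cdot O\!\left[\left(\tfrac{nm}{P^{2}}+P\right)nm\right]
   =O\!\left((nm)^{2}+P^{3}nm\right),
```

so the stereo bound matches the sequential O(Nn^3) time directly, and the image-matching bound stays within a constant factor of (nm)^2 exactly when P^3 ≤ nm, i.e. P ≤ (nm)^(1/3).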

12.
The stereo vision process involves capturing pictures of the same scene from at least two different camera locations and computing three-dimensional information from them. Conventionally, these two snapshots are called the left and right views; comparing the relative location of an object in the two views yields its depth. Although stereo images and their applications are becoming increasingly prevalent, there has been very limited research on disparity estimation from stereo images, and most existing techniques suffer from gradient-reversal artefacts. To handle this issue, we propose a hybrid guided image filter for improving disparity estimation from stereo images. The hybrid filter combines the guided image filter with Bayesian non-local means under an edge-aware constraint. Maximum likelihood and local-area homogeneity analysis are used to generate the guidance image for the proposed filter. To further enhance the quality of disparity estimation, segmentation is performed using a modified mean-shift technique. Experimental results show that the proposed technique can estimate depth maps more effectively than the available techniques, and one-way ANOVA on the results confirms that the hybrid-filter-based stereo matching consistently outperforms state-of-the-art approaches.
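The guided image filter that the hybrid filter builds on has a compact closed form; the sketch below is the standard formulation (He et al.), included for context rather than as the authors' hybrid variant with Bayesian non-local means.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Standard guided image filter: edge-preserving smoothing of p using
    guidance image I via a local linear model q = a*I + b.
    I, p: 2D float arrays in [0, 1]; radius: box-window radius; eps: regularizer."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size=size)
    mean_I, mean_p = box(I), box(p)
    var_I = box(I * I) - mean_I * mean_I
    cov_Ip = box(I * p) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)            # per-pixel linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)            # q = mean(a) * I + mean(b)
```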

13.
Binocular stereo vision systems are often used to reconstruct 3D point clouds of an object. However, it is challenging to find reliable matching points in two images of an object with similar colors or little texture, which leads to mismatches when a stereo matching algorithm is used to compute the disparity map; in that case the object cannot be reconstructed precisely. As a countermeasure, this study combines Gray-code fringe projection with the binocular camera and generates denser point clouds by projecting an active light source to increase the texture of the object, which greatly reduces the reconstruction error caused by the lack of texture. Owing to the limited camera viewing angle, a single-perspective binocular camera can only reconstruct a 2.5D model of an object. To obtain the full 3D model, point clouds obtained from multiple views are processed by coarse registration with the SAC-IA algorithm and fine registration with the ICP algorithm, followed by voxel-filtering fusion of the point clouds. To improve the reconstruction quality, a polarizer is mounted in front of the cameras to filter out redundant reflected light. Finally, the 3D model and the dimensions of a vase are obtained after calibration.
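The fine-registration step can be illustrated with a minimal point-to-point ICP; the sketch below assumes a reasonable initial alignment (e.g. from SAC-IA) and is an illustrative stand-in, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iters=50, tol=1e-6):
    """Point-to-point ICP refining an initial (e.g. SAC-IA) alignment.
    source, target: (N, 3) and (M, 3) point arrays."""
    tree = cKDTree(target)
    src, prev_err = source.copy(), np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)       # nearest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```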

14.
Active multi-baseline stereo vision and its application to robotic welding technology   (Cited by: 3; self-citations: 0; citations by others: 3)
The basic principles of multi-baseline stereo vision are introduced. To address the difficult problem of recovering 3D depth for objects lacking surface texture, such as welding workpieces, an active multi-baseline stereo vision method based on stripe-light illumination is proposed to solve the 3D visual modelling of such objects; experiments verify the effectiveness and reliability of the method.

15.
Image rectification for stereoscopic visualization   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper proposes an approach to rectifying two images of the same scene captured by cameras at general positions so that the results form a stereo pair that satisfies the constraints of stereoscopic visualization platforms. This is unlike conventional image rectification research, which primarily focuses on making stereo matching easier but pays little attention to 3D viewing. The novel derivation of the rectification algorithm also has an intuitive physical meaning that is not available from conventional approaches. Practical issues related to wide-baseline rectification and the operating range of the proposed method are analyzed. Both simulated and real data experiments are used to assess the performance of the proposed algorithm.
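For contrast, the conventional uncalibrated rectification baseline that such work typically departs from can be sketched with OpenCV as follows (a generic assumed pipeline, not the proposed algorithm):

```python
import cv2

def rectify_uncalibrated(img1, img2, pts1, pts2):
    """Conventional uncalibrated rectification: estimate the fundamental
    matrix from matches, compute rectifying homographies, and warp.
    pts1, pts2: (N, 2) float arrays of matched points."""
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inliers1 = pts1[mask.ravel() == 1]
    inliers2 = pts2[mask.ravel() == 1]
    h, w = img1.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(inliers1, inliers2, F, (w, h))
    return cv2.warpPerspective(img1, H1, (w, h)), cv2.warpPerspective(img2, H2, (w, h))
```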

16.
Thin-plate spline interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given the image of a scene under an unknown illuminant, the chromaticity of the scene illumination can then be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and their associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method estimates the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation.
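A minimal sketch of the interpolation step, using SciPy's radial-basis interpolator with a thin-plate-spline kernel as an off-the-shelf stand-in (the feature representation and names are illustrative; the paper's thumbnail features and incremental k-medians pruning are omitted):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def fit_illuminant_tps(train_features, train_chroma):
    """Fit a thin-plate spline mapping image features to illuminant
    chromaticity (r, g).
    train_features: (N, D) array; train_chroma: (N, 2) array."""
    return RBFInterpolator(train_features, train_chroma,
                           kernel='thin_plate_spline')

# usage: chroma = fit_illuminant_tps(X, C)(x_new[None, :])[0]
```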

17.
Park JH  Jung S  Choi H  Kim Y  Lee B 《Applied optics》2004,43(25):4882-4895
Stereo matching, a technique for acquiring depth information from many planar images obtained by several cameras, was developed several decades ago. Recently a novel depth-extraction technique that uses a lens array instead of several cameras has attracted much attention because of the advantages offered by its compact system configuration. We present a novel depth-extraction method that uses a lens array consisting of vertically long rectangular lens elements. The proposed method rearranges the horizontal positions of the pixels from the collection of perspective images while it leaves the vertical positions of the pixels unchanged. To these rearranged images we apply a correlation-based multibaseline stereo algorithm in properly modified form. The main feature of the proposed method is the inverse dependency of the disparity in depth between horizontal and vertical directions. This inverse dependency permits the extraction of exact depth from extremely periodically patterned object scenes and reduces quantization error in the depth extraction.

18.
Frauel Y  Javidi B 《Applied optics》2002,41(26):5488-5496
We use integral images of a three-dimensional (3D) scene to estimate the longitudinal depth of multiple objects present in the scene. With this information, we digitally reconstruct the objects in three dimensions and compute 3D correlations of input objects. We investigate the use of nonlinear techniques for 3D correlations. We present experimental results for 3D reconstruction and correlation of 3D objects. We demonstrate that it is possible to perform 3D segmentation of 3D objects in a scene. We finally present experiments to demonstrate that the 3D correlation is more discriminant than the two-dimensional correlation.
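One widely used nonlinear correlation variant is the k-th-law correlator, which compresses the magnitude of the cross spectrum while keeping its phase; the 2D sketch below illustrates the idea (the paper's exact nonlinearity, and its 3D extension, are not reproduced).

```python
import numpy as np

def kth_law_correlation(a, b, k=0.3):
    """Nonlinear (k-th law) correlation of two images: k = 1 gives ordinary
    correlation, k = 0 gives phase-only correlation; intermediate k sharpens
    the correlation peak and improves discrimination."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    nonlinear = np.abs(cross) ** k * np.exp(1j * np.angle(cross))
    return np.fft.fftshift(np.real(np.fft.ifft2(nonlinear)))
```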

19.
To study the theory of uncalibrated Euclidean reconstruction of a 3D scene from a single 2D image, and to solve the feature-point matching problem that is difficult for passive vision systems, this paper introduces an effective active-vision technique based on pseudo-random coded structured-light illumination. By exploiting the window property of pseudo-random sequences, every feature point on a scene surface illuminated by the coded structured light carries a unique code and can be identified unambiguously. Experimental data and reconstruction results verify the feasibility and effectiveness of the coding.
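The window property referred to above can be illustrated with a maximal-length (m-)sequence, in which every non-zero window of n consecutive bits occurs exactly once per period, so a window uniquely identifies its position in the projected pattern; the binary sketch below is illustrative only, since the abstract does not specify the actual code used.

```python
def lfsr_msequence(n=4):
    """Maximal-length sequence from an n-bit LFSR; with n = 4 and these taps
    the feedback polynomial is primitive, so the period is 2**n - 1 and every
    non-zero window of n consecutive bits occurs exactly once per period."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        feedback = state[0] ^ state[-1]   # taps giving a primitive polynomial for n = 4
        state = [feedback] + state[:-1]
    return seq

seq = lfsr_msequence()
windows = [tuple(seq[i:i + 4]) for i in range(len(seq) - 3)]
assert len(set(windows)) == len(windows)  # every observed 4-bit window is distinct
```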

20.
To study the theory of uncalibrated Euclidean reconstruction of a 3D scene from a single 2D image, an effective active-vision technique based on pseudo-random coded structured-light illumination is adopted. By exploiting the window property of pseudo-random sequences, every feature point on a scene surface illuminated by the coded structured light carries a unique code and can be identified unambiguously. A neural network is used for image recognition, allowing this coded structured-light active-vision system to solve, with relative ease, the feature-point matching problem that is difficult for passive vision systems; the experimental results are satisfactory.
