Similar Documents (20 results)
1.
Surface normals can be computed from three images of a workpiece taken under three distinct lighting conditions, without requiring surface continuity. Radiometric methods are susceptible to systematic errors such as errors in the measurement of light source orientations; mismatched light source irradiance; detector nonlinearity; the presence of specular reflection or shadows; the spatial and spectral distribution of incident light; surface size, material, and microstructure; and the length and properties of the light-source-to-target path. Typically, a 1° error in the surface orientation of a Lambertian workpiece is caused by a 1 percent change in image intensity due to variations in incident light intensity, or by a 1° change in the orientation of a collimated light source. Tests on a white nylon sphere indicate that, with modest error prevention and calibration schemes, surface angles off the camera axis can be computed to within 5°, except at edge pixels. Equations for the sensitivity of surface normals to the major error sources have been derived. Results of surface normal estimation and edge extraction experiments on various non-Lambertian and textured workpieces are also presented.
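The underlying principle can be illustrated with a minimal sketch of classic three-light Lambertian photometric stereo at a single pixel (this is not the paper's calibrated pipeline; the light directions and albedo below are made-up values):

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover a unit surface normal and albedo at one pixel.

    intensities: (3,) measured intensities under the three lights
    light_dirs:  (3, 3) rows are unit light-direction vectors
    Lambertian model: I = albedo * (L @ n)
    """
    g = np.linalg.solve(light_dirs, intensities)  # g = albedo * n
    albedo = np.linalg.norm(g)
    normal = g / albedo
    return normal, albedo

# Example: a flat patch facing the camera (n = [0, 0, 1], albedo = 0.8),
# imaged under three non-coplanar unit light directions.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ n_true            # synthetic noiseless measurements
n, rho = photometric_stereo(I, L)
```

With noiseless data the normal and albedo are recovered exactly; the 1°-per-1%-intensity sensitivity quoted in the abstract describes how errors in `I` or `L` propagate through this same solve.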

2.
Recovering multiple point light sources from a sparse set of photographs in which objects of unknown texture can move is challenging. This is because both diffuse and specular reflections appear to slide across surfaces as the objects move, a well-known physical fact; what is seldom demonstrated, however, is that this effect can be exploited to address the light source recovery problem. In this paper, we therefore show that, if approximate 3D models of the moving objects are available or can be computed from the images, we can solve the problem without any a priori constraints on the number of sources, on their color, or on the surface albedos. Our approach involves finding local maxima in individual images, checking them for consistency across images, retaining the apparently specular ones, and having them vote in a Hough-like scheme for potential light source directions. The precise directions of the sources and their relative power are then obtained by optimizing a standard lighting model. As a byproduct, we also obtain estimates of various material parameters, such as the unlighted texture and specular properties. We show that the resulting algorithm can operate in the presence of arbitrary textures and an unknown number of light sources of possibly different, unknown colors. We also estimate its accuracy using ground-truth data.
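The Hough-like voting step can be sketched with a toy version (my own simplification, not the authors' implementation): candidate specular directions vote in a discretized elevation-azimuth grid, and the densest bin survives the outliers:

```python
import numpy as np

def hough_vote_direction(candidates, n_el=18, n_az=36):
    """Accumulate candidate unit vectors in an (elevation, azimuth) grid
    and return the centre of the most-voted bin as the dominant direction."""
    acc = np.zeros((n_el, n_az))
    for d in candidates:
        el = np.arccos(np.clip(d[2], -1.0, 1.0))        # [0, pi]
        az = np.arctan2(d[1], d[0]) % (2.0 * np.pi)     # [0, 2*pi)
        i = min(int(el / np.pi * n_el), n_el - 1)
        j = min(int(az / (2.0 * np.pi) * n_az), n_az - 1)
        acc[i, j] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return unit((i + 0.5) * np.pi / n_el, (j + 0.5) * 2.0 * np.pi / n_az)

def unit(el, az):
    """Unit vector from elevation (from +z) and azimuth angles."""
    return np.array([np.sin(el) * np.cos(az), np.sin(el) * np.sin(az), np.cos(el)])

rng = np.random.default_rng(0)
true_el, true_az = 4.5 * np.pi / 18, 6.5 * np.pi / 18   # a bin centre, for clarity
inliers = [unit(true_el + rng.uniform(-0.05, 0.05),
                true_az + rng.uniform(-0.05, 0.05)) for _ in range(50)]
outliers = [unit(rng.uniform(0, np.pi), rng.uniform(0, 2 * np.pi))
            for _ in range(20)]
estimate = hough_vote_direction(inliers + outliers)
```

Even with 20 random outlier votes, the 50 consistent specular candidates dominate one bin, which is what makes the voting robust; the paper then refines the winning directions by optimizing a lighting model.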

3.
We propose a variational algorithm to jointly estimate the shape, albedo, and light configuration of a Lambertian scene from a collection of images taken from different vantage points. Our work can be thought of as extending classical multi-view stereo to cases where point correspondence cannot be established, or extending classical shape from shading to the case of multiple views with unknown light sources. We show that a first, naive formalization of this problem yields algorithms that are numerically unstable, no matter how close the initialization is to the true geometry. We then propose a computational scheme to overcome this problem, resulting in provably stable algorithms that converge to (local) minima of the cost functional. We develop a new model that explicitly enforces positivity of the light sources, under the assumption that the object is Lambertian with piecewise constant albedo, and show that the new model significantly improves accuracy and robustness relative to existing approaches.

4.
We present a novel color multiplexing method for extracting depth edges in a scene. It has been shown that casting shadows from different light positions provides a simple yet robust cue for extracting depth edges. Instead of flashing a single light source at a time, as in conventional methods, our method flashes all light sources simultaneously to reduce the number of captured images. We use a ring light source around a camera and arrange colors on the ring so that they form a hue circle. Since complementary colors sit at each position and its antipole on the ring, shadow regions where half of the hue circle is occluded are colorized according to the orientations of depth edges, while non-shadow regions, where all the hues mix, have a neutral color in the captured image. Thus, in the ideal case, the colored shadows in the single image directly provide the depth edges and their orientations. We present an algorithm that extracts depth edges from a single image by analyzing the colored shadows. We also present a more robust depth edge extraction algorithm that uses an additional image, captured after rotating the hue circle by \(180^\circ \), to compensate for scene textures and ambient light. We compare our approach with conventional methods on various scenes, using a camera prototype consisting of a standard camera and 8 color LEDs. We also demonstrate a bin-picking system with the camera prototype mounted on a robot arm.
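The neutral-mix property the method relies on is easy to verify numerically. The sketch below assumes an 8-LED ring (matching the prototype's LED count, but otherwise illustrative): averaging fully saturated colors around the complete hue circle gives a neutral grey, while averaging only half of the circle, as happens in a shadow region, leaves a tint:

```python
import colorsys
import numpy as np

# 8 LEDs evenly spaced on the hue circle, full saturation and value.
hues = np.arange(8) / 8.0
rgb = np.array([colorsys.hsv_to_rgb(h, 1.0, 1.0) for h in hues])

full_mix = rgb.mean(axis=0)     # all lights visible -> equal R, G, B (neutral)
half_mix = rgb[:4].mean(axis=0) # half the circle occluded -> tinted shadow
```

The tint of `half_mix` depends on which half of the circle is occluded, which is exactly how the colored shadow encodes the depth-edge orientation.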

5.
Several techniques have been developed for recovering the reflectance properties of real surfaces under unknown illumination. In most cases, however, those techniques assume that the light sources are located at infinity, an assumption that cannot safely be applied to, for example, reflectance modeling of indoor environments. In this paper, we propose two methods to estimate the surface reflectance property of an object, as well as the position of a light source, from a single view without the distant illumination assumption, thus relaxing the conditions of previous methods. Given a real image and a 3D geometric model of an object with specular reflection as inputs, the first method estimates the light source position by fitting the Lambertian diffuse component while separating the specular and diffuse components with an iterative relaxation scheme. Our second method extends the first by taking as input a specular component image, acquired by analyzing multiple polarization images taken from a single view, thereby removing the constraints on the diffuse reflectance property. This method simultaneously recovers the reflectance properties and the light source positions by optimizing the linearity of a log-transformed Torrance-Sparrow model. Having estimated the object's reflectance property and the light source position, we can freely generate synthetic images of the target object under arbitrary lighting conditions, modifying not only the source direction but also the source-surface distance. Experimental results show the accuracy of our estimation framework.
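The log-linearity exploited by the second method can be demonstrated on the simplified Torrance-Sparrow specular term I_s = (k_s / cos θ_r) · exp(-α² / 2σ²): after the log transform, log(I_s · cos θ_r) is a straight line in α², so its parameters fall out of a line fit. The sketch below uses illustrative parameter values and omits the distance and polarization aspects of the actual method:

```python
import numpy as np

k_s, sigma = 0.6, 0.15                 # illustrative specular parameters
theta_r = 0.3                          # viewing angle (held fixed here)
alpha = np.linspace(0.0, 0.4, 20)      # angle between normal and half-vector

# Simplified Torrance-Sparrow specular intensity.
i_spec = k_s / np.cos(theta_r) * np.exp(-alpha**2 / (2 * sigma**2))

# Log transform: log(I * cos(theta_r)) = log(k_s) - alpha^2 / (2 sigma^2),
# i.e. linear in alpha^2 -- the linearity the method optimizes.
y = np.log(i_spec * np.cos(theta_r))
slope, intercept = np.polyfit(alpha**2, y, 1)
sigma_est = np.sqrt(-1.0 / (2.0 * slope))
k_est = np.exp(intercept)
```

With noiseless synthetic data, the line fit recovers the surface roughness and specular coefficient exactly; with real data, the degree of linearity serves as the optimization criterion.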

6.
Acquiring linear subspaces for face recognition under variable lighting
Previous work has demonstrated that the image variation of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces, even when there are multiple light sources and shadowing. Basis images spanning this space are usually obtained in one of three ways: a large set of images of the object under different lighting conditions is acquired, and principal component analysis (PCA) is used to estimate a subspace; alternatively, synthetic images are rendered from a 3D model (perhaps reconstructed from images) under point sources and, again, PCA is used to estimate a subspace; finally, images rendered from a 3D model under diffuse lighting based on spherical harmonics are used directly as basis images. In this paper, we show how to arrange physical lighting so that the acquired images of each object can be used directly as the basis vectors of a low-dimensional linear space, and that this subspace is close to those acquired by the other methods. More specifically, there exist configurations of k point light source directions, with k typically ranging from 5 to 9, such that, by taking k images of an object under these single sources, the resulting subspace is an effective representation for recognition under a wide range of lighting conditions. Since the subspace is generated directly from real images, potentially complex and/or brittle intermediate steps such as 3D reconstruction can be completely avoided; nor is it necessary to acquire large numbers of training images or to physically construct complex diffuse (harmonic) light fields. We validate the use of subspaces constructed in this fashion within the context of face recognition.
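The rank argument behind such low-dimensional subspaces can be checked numerically: in the shadow-free Lambertian case, every image under a distant light is a linear function of the light direction, so the image stack has rank at most 3 (the 5-9 dimensional subspaces above account for attached shadows and near-diffuse effects). A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: each "image" (64 pixels) of a convex Lambertian object under a
# distant light s is  I = B @ s  for a fixed 64x3 matrix B of albedo-scaled
# normals. Attached shadows (the max(0, .) clamp) are ignored in this sketch,
# which is what keeps the rank exactly 3.
B = rng.standard_normal((64, 3))
lights = rng.standard_normal((20, 3))
images = np.stack([B @ s for s in lights])      # 20 lighting conditions

s_vals = np.linalg.svd(images, compute_uv=False)
rank = int(np.sum(s_vals > 1e-10 * s_vals[0]))
```

This is why PCA on many acquired images, or a handful of carefully chosen single-source images as in the paper, can both span essentially the same low-dimensional space.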

7.
In this paper, we present a complete framework for recovering an object's shape and estimating its reflectance properties and light sources from a set of images. The whole process is performed automatically. We use the shape-from-silhouette approach proposed by R. Szeliski (1993), combined with image pixels, to reconstruct a triangular mesh with the marching cubes algorithm. A classification process identifies regions of the object having the same appearance. For each region, a single point or directional light source is detected; to this end, we use specular lobes, Lambertian regions of the surface, or specular highlights seen in the images. An identification method jointly (i) decides which light sources are actually significant and (ii) estimates the diffuse and specular coefficients for a surface represented by the modified Phong model (Lewis, 1994). To validate the efficiency of our algorithm, we present a case study with various objects, light sources, and surface properties. As the results show, our system proves accurate even for images of real objects obtained with an inexpensive acquisition system.

8.
Many high-level image processing tasks require an estimate of the positions, directions and relative intensities of the light sources that illuminated the depicted scene. In image-based rendering, augmented reality and computer vision, such tasks include matching image contents based on illumination, inserting rendered synthetic objects into a natural image, intrinsic images, shape from shading and image relighting. Yet accurate and robust illumination estimation, particularly from a single image, is a highly ill-posed problem. In this paper, we present a new method to estimate the illumination in a single image as a combination of achromatic lights with their 3D directions and relative intensities. In contrast to previous methods, we base our azimuth angle estimation on curve fitting and recursive refinement of the number of light sources. Similarly, we present a novel surface normal approximation using an osculating arc for the estimation of zenith angles. By means of a new data set of ground-truth data and images, we demonstrate that our approach produces more robust and accurate results, and show its versatility through novel applications such as image compositing and analysis.

9.
To address the lack of a controllable light-source driving and monitoring system for the emerging class of structurally simple light-actuated microgrippers, this paper develops a PC-based light-source driving and monitoring system. Using a CCD (charge-coupled device) microscope camera for observation, a VC++ host program commands an STM32 microcontroller, which outputs a controllable PWM (pulse-width modulation) signal to the light-source driver circuit; this adjusts the illumination duration and intensity and thereby drives and controls the gripper's grasping action. The system avoids the drawbacks of conventional microgrippers, which mostly use electric or magnetic actuation and therefore require relatively complex mechanical transmissions to deliver motion to the end effector, complicating gripper design and manufacture. Light-source tests, gripper open/close experiments, and an experiment in which the gripper grasps fish eggs verify the effectiveness of the system.

10.
A new approach to resolution enhancement of an integral-imaging (II) three-dimensional display using multi-directional elemental images is proposed. The proposed method uses a special lens made up of nine pieces of a single Fresnel lens, collected from different parts of the same lens. This composite lens is placed in front of the lens array so that it delivers nine sets of directional elemental images to the lens array. These elemental images overlap on the lens array and produce nine point light sources per elemental lens, at different positions in the focal plane of the lens array. The nine sets of elemental images are projected by a high-speed digital micromirror device and are tilted by a two-dimensional scanning-mirror system, maintaining the time-multiplexing sequence for the nine pieces of the composite lens. In this method, the concentration of point light sources in the focal plane of the lens array is nine times higher, i.e., the distance between two adjacent point light sources is three times smaller than in a conventional II display; hence, the resolution of the three-dimensional image is enhanced.

11.
Image retrieval using multiple evidence ranking
The World Wide Web is the largest publicly available image repository and thus a natural target for image search. An immediate consequence is that searching for images on the Web has become a common and important task. The most direct approach is keyword-based searching; however, since images on the Web are poorly labeled, directly applying standard keyword-based image searching techniques frequently yields poor results. We propose a comprehensive solution to this problem. In our approach, multiple sources of evidence related to the images are considered. To combine these distinct sources of evidence, we introduce an image retrieval model based on Bayesian belief networks. To evaluate our approach, we perform experiments on a reference collection of 54,000 Web images. Our results indicate that retrieval using the text passages surrounding an image is as effective as standard retrieval based on HTML tags. This is an interesting result, because current Web image search engines usually do not take text passages into consideration. Most importantly, according to our results, combining information derived from text passages with information derived from HTML tags improves retrieval, with relative gains in average precision of roughly 50 percent compared with using each source of evidence in isolation.
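As a hedged illustration of why combining evidence helps (the paper's Bayesian belief network is more elaborate than this), a noisy-OR node, a common combination rule in such networks, already shows that two moderately reliable, independent sources outscore either one alone:

```python
def noisy_or(*probs):
    """Noisy-OR combination of independent evidence sources: the document
    fails to be supported only if every source fails to support it."""
    p_none = 1.0
    for q in probs:
        p_none *= 1.0 - q
    return 1.0 - p_none

# Hypothetical per-source relevance beliefs for one image document.
p_tags, p_passage = 0.4, 0.5
combined = noisy_or(p_tags, p_passage)
```

Here `combined` exceeds both inputs, mirroring the reported gain of combining HTML-tag and text-passage evidence over using either source in isolation.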

12.
Light painting is an art form in which a light source is moved during a long-exposure shot, creating trails that resemble brush strokes on a canvas. It is very difficult to perform, because the light source must be moved at the intended speed and along a precise trajectory; additionally, images can be corrupted by the person moving the light. We propose computational light painting, which avoids such artifacts and is easy to use. Taking a video of the moving light as input, a virtual exposure allows us to draw the intended light positions in a post-process. We support animation as well as 3D light sculpting, with high-quality results.

13.
A method is proposed for constructing realistic 3D head models from a geometric model and multi-view color images. The input color images are used not only to build a panoramic texture map but also as the data source for hairstyle reconstruction. First, virtual image planes corresponding to the real image planes are defined in model space, and the 3D positions of hair-outline points seen in the different views are recovered; the hairstyle is then rebuilt on the scalp surface of the model from the hair-root positions, growth directions, and lengths. An alternative hairstyle-reconstruction method starts from 2D hairstyle contour curves extracted from multiple images, recovers their 3D positions by stereo matching, and then reconstructs the hair surface with Coons patches. Experimental results demonstrate the head-reconstruction quality of the proposed methods.

14.
Wu Chen, Cao Li, Qin Yu, Wu Miaomiao, Gu Zhaoguang. 图学学报 (Journal of Graphics), 2022, 43(6): 1080-1087
With progress in biology and in the simulation of nano-scale electronic devices, atomic structures play a crucial role in modern science and technology. The intricate detail of atomic structures makes the rendered result highly sensitive to the light-source position, which complicates the rendering of atomic models. This paper therefore proposes a reference-image-based rendering method for atomic models: the lighting parameters of a reference image are estimated and then used to render the model. First, POV-Ray scripts batch-render models under varying light-source positions, and the position parameters together with the corresponding rendered images form a training dataset. Next, a light-source estimation network is built on a residual-network backbone, with an attention mechanism embedded to improve accuracy; the network is trained on the dataset to regress the light-position parameters. Finally, the trained convolutional neural network estimates the rendering parameters of a reference image, and these parameters are used to render the target model. Experimental results show that the network-predicted parameters deviate only marginally from the true lighting parameters and are highly reliable.

15.
Most active optical range sensors record, simultaneously with the range image, the amount of light reflected at each measured surface location; this information forms what is called a range intensity image, also known as a reflectance image. This paper proposes a method that uses this type of image to correct the color information of a textured 3D model. The color information is usually obtained from images acquired with a digital camera under uncontrolled lighting conditions, so it may not be accurate. The illumination condition for the range intensity image, on the other hand, is known, since it is obtained from a controlled lighting and observation configuration, as required for active optical range measurement. The paper describes a method for combining the two sources of information so as to compensate the color images for their unknown illumination: a reference range intensity image is first obtained by considering factors such as sensor properties and the distance and relative orientation of the measured surface; the color image of the corresponding surface portion is then corrected using this reference range intensity image. A B-spline interpolation technique is applied to reduce the noise of the range intensity images. Finally, a method for estimating the illumination color is applied to compensate for the light source color. Experiments show the effectiveness of the correction method using range intensity images.

16.
This paper proposes a novel and general method of glare generation based on wave optics. A glare image is regarded as the result of Fraunhofer diffraction, which is equivalent to a 2D Fourier transform of the image of the given apertures or obstacles. In conventional methods, the shapes of glare images are categorized according to their source apertures, such as pupils and eyelashes, and their basic shapes (e.g. halos, coronas, or radial streaks) are manually generated as templates, mainly based on statistical observation; realistic variations of these basic shapes often depend on the use of random numbers. Our proposed method computes glare images fully automatically from aperture images and can be applied universally to all kinds of apertures, including camera diaphragms. It can handle dynamic changes in the position of the aperture relative to the light source, which enables subtle movement or rotation of glare streaks. Spectra can also be simulated in the glare, since the intensity of diffraction depends on the wavelength of light. The resulting glare image is superimposed onto a given computer-generated image containing high-intensity light sources or reflections, aligning the center of the glare image with the high-intensity areas. Our method is implemented as multipass rendering software. By precomputing the dynamic glare image set and placing it in texture memory, the software runs at an interactive rate.
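The core computation, the Fraunhofer pattern as the squared magnitude of the aperture's 2D Fourier transform, can be sketched directly with an FFT (monochromatic and unscaled; the paper additionally handles spectra and dynamic aperture motion):

```python
import numpy as np

# Binary aperture image: a centered circular pupil on a dark background.
n = 128
y, x = np.mgrid[-n // 2 : n // 2, -n // 2 : n // 2]
aperture = (x**2 + y**2 < 12**2).astype(float)

# Fraunhofer diffraction pattern ~ |2D Fourier transform of the aperture|^2.
# ifftshift centers the aperture for the FFT; fftshift puts DC at the middle.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
glare = np.abs(field) ** 2
glare /= glare.max()            # normalized intensity; the peak sits at DC
```

For a circular pupil this produces the familiar Airy-like rings; replacing `aperture` with a polygonal diaphragm or eyelash mask yields streaks and coronas, exactly as the abstract describes.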

17.
We present a fuzzy-gain filter for target tracking in a stressful environment, where a target may accelerate at nonuniform rates and complete sharp turns within a short time, may be missing from successive scans even during the turns, and may have its positions detected erroneously. The proposed tracker incorporates fuzzy logic into a conventional α-β filter through a set of fuzzy if-then rules: given the error and the change of error in the last prediction, these rules determine the magnitudes of α and β. The tracker has the advantage that it requires no statistical models of process and measurement noise or of target dynamics; furthermore, it needs no maneuver detector, even when tracking maneuvering targets. The performance of the fuzzy tracker is evaluated on real radar tracking data generated from F-18 and other fighters, collected jointly by the defense departments of Canada and the United States. Compared with a conventional tracking algorithm based on a two-stage Kalman filter, its performance is found to be better both in prediction accuracy and in the ability to minimize the number of track losses.
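For reference, one predict/correct step of a plain α-β filter looks as follows; in this sketch the gains are fixed, whereas the proposed tracker's fuzzy rules adapt α and β from the innovation. The scenario and gain values below are illustrative:

```python
def alpha_beta_step(x, v, z, dt, alpha, beta):
    """One predict/correct step of an alpha-beta tracker.

    x, v : previous position and velocity estimates
    z    : new position measurement
    """
    x_pred = x + v * dt          # constant-velocity prediction
    r = z - x_pred               # innovation (prediction error)
    x_new = x_pred + alpha * r   # position correction
    v_new = v + (beta / dt) * r  # velocity correction
    return x_new, v_new

# Track a target moving at a constant 2 m/s, starting from a wrong estimate.
x, v = 0.0, 0.0
for k in range(1, 50):
    z = 2.0 * k * 0.1            # noiseless measurement at t = k * dt
    x, v = alpha_beta_step(x, v, z, dt=0.1, alpha=0.5, beta=0.2)
```

With fixed gains inside the stability region, the estimates converge to the true ramp; the fuzzy rules in the paper raise the gains when the innovation grows (a maneuver) and lower them in quiet stretches, removing the need for a separate maneuver detector.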

18.
This paper is about automatically reconstructing the full 3D surface of an object observed in motion by a single static camera. Based on two paradigms, structure from motion and linear intensity subspaces, we introduce the geotensity constraint, which governs the relationship between four or more images of a moving object. We show that it is possible in theory to solve for 3D Lambertian surface structure in the case of a single point light source, and propose that a solution exists for an arbitrary number of point light sources. The surface may or may not be textured. We then give an example of automatic surface reconstruction of a face under a point light source, using arbitrary unknown object motion and a single fixed camera.

19.
The needle-map recovery algorithm based on strong data-closeness constraints is among the more successful shape-from-shading (SFS) algorithms proposed in recent years. However, the initial needle map it produces under oblique (non-vertical) illumination carries large errors, and the algorithm cannot guarantee that the surface normals have a solution, let alone a unique one. To address these problems, an improved SFS algorithm is proposed. Starting from an analysis of the relationship between the image gradient map and the needle map under oblique illumination, the algorithm first detects the locally brightest points in the image; it then estimates the positions of the corresponding local surface maxima from the image irradiance equation, adjusts the gradient directions accordingly, and assembles a system of equations; finally, dedicated handling is provided for each of the possible solution cases of that system. The improved algorithm works equally well under vertical and oblique illumination, broadening the applicability of the strongly constrained SFS algorithm. Experiments on synthetic and real images show that the improved algorithm yields initial needle maps and initial heights closer to the true surface than the original strongly constrained algorithm.
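The image irradiance equation underlying such shape-from-shading methods, for a Lambertian surface, is E = max(0, n·s): a point appears brightest when its normal aligns with the light direction, which is the cue for locating local surface maxima under oblique illumination. A minimal sketch (the light direction below is an arbitrary oblique example):

```python
import numpy as np

def lambertian_irradiance(normal, light):
    """Lambertian image irradiance equation: E = max(0, n . s),
    with both vectors normalized to unit length."""
    n = normal / np.linalg.norm(normal)
    s = light / np.linalg.norm(light)
    return max(0.0, float(n @ s))

# An oblique (non-vertical) light direction.
s = np.array([0.3, 0.2, 0.9])
e_aligned = lambertian_irradiance(s, s)                      # normal parallel to light
e_oblique = lambertian_irradiance(np.array([0.0, 0.0, 1.0]), s)  # upward-facing normal
```

Under vertical light the brightest points coincide with the surface's highest points; under oblique light they shift toward points whose normals face the source, which is why the improved algorithm re-estimates the local maxima rather than reading them off the image directly.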

20.
Helmholtz stereopsis is a powerful technique for reconstructing scenes with arbitrary reflectance properties. However, previous formulations have been limited to static objects, due to the requirement to sequentially capture reciprocal image pairs (i.e. two images with the camera and light source positions mutually interchanged). In this paper, we propose colour Helmholtz stereopsis, a novel framework for Helmholtz stereopsis based on wavelength multiplexing. To address the new set of challenges introduced by multispectral data acquisition, the proposed pipeline uniquely combines a tailored photometric calibration for multiple camera/light-source pairs, a novel procedure for spatio-temporal surface chromaticity calibration, and a state-of-the-art Bayesian formulation necessary for accurate reconstruction from a minimal number of reciprocal pairs. In this framework, reflectance is spatially unconstrained, both in its chromaticity and in the directional component dependent on the illumination incidence and viewing angles. The proposed approach for the first time enables modelling of dynamic scenes with arbitrary unknown and spatially varying reflectance, using a practical acquisition set-up consisting of a small number of cameras and light sources. Experimental results demonstrate the accuracy and flexibility of the technique on a variety of static and dynamic scenes with arbitrary unknown BRDF and with chromaticity ranging from uniform to arbitrary and spatially varying.
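The reciprocity property at the heart of Helmholtz stereopsis can be verified numerically: for a reciprocal image pair, a vector built from the two intensities and viewing directions is orthogonal to the surface normal for any reciprocal BRDF, which is what makes the constraint reflectance-independent. The geometry and the symmetric BRDF below are made-up values for the check:

```python
import numpy as np

def reciprocal_pair_intensities(p, n, o_l, o_r, f):
    """Intensities of surface point p (unit normal n) for a reciprocal pair.

    f(a, b) is an arbitrary reciprocal BRDF: f(a, b) == f(b, a).
    i_l: image with the light at o_l (camera at o_r), and vice versa.
    """
    v_l = (o_l - p) / np.linalg.norm(o_l - p)
    v_r = (o_r - p) / np.linalg.norm(o_r - p)
    i_l = f(v_r, v_l) * (n @ v_l) / np.linalg.norm(o_l - p) ** 2
    i_r = f(v_l, v_r) * (n @ v_r) / np.linalg.norm(o_r - p) ** 2
    return i_l, i_r, v_l, v_r

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
o_l = np.array([-1.0, 0.2, 2.0])
o_r = np.array([1.5, -0.3, 1.8])
f = lambda a, b: 0.4 + 0.3 * abs(a @ b)        # arbitrary symmetric BRDF

i_l, i_r, v_l, v_r = reciprocal_pair_intensities(p, n, o_l, o_r, f)
w = (i_l * v_r / np.linalg.norm(o_r - p) ** 2
     - i_r * v_l / np.linalg.norm(o_l - p) ** 2)
residual = abs(w @ n)                          # vanishes for the true normal
```

The residual is zero regardless of the BRDF chosen, as long as it is symmetric in its arguments; colour Helmholtz stereopsis acquires the reciprocal measurements simultaneously via wavelength multiplexing instead of sequentially.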
