Similar Documents
20 similar documents found.
1.
This article focuses on real‐time image correction techniques that enable projector‐camera systems to display images onto screens that are not optimized for projections, such as geometrically complex, coloured and textured surfaces. It reviews hardware‐accelerated methods like pixel‐precise geometric warping, radiometric compensation, multi‐focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained. Novel attempts in super‐resolution, high‐dynamic range and high‐speed projection are discussed. These techniques open a variety of new applications for projection displays. Some of them will also be presented in this report.

2.
Existing research on photometric compensation has mainly targeted single-projector setups, enabling projection onto textured and colored surfaces. In multi-projector systems, however, photometric inconsistency in the overlap regions and the larger surface area covered by each camera pixel can easily cause photometric compensation to fail. This paper introduces a camera-based photometric compensation system for multiple projectors. While it also works by acquiring the environment parameters, it proposes a fast and convenient iterative scheme that improves the accuracy of the estimated surface reflectance, achieving better results while requiring only two or more captured images of the projection area. The blending of the projectors' overlap regions is also improved, solving photometric compensation and projection tiling in a unified way. Experimental results demonstrate the improvement.
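The simplified per-pixel model underlying such compensation can be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: the function names, the linear model captured = reflectance × projected + ambient, and the single refinement step are all assumptions.

```python
import numpy as np

def estimate_reflectance(capture_black, capture_white):
    # With a full-white input of 1, reflectance ~ white capture minus ambient.
    return np.clip(capture_white - capture_black, 1e-3, None)

def compensate(desired, reflectance, ambient):
    # Invert the simplified model: captured = reflectance * projected + ambient.
    return np.clip((desired - ambient) / reflectance, 0.0, 1.0)

def refine(reflectance, projected, captured, ambient, gain=0.5):
    # One iteration: nudge the reflectance estimate toward the value that
    # would explain the captured image, improving the next compensation.
    observed = (captured - ambient) / np.clip(projected, 1e-3, None)
    return (1 - gain) * reflectance + gain * np.clip(observed, 1e-3, None)

# Synthetic surface and the two calibration captures (projector off / full white).
ambient = np.full((4, 4), 0.05)
true_refl = np.full((4, 4), 0.5)
capture_black = ambient.copy()
capture_white = true_refl * 1.0 + ambient

refl = estimate_reflectance(capture_black, capture_white)
desired = np.full((4, 4), 0.3)
proj = compensate(desired, refl, ambient)
captured = true_refl * proj + ambient       # what the camera would now see
refined = refine(refl, proj, captured, ambient)
```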

3.
We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior art in two respects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB‐D camera, without the need for a multi‐camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB‐D image sequence, and geometric warping fields are found using a state‐of‐the‐art non‐rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi‐scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a handy configuration to capture a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment caused by imperfect estimation of warping fields and inaccurate camera parameters.

4.
In this paper we study the problem of "visual echo" in a full-duplex projector-camera system for tele-collaboration applications. Visual echo is defined as the appearance of projected contents observed by the camera. It can potentially saturate the projected contents, similar to audio echo in a telephone conversation. Our approach to visual echo cancellation includes an off-line calibration procedure that records the geometric and photometric transfer between the projector and the camera in a look-up table. During run-time, projected contents in the captured video are identified using the calibration information and suppressed, thereby achieving the goal of canceling visual echo. Our approach can accurately handle full-color images under arbitrary reflectance of the display surface and arbitrary photometric response of the projector or camera. It is robust to geometric registration errors and quantization effects, and is therefore particularly effective for high-frequency contents such as text and hand drawings. We demonstrate the effectiveness of our approach with a variety of real images in a full-duplex projector-camera system.
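A toy version of the look-up-table idea might look like this. The linear camera response encoded in `lut` and all names are illustrative assumptions, not the paper's calibration data:

```python
import numpy as np

# Offline calibration (assumed): for each projector input level, record the
# level the camera observes. Here a synthetic gain + offset response is used.
levels = np.arange(256)
lut = np.clip(0.8 * levels + 10, 0, 255)

def predict_echo(projected):
    # The expected appearance of the projected content in the camera image.
    return lut[projected]

def cancel_echo(captured, projected):
    # Suppress the predicted contribution of the projected content, leaving
    # only the local content (e.g., hand drawings on the surface).
    return np.clip(captured.astype(float) - predict_echo(projected), 0, 255)

projected = np.array([[0, 100], [200, 255]])
local_content = np.array([[5.0, 0.0], [0.0, 20.0]])
captured = np.clip(lut[projected] + local_content, 0, 255)
recovered = cancel_echo(captured, projected)
```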

5.
Standard camera and projector calibration techniques use a checkerboard that is manually shown at different poses to determine the calibration parameters. Furthermore, when geometric image correction must be performed on a three‐dimensional (3D) surface, as in projection mapping, the surface geometry must be determined. Camera calibration and 3D surface estimation can be costly, error prone, and time‐consuming when performed manually. To address this issue, we use an auto‐calibration technique that projects a series of Gray code structured light patterns. These patterns are captured by the camera to build a dense pixel correspondence between the projector and camera, which is used to calibrate the stereo system via an objective function that embeds the calibration parameters together with the undistorted points. Minimization is carried out by a greedy algorithm that minimizes the cost at each iteration with respect to both the calibration parameters and the noisy image points. We test the auto‐calibration on different scenes and show that the results closely match a manual calibration of the system. We show that this technique can be used to build a 3D model of the scene, which in turn, with the dense pixel correspondence, can be used for geometric screen correction on any arbitrary surface.
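The Gray-code correspondence step can be sketched as follows, a minimal toy for one projector row; the function names are hypothetical:

```python
import numpy as np

def gray_code(n):
    # Binary-reflected Gray code: adjacent columns differ in one bit,
    # which makes decoding robust to blur at stripe boundaries.
    return n ^ (n >> 1)

def gray_decode(g):
    # Invert the Gray code back to a plain column index.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def make_patterns(width, bits):
    # One row per pattern; pattern b shows bit b (MSB first) of each
    # column's Gray code.
    codes = gray_code(np.arange(width))
    return np.array([(codes >> b) & 1 for b in range(bits - 1, -1, -1)])

width, bits = 8, 3
patterns = make_patterns(width, bits)
# A camera pixel that sees projector column c observes this bit sequence:
c = 5
observed = patterns[:, c]
code = int("".join(map(str, observed)), 2)
decoded = gray_decode(code)
```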

6.
Video projectors are designed to project onto flat white diffuse screens. Over the last few years, projector-based systems have been used, in virtual reality applications, to light non-specific environments such as the walls of a room. However, in these situations, the images seen by the user are affected by several radiometric disturbances, such as interreflection. Radiometric compensation methods have been proposed to reduce the disturbance caused by interreflection, but nothing has been proposed for evaluating the phenomenon itself and the effectiveness of compensation methods. In this paper, we propose a radiosity-based method to simulate light transfer in immersive environments, from a projector to a camera (the camera gives the image a user would see in a real room). This enables us to evaluate the disturbances resulting from interreflection. We also consider the effectiveness of interreflection compensation and study the influence of several parameters (projected image, projection onto a small or large part of the room, reflectivity of the walls). Our results show that radiometric compensation can reduce the influence of interreflection but is severely limited if we project onto a large part of the walls around the user, or if all the walls are bright.

7.
Spectral reflectance is an intrinsic characteristic of objects that is independent of the illumination and of the imaging sensors used. This direct representation of objects is useful for various computer vision tasks, such as color constancy and material discrimination. In this work, we present a novel system for spectral reflectance recovery with high temporal resolution by exploiting the unique color-forming mechanism of digital light processing (DLP) projectors. DLP projectors use color wheels, which are composed of a number of color segments and rotate quickly to produce the desired colors. Making effective use of this mechanism, we show that a DLP projector can be used as a light source with spectrally distinct illuminations when the appearance of a scene under the projector's irradiation is captured with a high-speed camera. Based on the measurements, the spectral reflectance of scene points can be recovered using a linear approximation of the surface reflectance. Our imaging system is built from off-the-shelf devices, and is capable of taking multi-spectral measurements as fast as 100 Hz. We carefully evaluated the accuracy of our system and demonstrated its effectiveness by spectral relighting of static as well as dynamic scenes containing different objects.
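The linear-model inversion at the core of such recovery can be sketched as follows. All spectra below are synthetic placeholders, not the paper's measured data:

```python
import numpy as np

# The scene is lit by a few spectrally distinct illuminants (standing in for
# the DLP color segments); per-pixel measurements are inverted for the
# coefficients of a low-dimensional linear reflectance model.
wl = np.linspace(400, 700, 31)                      # wavelength samples (nm)
basis = np.stack([np.ones_like(wl),                 # assumed reflectance basis
                  (wl - 550) / 150,
                  ((wl - 550) / 150) ** 2])
illum = np.stack([np.exp(-((wl - c) / 40) ** 2)     # three colored illuminants
                  for c in (450, 550, 650)])

true_coeff = np.array([0.5, 0.2, -0.1])
true_refl = true_coeff @ basis                      # ground-truth reflectance
meas = illum @ true_refl                            # one measurement per illuminant

A = illum @ basis.T                                 # measurement matrix
coeff, *_ = np.linalg.lstsq(A, meas, rcond=None)    # least-squares inversion
recovered = coeff @ basis
```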

8.
A major issue when setting up multi‐projector tiled displays is the spatial non‐uniformity of color across the display area. The chromatic properties vary not only between two different projectors, but also between different spatial locations within the display area of a single projector. A new method for calibrating the colors of a tiled display is presented. First, an iterative algorithm constructs a correction table that makes the luminance uniform over the projected area of a single projector. This so‐called intra‐projector calibration uses a standard camera as a luminance measuring device and can be run in parallel for all projectors. Once the color inside each projector is spatially uniform, the set of displayable colors, the color gamut, of each projector is measured. On the basis of these measurements, the goal of the inter‐projector calibration is to find an optimal gamut shared by all the projectors. We show how to find the optimal color gamut displayable by n projectors in O(n) time, and derive the color conversion from each projector's gamut to the common global gamut. The method was experimentally validated on a tiled display consisting of 48 projectors with large chrominance shifts.
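The intra-projector step can be illustrated with a toy multiplicative update; the display model and update rule here are assumptions for illustration only:

```python
import numpy as np

def measure(correction, falloff):
    # Assumed display model: observed luminance = correction * native falloff.
    return correction * falloff

# Synthetic spatial luminance falloff of one projector (e.g., hot-spotting).
falloff = np.array([1.0, 0.9, 0.7, 0.85])
target = falloff.min()                  # a uniform level every pixel can reach

correction = np.ones_like(falloff)      # the correction table being built
for _ in range(50):
    observed = measure(correction, falloff)
    # Multiplicative update toward the target luminance at every pixel.
    correction *= target / observed
observed = measure(correction, falloff)
```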

9.
Spectral Monte‐Carlo methods are currently the most powerful techniques for simulating light transport with wavelength‐dependent phenomena (e.g., dispersion, colored particle scattering, or diffraction gratings). Compared to trichromatic rendering, sampling the spectral domain requires significantly more samples for noise‐free images. Inspired by gradient‐domain rendering, which estimates image gradients, we propose spectral gradient sampling to estimate the gradients of the spectral distribution inside a pixel. These gradients can be sampled with a significantly lower variance by carefully correlating the path samples of a pixel in the spectral domain, and we introduce a mapping function that shifts paths with wavelength‐dependent interactions. We compute the result of each pixel by integrating the estimated gradients over the spectral domain using a one‐dimensional screened Poisson reconstruction. Our method improves convergence and reduces chromatic noise from spectral sampling, as demonstrated by our implementation within a conventional path tracer.
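The 1D screened Poisson reconstruction used to integrate the gradients can be sketched as a dense linear solve on a toy signal; the regularization weight `alpha` is an assumed parameter:

```python
import numpy as np

def screened_poisson_1d(f0, g, alpha):
    # Solve argmin_f  alpha * ||f - f0||^2 + ||D f - g||^2 in closed form,
    # where D is the forward finite-difference operator. f0 is the noisy
    # primal estimate, g the estimated gradients.
    n = len(f0)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = alpha * np.eye(n) + D.T @ D
    b = alpha * f0 + D.T @ g
    return np.linalg.solve(A, b)

# With exact primal and gradient estimates the solver must return the signal.
f_true = np.array([0.0, 1.0, 3.0, 2.0])
g = np.diff(f_true)
f = screened_poisson_1d(f_true.copy(), g, alpha=0.1)
```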

10.
In this paper, we describe a novel multifocal projection concept that applies conventional video projectors and camera feedback. Multiple projectors with differently adjusted focal planes but overlapping image areas are used. They can either be positioned differently in the environment or be integrated into a single projection unit. The defocus created on an arbitrary surface is estimated automatically for each projector pixel. Once this is known, a final image with minimal defocus can be composed in real-time from the individual pixel contributions of all projectors. Our technique is independent of the surface's geometry, color and texture, the environment light, as well as of the projectors' position, orientation, luminance, and chrominance.
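The per-pixel composition step can be sketched as a simple argmin over defocus estimates. The data below are toy values; the paper's defocus estimation itself is not shown:

```python
import numpy as np

# Estimated blur radius per projector per pixel (2 projectors, 2x2 pixels).
defocus = np.array([
    [[1.0, 3.0], [4.0, 2.0]],            # projector 0
    [[2.0, 1.0], [1.0, 5.0]],            # projector 1
])

best = np.argmin(defocus, axis=0)        # sharpest projector per pixel
masks = np.stack([(best == p).astype(float) for p in range(defocus.shape[0])])

# Each projector displays image * its mask; the optical sum on the surface
# is the final, minimally defocused image.
image = np.array([[0.5, 0.6], [0.7, 0.8]])
contributions = masks * image
final = contributions.sum(axis=0)
```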

11.
In projector‐camera systems, object recognition is essential to enable users to interact with physical objects. Among the input features used by object classifiers, color information is widely used because it is easily obtainable. However, the color of an object seen by the camera changes under the projected light from the projector, which degrades recognition performance. To solve this problem, we propose a method to restore the original color of an object from the color observed by the camera. The color refinement method is based on a deep neural network. The inputs to the network are the color of the projector light as well as the observed color of the object in multiple color spaces, including RGB, HSV, HSI, and HSL. The network is trained in a supervised manner. Through a number of experiments, we show that our refinement method reduces the difference from the original color and improves the object recognition rate across a number of classification methods.

12.
We generalize N‐rooks, jittered, and (correlated) multi‐jittered sampling to higher dimensions by importing and improving upon a class of techniques called orthogonal arrays from the statistics literature. Renderers typically combine or "pad" a collection of lower‐dimensional (e.g. 2D and 1D) stratified patterns to form higher‐dimensional samples for integration. This maintains stratification in the original dimension pairs, but loses it for all other dimension pairs. For truly multi‐dimensional integrands like those in rendering, this increases variance and deteriorates the rate of convergence to that of pure random sampling. Care must therefore be taken to assign the primary dimension pairs to the dimensions with the most integrand variation, which complicates implementations. We tackle this problem by developing a collection of practical, in‐place multi‐dimensional sample generation routines that stratify points on all t‐dimensional and 1‐dimensional projections simultaneously. For instance, when t=2, any 2D projection of our samples is a (correlated) multi‐jittered point set. This property not only reduces variance, but also simplifies implementations, since sample dimensions can now be assigned to integrand dimensions arbitrarily while maintaining the same level of stratification. Our techniques reduce variance compared to traditional 2D padding approaches like PBRT's (0,2) and Stratified samplers, and provide quality nearly equal to state‐of‐the‐art QMC samplers like Sobol and Halton while avoiding their structured artifacts, commonly seen when a single sample set is used to cover an entire image. While in this work we focus on constructing finite sampling point sets, we also discuss potential avenues for extending our work to progressive sequences (more suitable for incremental rendering) in the future.
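The 2D multi-jittered construction that this work generalizes can be sketched as follows. This is a standard textbook construction, not the paper's orthogonal-array code:

```python
import random

def multi_jittered(m, n, rng):
    # N = m*n samples, stratified on the coarse m x n grid AND on the N fine
    # strata of each 1D axis projection, via shuffled sub-cell offsets.
    px = [rng.sample(range(n), n) for _ in range(m)]  # sub-column perm per row
    py = [rng.sample(range(m), m) for _ in range(n)]  # sub-row perm per column
    pts = []
    for i in range(m):
        for j in range(n):
            x = (i + (px[i][j] + rng.random()) / n) / m
            y = (j + (py[j][i] + rng.random()) / m) / n
            pts.append((x, y))
    return pts

rng = random.Random(7)
m, n = 4, 3
pts = multi_jittered(m, n, rng)
# Indices of the coarse cell and fine axis strata each sample falls in:
coarse = sorted((int(x * m), int(y * n)) for x, y in pts)
fine_x = sorted(int(x * m * n) for x, y in pts)
fine_y = sorted(int(y * m * n) for x, y in pts)
```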

13.
An appearance model for materials adhered with massive collections of special effect pigments has to take both high‐frequency spatial details (e.g., glints) and wave‐optical effects (e.g., iridescence) due to thin‐film interference into account. However, either phenomenon is challenging to characterize and simulate in a physically accurate way. Capturing these fascinating effects in a unified framework is even harder as the normal distribution function and the reflectance term are highly correlated and cannot be treated separately. In this paper, we propose a multi‐scale BRDF model for reproducing the main visual effects generated by the discrete assembly of special effect pigments, enabling a smooth transition from fine‐scale surface details to large‐scale iridescent patterns. We demonstrate that the wavelength‐dependent reflectance inside the pixel's footprint follows a Gaussian distribution according to the central limit theorem, and is closely related to the distribution of the thin‐film's thickness. We efficiently determine the mean and the variance of this Gaussian distribution for each pixel whose closed‐form expressions can be derived by assuming that the thin‐film's thickness is uniformly distributed. To validate its effectiveness, the proposed model is compared against some previous methods and photographs of actual materials. Furthermore, since our method does not require any scene‐dependent precomputation, the distribution of thickness is allowed to be spatially‐varying.

14.
Generating photo‐realistic images through Monte Carlo rendering requires efficient representation of light–surface interaction and techniques for importance sampling. Various models with good representation abilities have been developed, but only a few of them come with an importance sampling procedure. In this paper, we propose a method that provides both a good bidirectional reflectance distribution function (BRDF) representation and an efficient importance sampling procedure. Our method is based on representing the BRDF as a function of tensor products. Four‐dimensional measured BRDF tensor data are factorized using Tucker decomposition. A large data set is used to compare the proposed BRDF model with a number of well‐known BRDF models. It is shown that the underlying model provides a good approximation to measured BRDFs.
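A Tucker factorization can be computed with a higher-order SVD, sketched below on random data. This is plain HOSVD; the paper's BRDF parameterization and sampling procedure are not reproduced:

```python
import numpy as np

def unfold(T, mode):
    # Flatten tensor T along the given mode (mode becomes the row axis).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Multiply tensor T by matrix M along the given mode.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd(T, ranks):
    # Higher-order SVD: each mode's left singular vectors become that mode's
    # factor matrix; the core is T projected onto all factors.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        core = mode_dot(core, U.T, mode)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 6, 4))                  # stand-in for BRDF data
core, factors = hosvd(T, ranks=(5, 6, 4))           # full ranks: exact
recon = core
for mode, U in enumerate(factors):
    recon = mode_dot(recon, U, mode)
```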

15.
Visualization of 4D Vector Field Topology
In this paper, we present an approach to the topological analysis of four‐dimensional vector fields. In analogy to traditional 2D and 3D vector field topology, we provide a classification and visual representation of critical points, together with a technique for extracting their invariant manifolds. For effective exploration of the resulting four‐dimensional structures, we present a 4D camera that provides concise representation by exploiting projection degeneracies, and a 4D clipping approach that avoids self‐intersection in the 3D projection. We exemplify the properties and the utility of our approach using specific synthetic cases.
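The classification of critical points from the Jacobian's eigenvalues can be sketched as below. This is the generic dynamical-systems recipe; the paper's finer 4D taxonomy is not reproduced:

```python
import numpy as np

def classify(jacobian, eps=1e-9):
    # At a zero of the field, the signs of the real parts of the Jacobian's
    # eigenvalues give the dimensions of the stable and unstable invariant
    # manifolds: a saddle has both, a source or sink only one.
    ev = np.linalg.eigvals(jacobian)
    unstable = int(np.sum(ev.real > eps))   # dim. of unstable manifold
    stable = int(np.sum(ev.real < -eps))    # dim. of stable manifold
    if stable == len(ev):
        return "sink", stable, unstable
    if unstable == len(ev):
        return "source", stable, unstable
    return "saddle", stable, unstable

kind_sink = classify(-np.eye(4))                     # all eigenvalues -1
kind_saddle = classify(np.diag([1.0, -1.0, 2.0, -3.0]))
```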

16.
In this paper we present a novel technique for easily calibrating multiple casually aligned projectors on spherical domes using a single uncalibrated camera. Using the prior knowledge that the display surface is a dome, we can estimate the camera intrinsic and extrinsic parameters and the projector-to-display-surface correspondences automatically from a set of images, comprising an image of the dome itself and a projected pattern from each projector. Using these correspondences we can register images from the multiple projectors on the dome. Further, we can register displays that are not entirely visible in a single camera view using multiple panned and tilted views of an uncalibrated camera, making our method suitable for displays of different sizes and resolutions. We can register images from any arbitrary viewpoint, making it appropriate for a single head‐tracked user in a 3D visualization system. We can also use several cartographic mapping techniques to register images in a manner that is appropriate for multi‐user visualization. Domes are known to produce a tremendous sense of immersion and presence in visualization systems. Yet, to date, there has been no easy way to register multiple projectors on a dome to create high‐resolution, realistic visualizations. To the best of our knowledge, this is the first method that achieves accurate geometric registration of multiple projectors on a dome simply and automatically using a single uncalibrated camera.

17.
Spectral color reproduction overcomes some inherent problems of colorimetric reproduction. An implementation of a spectral display for surface color reproduction, capable of reproducing a desired spectrum for each pixel, based on multi‐primary projection technology, is presented. A light source with a spectrum identical to that of the illumination is filtered by a positive linear combination of several color filters, which reproduces the reflectance spectra. The spectra of the color filters are tailored to span the space of possible surface spectra. Various methods for choosing the color filters vis‐à‐vis the required performance are discussed in detail. A soft‐proofing application is examined as a test case for the concept.
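The filter-mixing step can be illustrated as follows. Ordinary least squares is used on synthetic spectra whose target lies in the filters' nonnegative span; a real system would enforce nonnegativity explicitly, e.g. with NNLS:

```python
import numpy as np

# Synthetic filter transmission spectra (Gaussians, stand-ins for the
# tailored color filters) over the visible range.
wl = np.linspace(400, 700, 31)
filters = np.stack([np.exp(-((wl - c) / 50) ** 2) for c in (440, 540, 640)])

# A reflectance spectrum to reproduce, built from nonnegative mixing weights.
true_w = np.array([0.3, 0.6, 0.2])
target = true_w @ filters

# Recover the filter weights that best reproduce the target spectrum.
w, *_ = np.linalg.lstsq(filters.T, target, rcond=None)
approx = w @ filters
```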

18.
郑作勇  姚莉  姚婷婷  马利庄 《软件学报》2006,17(Z1):176-183
The three-dimensional shape of an object is acquired using an ordinary LCD projector and a digital camera. The projector serves as the light source, casting line structured light onto the object's surface, and the shape is recovered from the sequence of photographs taken by the camera. A method is proposed for calibrating the projector given already-calibrated camera parameters. In addition, for objects whose surface colors or textures are relatively complex, an improved scheme that projects three-color line structured light in a single pass overcomes the difficulty of recovering the shape of such objects. Experimental results show that the method yields a dense point cloud suitable for further reconstruction, and that it is well suited to building an inexpensive scanning system of reasonable accuracy.

19.
Text is a crucial component of 3‐D environments and virtual worlds for user interfaces and wayfinding. Implementing text using standard antialiased texture mapping leads to blurry and illegible writing which hinders usability and navigation. While super‐sampling removes some of these artifacts, distracting artifacts can still impede legibility, especially for recent high‐resolution head‐mounted displays. We propose an analytic antialiasing technique that efficiently computes the coverage of text glyphs, over pixel footprints, designed to run at real‐time rates. It decomposes glyphs into piecewise‐biquadratics and trapezoids that can be quickly area‐integrated over a pixel footprint to provide crisp legible antialiased text, even when mapped onto an arbitrary surface in a 3‐D virtual environment.

20.
Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. Using the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image is reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These will lead to clipping errors and to visible artifacts on the surface. In this article, we present an innovative algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real-time.
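One simple instance of such content adjustment, global intensity scaling to avoid clipping, can be sketched as below. The paper's actual algorithm is more sophisticated and perceptually driven; this is only a toy:

```python
import numpy as np

# Per-pixel surface reflectance and the image the viewer should perceive.
reflectance = np.array([[0.9, 0.5], [0.4, 1.0]])
desired = np.array([[0.8, 0.45], [0.48, 0.9]])

# Naive compensation (desired / reflectance) can exceed the projector's
# maximum output of 1, producing clipping artifacts on dark surface regions.
naive = desired / reflectance

# Largest global scale factor that keeps the compensation image in range,
# preserving as much luminance and contrast as possible.
scale = min(1.0, 1.0 / naive.max())
adjusted = scale * desired
compensation = adjusted / reflectance
achieved = compensation * reflectance   # what the camera would observe
```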


Copyright©北京勤云科技发展有限公司  京ICP备09084417号