Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers at different viewpoints with a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.
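To make the display geometry concrete, the following minimal sketch (not the authors' calibration or rendering algorithm) shows how a lenslet-covered screen maps a desired ray to a panel pixel behind a microlens; the pitch, focal length, and pixel count are illustrative assumptions.

```python
import numpy as np

def ray_to_display_pixel(x, theta, pitch=1.0e-3, focal=3.0e-3, pixels_per_lens=16):
    """Map a target ray leaving the screen at position x [m] with angle theta
    [rad] to (lenslet index, sub-pixel index) on the panel behind the
    microlens array.  Pinhole-lenslet approximation, for illustration only."""
    lens_idx = int(round(x / pitch))                  # nearest lenslet centre
    # Treating the lenslet as a pinhole at its centre, a ray at angle theta
    # originates from the focal plane at an offset of -focal * tan(theta).
    offset = -focal * np.tan(theta)
    pixel_pitch = pitch / pixels_per_lens
    sub_pix = int(round(offset / pixel_pitch)) + pixels_per_lens // 2
    return lens_idx, int(np.clip(sub_pix, 0, pixels_per_lens - 1))

# Which sub-pixel reproduces a ray 10 degrees off-axis at x = 2.05 mm?
print(ray_to_display_pixel(2.05e-3, np.deg2rad(10)))
```

Rendering a multi-view image then amounts to evaluating such a mapping for every panel pixel, which is where the spatial coherence mentioned above can be exploited.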

2.
A dual-layered display, also called a tensor display, consists of two panels in a stack and can present full-parallax 3D images with high resolution and continuous motion parallax by reconstructing the corresponding light ray field within a viewing angle. The depth range over which 3D images can be displayed at reasonable resolution, however, is limited to the vicinity of the panel stack. In this paper, we propose a dual-layered display that can present stereoscopic images to multiple viewers located at arbitrary positions in the observer space with high resolution and a large depth range. Combined with a viewer-tracking system, the proposed method provides a practical way to realize a high-resolution, large-depth autostereoscopic 3D display for multiple observers without restriction on observer position or head orientation.
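As background for how two stacked panels can reproduce a light ray field, here is a minimal sketch of the multiplicative two-layer principle such displays build on: each ray's intensity is the product of the transmittances it meets on the front and rear panels, so a target light field arranged as a matrix is approximated by a rank-1 factorization. This is a generic illustration with an assumed indexing convention, not the viewer-tracked method proposed in the paper.

```python
import numpy as np

def rank1_layer_factorization(L, iters=50, eps=1e-8):
    """Given a target light-field matrix L[i, j] (intensity of the ray through
    front-layer pixel i and rear-layer pixel j), find non-negative layer
    transmittances f (front) and g (rear) with L ~= outer(f, g).
    Simple alternating least-squares sketch."""
    f = np.random.rand(L.shape[0]) + 0.5
    g = np.random.rand(L.shape[1]) + 0.5
    for _ in range(iters):
        f = np.clip((L @ g) / (g @ g + eps), 0, None)    # update front layer
        g = np.clip((L.T @ f) / (f @ f + eps), 0, None)  # update rear layer
    return f, g

# A separable (rank-1) light field is reproduced exactly by two layers.
f_true, g_true = np.linspace(0.2, 1.0, 8), np.linspace(1.0, 0.3, 8)
f, g = rank1_layer_factorization(np.outer(f_true, g_true))
print(np.allclose(np.outer(f, g), np.outer(f_true, g_true), atol=1e-4))
```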

3.
The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio‐angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.
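For orientation, basic rendering for a focused plenoptic camera can be sketched as cropping a patch from every microimage and tiling the patches; using one global patch size regardless of depth is exactly what produces the artifacts discussed above. The array shape and patch size below are illustrative assumptions.

```python
import numpy as np

def render_focused_plenoptic(microimages, patch_size):
    """Basic full-resolution rendering: crop the central patch from each
    microimage and tile the patches into the output image.  `microimages`
    has shape (rows, cols, m, m); patch_size should follow scene depth,
    so a single global value is only a crude approximation."""
    rows, cols, m, _ = microimages.shape
    lo = (m - patch_size) // 2
    patches = microimages[:, :, lo:lo + patch_size, lo:lo + patch_size]
    # (rows, cols, p, p) -> (rows * p, cols * p): place patches side by side.
    return patches.transpose(0, 2, 1, 3).reshape(rows * patch_size,
                                                 cols * patch_size)

rendered = render_focused_plenoptic(np.random.rand(10, 12, 16, 16), patch_size=6)
print(rendered.shape)   # (60, 72)
```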

4.
Our hybrid display model combines multiple automultiscopic elements volumetrically to support horizontal and vertical parallax with a larger depth of field and better accommodation cues than single-layer elements. In this paper, we introduce a framework to analyze the bandwidth of such display devices. Based on this analysis, we show that multiple layers can achieve a wider depth of field using less bandwidth than single-layer displays. We present a simple algorithm to distribute an input light field to multiple layers, and devise an efficient ray tracing algorithm for synthetic scenes. We demonstrate the effectiveness of our approach with both software simulation and two corresponding hardware prototypes.

5.
Abstract— A multi‐view depth‐fused 3‐D (DFD) display that provides smooth motion parallax over wide viewing angles is proposed. A conventional DFD display consists of a stack of two transparent emitting screens. It can produce motion parallax for small changes in observation angle, but its viewing zone is rather narrow because the images it presents split apart in inclined views. Multi‐view 3‐D displays, on the other hand, have a wide viewing angle, but their motion parallax is discrete, depending on the number of views they show. By applying a stacked structure to multi‐view 3‐D displays, a wide‐viewing‐angle 3‐D display with smooth motion parallax was fabricated. Experimental results confirmed the viewing‐zone connection of DFD displays, while calculations showed the feasibility of stacked multi‐view displays.
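The underlying depth-fused rule referred to above divides a point's luminance between the front and rear screens linearly in depth; here is a minimal sketch of that classic two-screen rule (variable names are illustrative).

```python
def dfd_luminances(depth, luminance, z_front, z_rear):
    """Depth-fused 3-D principle: a point perceived at `depth` between two
    stacked screens is drawn on both, with its luminance split linearly by
    depth.  The weights sum to 1, so the total luminance is preserved."""
    w_front = (z_rear - depth) / (z_rear - z_front)
    w_rear = 1.0 - w_front
    return w_front * luminance, w_rear * luminance

# A point midway between screens at 10 cm and 20 cm gets a 50/50 split.
print(dfd_luminances(depth=0.15, luminance=1.0, z_front=0.10, z_rear=0.20))
```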

6.
An approach to achieve a self-calibrating three-dimensional (3D) light field display is investigated in this paper. The proposed 3D light field display is constructed from spliced multi-LCDs, lens and diaphragm arrays, and a directional diffuser. The light field imaging principle, hardware configuration, diffuser characteristics, and image reconstruction simulation are described and analyzed, respectively. Beyond the light field imaging itself, a self-calibration method is proposed to improve the imaging performance. An image sensor captures calibration patterns projected onto and reflected by a polymer dispersed liquid crystal film that is attached to and shapes the diffuser. These calibration components are assembled with the display unit and can be switched between display mode and calibration mode. In calibration mode, the imperfect imaging relations of the optical components are captured and calibrated automatically. We demonstrate the design by implementing a prototype of the proposed 3D light field display using modified off-the-shelf products. The proposed approach meets practical requirements for scalable configuration, fast calibration, a large viewing angular range, and smooth motion parallax.
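One common ingredient of such camera-in-the-loop calibration is to estimate the mapping between the displayed pattern and its captured appearance and then pre-warp the content with the inverse of that mapping. The sketch below shows only the generic planar (homography) case with OpenCV; the optics described above (lens/diaphragm arrays, PDLC diffuser) require a richer per-component model, and the function and variable names here are assumptions.

```python
import cv2
import numpy as np

def calibration_prewarp(ideal_pts, observed_pts, frame, out_size):
    """Estimate the homography mapping ideal pattern positions to where they
    actually appear (both expressed in display coordinates), then pre-warp
    display content with the inverse so the observed image lands where
    intended."""
    H, _ = cv2.findHomography(np.float32(ideal_pts), np.float32(observed_pts),
                              cv2.RANSAC)
    H_inv = np.linalg.inv(H)                 # undo the measured distortion
    return cv2.warpPerspective(frame, H_inv, out_size)
```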

7.
Abstract— The luminance distribution of autostereoscopic 3‐D displays using the parallax‐barrier method was simulated by two different calculation methods. The first method directly calculates the total luminance distribution by summing the light rays coming from different positions on the imaging display through the parallax barrier at each eye position. The second method first calculates the angular distribution of light rays coming from the imaging display through the parallax barrier and then derives the spatial luminance distribution for each eye position. The two methods yield equivalent distributions, but the second method outperforms the first in terms of calculation speed and versatility.
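In the spirit of the first method, the sketch below sums, for one eye position, the luminance of every sub-pixel whose ray to the eye passes through a barrier slit; the 2-D geometry and all variable names are illustrative assumptions rather than the simulation described above.

```python
import numpy as np

def luminance_at_eye(eye_x, eye_z, subpix_x, subpix_lum,
                     slit_centers, slit_width, gap):
    """Direct summation over display sub-pixels for a single eye position.
    The display lies on the plane z = 0 and the barrier at z = gap; a
    sub-pixel contributes only if its ray to the eye crosses a slit."""
    slit_centers = np.asarray(slit_centers, dtype=float)
    total = 0.0
    for x, lum in zip(subpix_x, subpix_lum):
        # Lateral position of the ray (x, 0) -> (eye_x, eye_z) at the barrier.
        x_barrier = x + (eye_x - x) * gap / eye_z
        if np.any(np.abs(x_barrier - slit_centers) < slit_width / 2):
            total += lum
    return total

# Toy example: three sub-pixels, one slit in front of the middle one.
print(luminance_at_eye(eye_x=0.0, eye_z=500.0,
                       subpix_x=[-0.1, 0.0, 0.1], subpix_lum=[1.0, 1.0, 1.0],
                       slit_centers=[0.0], slit_width=0.05, gap=5.0))
# -> 1.0 (only the centre sub-pixel is visible through the slit)
```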

8.
We studied the stereoscopic effect obtained from a two-dimensional image without using binocular parallax, which we call "natural3D" (n3D). Unlike a parallax-based three-dimensional (3D) display system, n3D causes less fatigue and avoids both the halving of resolution due to image division and the dependence on viewing position. To make a display with these effects comfortable to use, we conducted statistical tests with sensory evaluation experiments and a quantitative evaluation based on physiological responses. These examinations revealed that the n3D effect can be obtained effectively by exploiting, for example, the characteristics of an organic light-emitting diode display, such as high contrast and easy bendability. Through statistical tests, this study identifies optimal display curvatures, for displays of different sizes, that enhance n3D and reduce fatigue. In addition, we performed an experiment with a frame called an n3D window (n3Dw) that is placed in front of the display so that a subject views the display through the opening of the frame. We found that combining a curved display with the n3Dw produces the n3D effect more effectively.

9.
The plenoptic function is a ray‐based model for light that includes the colour spectrum as well as spatial, temporal and directional variation. Although digital light sensors have evolved greatly in recent years, one fundamental limitation remains: all standard CCD and CMOS sensors integrate over the dimensions of the plenoptic function as they convert photons into electrons; in the process, all visual information is irreversibly lost, except for a two‐dimensional, spatially varying subset—the common photograph. In this state‐of‐the‐art report, we review approaches that optically encode the dimensions of the plenoptic function transcending those captured by traditional photography and reconstruct the recorded information computationally.
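For reference, the full plenoptic function described here is commonly written as a seven-dimensional function of position, direction, wavelength, and time (the exact notation varies by author); a conventional photograph retains only a 2-D spatial slice after integrating over the remaining dimensions.

```latex
P = P(x,\, y,\, z,\, \theta,\, \phi,\, \lambda,\, t)
% (x, y, z): viewing position   (\theta, \phi): ray direction
% \lambda: wavelength           t: time
```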

10.
Projection and its temporally or spatially varying derivative, parallax, are a common thread through my work in computer graphics, stereoscopic display, human vision, and, most recently, light‐field display and capture. In this talk I'll present insights, anecdotes, and epiphanies I've accumulated along the way.

11.
We present novel methods to enhance Computer Generated Holography (CGH) by introducing a complex‐valued, wave‐based occlusion handling method. This offers a very intuitive and efficient interface for introducing optical elements with physically‐based light interaction, exhibiting depth‐of‐field, diffraction, and glare effects. Furthermore, an efficient and flexible evaluation of lit objects on a full‐parallax hologram leads to more convincing images. Previous illumination methods for CGH are not able to change the illumination settings of rendered holograms. In this paper we propose a novel method for real‐time lighting of rendered holograms in order to change the appearance of a previously captured holographic scene. These functionalities are features of a larger wave‐based rendering framework which can be combined with 2D framebuffer graphics. We present an algorithm which uses graphics hardware to accelerate the rendering.
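To illustrate what wave-based occlusion handling means in practice, the sketch below uses the textbook angular-spectrum method: the complex field is propagated to an occluder plane, multiplied by a binary silhouette mask, and propagated onward. This is a generic formulation, not the paper's specific complex-valued method, and all parameters are assumptions.

```python
import numpy as np

def propagate_asm(field, wavelength, pitch, z):
    """Angular-spectrum propagation of a complex wavefield over distance z."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def propagate_with_occluder(field, mask, wavelength, pitch, z1, z2):
    """Propagate to the occluder plane, block the occluded region
    (mask: 1 = transparent, 0 = opaque), then continue to the hologram."""
    at_occluder = propagate_asm(field, wavelength, pitch, z1)
    return propagate_asm(at_occluder * mask, wavelength, pitch, z2)
```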

12.
We present a GPU-accelerated volume ray casting system that interactively drives a multi‐user light field display. The display, driven by a single programmable GPU, is based on a specially arranged array of projectors and a holographic screen and provides full horizontal parallax. The characteristics of the display are exploited to develop a specialized volume rendering technique able to give multiple freely moving naked‐eye viewers the illusion of seeing and manipulating virtual volumetric objects floating in the display workspace. In our approach, a GPU ray‐caster follows rays generated by a multiple‐center‐of‐projection technique while sampling pre‐filtered versions of the dataset at resolutions that match the varying spatial accuracy of the display. The method achieves interactive performance and provides rapid visual understanding of complex volumetric data sets even when depth‐oblivious compositing techniques are used.
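As a point of reference for the rendering core, here is a minimal emission-absorption ray caster with front-to-back compositing and early termination; in the system described above each ray's origin and direction would come from the multiple-center-of-projection model of the screen, whereas here they are plain arguments and the transfer function is a toy assumption.

```python
import numpy as np

def raycast_front_to_back(volume, origin, direction, step=0.5, max_steps=512):
    """March through a scalar volume, map samples to emission/opacity with a
    toy transfer function, and composite front to back."""
    colour, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        idx = np.round(pos).astype(int)
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                                   # left the volume
        sample = volume[tuple(idx)]                 # nearest-neighbour lookup
        a = np.clip(sample, 0.0, 1.0) * 0.05        # toy opacity transfer
        colour += (1.0 - alpha) * a * sample        # emission weighted by visibility
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                            # early ray termination
            break
        pos += step * d
    return colour, alpha

volume = np.random.rand(64, 64, 64)
print(raycast_front_to_back(volume, origin=(0, 32, 32), direction=(1, 0, 0)))
```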

13.
14.
Abstract— Although two‐view 3‐D displays requiring stereo glasses are on the market, the shape of the objects they present is distorted when the observer's head moves. This problem can be solved by using a (passive) multi‐view 3‐D display, because such a display can produce motion parallax. Another problem concerns the surface quality of the presented object, about which little is known as far as the fidelity of such displays goes. Previous work found that on a two‐view 3‐D display glossiness deteriorates when the observer's head moves, and that this can be alleviated with a head tracker, whose data enables the display to produce correct motion parallax and luminance changes as the viewer's head moves. Here, it was examined whether this problem can also be solved with commercially available multi‐view 3‐D displays, whose finite number of viewpoints and non-negligible cross‐talk, however, make the luminance changes inexact and smaller than they should be. It was found that such displays can alleviate the problem to a certain extent.

15.
We present an immaterial display that uses a generalized form of depth-fused 3D (DFD) rendering to create unencumbered 3D visuals. To accomplish this result, we demonstrate a DFD display simulator that extends the established depth-fused 3D principle by using screens in arbitrary configurations and from arbitrary viewpoints. The feasibility of the generalized DFD effect is established with a user study using the simulator. Based on these results, we developed a prototype display using one or two immaterial screens to create an unencumbered 3D visual that users can penetrate, examining the potential for direct walk-through and reach-through manipulation of the 3D scene. We evaluate the prototype system in formative and summative user studies and report the tolerance thresholds discovered for both tracking and projector errors.

16.
Abstract— A circular camera system that employs an image‐based rendering technique to capture the light‐ray data needed for reconstructing three‐dimensional (3‐D) images is proposed. Parallax rays are reconstructed from images captured at multiple viewpoints around a real object so that a 3‐D image of the object can be observed from multiple surrounding viewing points on a 3‐D display. An interpolation algorithm that is effective in reducing the number of component cameras in the system is also proposed. The interpolation method and experimental results obtained on our previously proposed 3‐D display system based on the reconstruction of parallax rays are described. When the radius of the proposed circular camera array was 1100 mm, the central angle of the camera array was 40°, and the radius of the real 3‐D object was between 60 and 100 mm, the proposed camera system, consisting of 14 cameras, could obtain sufficient 3‐D light‐ray data to reconstruct 3‐D images on the 3‐D display.
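To give a feel for the interpolation step, the sketch below synthesizes an intermediate view on the arc by linearly blending the two captured views that bracket the requested angle; this crude blend ignores depth and disparity correction, the 14-camera/40° numbers are taken from the prototype above, and everything else is an assumption.

```python
import numpy as np

def interpolate_view(camera_angles, camera_images, target_angle):
    """Blend the two captured views whose angular positions on the circular
    arc bracket `target_angle` (angles in degrees, assumed sorted)."""
    angles = np.asarray(camera_angles, dtype=float)
    i = int(np.clip(np.searchsorted(angles, target_angle), 1, len(angles) - 1))
    a0, a1 = angles[i - 1], angles[i]
    w = (target_angle - a0) / (a1 - a0)             # blending weight in [0, 1]
    return (1.0 - w) * camera_images[i - 1] + w * camera_images[i]

# 14 cameras spanning a 40-degree arc, as in the prototype described above.
angles = np.linspace(-20.0, 20.0, 14)
images = [np.full((4, 4), a) for a in angles]       # dummy "views"
print(interpolate_view(angles, images, target_angle=0.0).mean())   # ~0.0
```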

17.
We present an algorithm that estimates dense planar-parallax motion from multiple uncalibrated views of a 3D scene. This generalizes the "plane+parallax" recovery methods to more than two frames. The parallax motion of pixels across multiple frames (relative to a planar surface) is related to the 3D scene structure and the camera epipoles. The parallax field, the epipoles, and the 3D scene structure are estimated directly from image brightness variations across multiple frames, without precomputing correspondences.
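For context, the plane+parallax decomposition can be summarized as follows: after warping a view by the homography A induced by the reference plane, the residual displacement of each pixel is radial about the epipole, with a magnitude governed by the projective structure γ (height above the plane over depth). The per-frame scale factor involving the camera translation and the plane distance is omitted here, and the exact form varies with the chosen parameterization.

```latex
\mathbf{p}_w = A\,\mathbf{p}, \qquad
\mathbf{p}' - \mathbf{p}_w \;\propto\; \gamma \left(\mathbf{e}' - \mathbf{p}_w\right), \qquad
\gamma = \frac{H}{Z}
% p: pixel in the reference frame, p': its position in another frame,
% p_w: the plane-warped point, e': the epipole,
% H: height above the reference plane, Z: depth
```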

18.
This paper proposes a new image-based rendering (IBR) technique called the "concentric mosaic" for virtual reality applications. IBR using the plenoptic function is an efficient technique for rendering new views of a scene from a collection of previously captured sample images. It provides much better image quality and lower computational requirements for rendering than conventional three-dimensional (3-D) model-building approaches. The concentric mosaic is a 3-D plenoptic function with viewpoints constrained to a plane. Compared with more sophisticated four-dimensional plenoptic functions such as the light field and the lumigraph, the file size of a concentric mosaic is much smaller. In contrast to a panorama, the concentric mosaic allows users to move freely in a circular region and observe significant parallax and lighting changes without recovering geometric and photometric scene models. Rendering a concentric mosaic is very efficient and involves reordering and interpolating previously captured slit images. A concentric mosaic typically consists of hundreds of high-resolution images, which consume a significant amount of storage and bandwidth for transmission. An MPEG-like compression algorithm is therefore proposed in this paper, taking into account the access patterns and redundancy of the mosaic images. The compression algorithms for two equivalent representations of the concentric mosaic, namely the multiperspective panoramas and the normal setup sequence, are investigated. A multiresolution representation of concentric mosaics using a nonlinear filter bank is also proposed.
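To make the rendering step concrete, here is a 2-D (top-down) sketch of looking up a captured slit for a novel ray: every ray can be indexed by its direction angle and its signed perpendicular distance from the rig center, which is the natural key into a concentric mosaic. The indexing scheme and nearest-neighbour lookup below are simplifying assumptions; the method described above interpolates between neighbouring slits.

```python
import numpy as np

def ray_to_mosaic_coords(origin, direction):
    """Index a ray by (theta, d): direction angle and signed perpendicular
    distance from the rig centre at (0, 0).  Rays captured by a slit camera
    sweeping a circle of radius r all satisfy |d| <= r."""
    o = np.asarray(origin, dtype=float)
    u = np.asarray(direction, dtype=float)
    u /= np.linalg.norm(u)
    theta = np.arctan2(u[1], u[0])
    d = o[0] * u[1] - o[1] * u[0]        # z-component of o x u
    return theta, d

def render_slit(mosaic, origin, direction, radius):
    """Fetch the stored slit image nearest to the query ray.
    mosaic[i_theta, i_d] holds one captured slit (nearest-neighbour lookup)."""
    n_theta, n_d = mosaic.shape[:2]
    theta, d = ray_to_mosaic_coords(origin, direction)
    i_theta = int(round((theta % (2 * np.pi)) / (2 * np.pi) * n_theta)) % n_theta
    i_d = int(np.clip(round((d + radius) / (2 * radius) * (n_d - 1)), 0, n_d - 1))
    return mosaic[i_theta, i_d]
```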

19.
Recent work has shown that it is possible to register multiple projectors on non‐planar surfaces using a single uncalibrated camera instead of a calibrated stereo pair when dealing with a special class of non‐planar surfaces, namely vertically extruded surfaces. However, this requires the camera view to contain the entire display surface, which is often impossible for large displays, especially those common in visualization, edutainment, training, and simulation applications. In this paper we present a new method that achieves accurate geometric registration even when the field‐of‐view of the uncalibrated camera covers only part of the vertically extruded display at a time. We pan and tilt the camera from a single point and employ a multi‐view approach to register the projectors on the display. This allows the method to scale easily in terms of both camera resolution and display size. To the best of our knowledge, our method is the first to achieve scalable multi‐view geometric registration of large vertically extruded displays with a single uncalibrated camera. The method can also handle the different situation of multiple similarly oriented cameras at different locations, provided the camera focal length is known.

20.
Abstract— An integral imaging time‐division‐multiplexing 18‐view 3‐D display based on the one‐dimensional integral‐imaging (1‐D‐II) technique, using a 9‐in. OCB‐LCD, a lenticular sheet, and an active shutter, has been developed. By simulating the lens shape and shutter structure and analyzing the light‐beam profile in the region with an increased number of parallax views to find the best conditions, the depth range and viewing angle were enhanced, and a brighter, flicker‐free 3‐D image with smooth motion parallax was obtained.
