Similar Documents
20 similar documents found.
1.
The current 3D crosstalk equation was defined from the characteristics of glasses-type 3D displays. It is not suitable for multi-view 3D displays with a large number of views, as it yields inappropriately large values. In a glasses-type 3D display, double images occur at large depths; in a multi-view 3D display with many views, blur instead widens as depth increases. Hence, the blur phenomenon of multi-view 3D displays was investigated to understand their unique characteristics. For this purpose, ray-tracing software was used to simulate the 3D display image seen at the designed viewing distance, to calculate the relative luminance distribution, and to quantify the relation between blur and depth. The calculated results showed that incomplete image separation caused the overlap of multiple view images and hence the blur. The blur edge width (BEW) was proportional to the horizontal disparity and related to the depth. A new quantity, BEWR = (BEW) / (binocular disparity), was defined, and its usefulness for 3D characterization was investigated. BEW and BEWR may be useful as new measurement items for characterizing multi-view 3D displays with regard to 3D crosstalk.
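As a rough illustration of the two quantities, here is a minimal sketch of how BEW and BEWR might be computed from a simulated luminance edge profile. This is not the authors' code; the 10%-90% edge-width convention is an assumption.

```python
import numpy as np

def blur_edge_width(x, luminance, lo=0.1, hi=0.9):
    """Blur edge width: distance over which a normalized, monotonically
    rising edge profile goes from `lo` to `hi`. The 10%-90% thresholds
    are an assumed convention, not taken from the paper."""
    l = (luminance - luminance.min()) / (luminance.max() - luminance.min())
    x_lo = x[np.argmax(l >= lo)]   # first sample crossing the low threshold
    x_hi = x[np.argmax(l >= hi)]   # first sample crossing the high threshold
    return abs(x_hi - x_lo)

def bewr(bew, binocular_disparity):
    """BEWR = BEW / (binocular disparity), as defined in the abstract."""
    return bew / binocular_disparity
```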

2.
Abstract— To estimate the qualified viewing spaces of two- and multi-view autostereoscopic displays, the relationship between image quality (image comfort, annoying ghost images, depth perception) and various pairings of 3-D cross-talk in the left and right views is studied subjectively, using a two-view autostereoscopic display and test charts for the left and right views with ghost images due to artificial 3-D cross-talk. The artificial 3-D cross-talk was tuned to simulate the view in the intermediate zone between the viewing spaces. It was shown that stereoscopic images on a two-view autostereoscopic display cause discomfort when observed by an eye in the intermediate zone between the viewing spaces. This is because the ghost image caused by the large 3-D cross-talk in the intermediate zone elicits a depth percept different from the depth induced by the original left- and right-view images, so the observer's depth perception is confused. Image comfort is also shown to be better for multi-view displays, especially when the width of the viewing space is narrower than the interpupillary distance, so that the parallax of the cross-talking image is small.

3.
Because existing electronic insect specimens are too uniform and cannot be observed clearly from multiple angles, host-computer software is used to make a microcontroller emit pulse signals, so that a servo motor rotates the insect specimen while a macro camera autofocuses and photographs it automatically through 360°, building a high-definition raw image library. Through thumbnail panoramic display, dynamic loading of high-definition images, and capturing and handling of user messages, the system achieves multi-angle 3D viewing of insect specimen image models. The system offers batch specimen acquisition, real-time 3D observation, and high-definition display of detail, providing data support for insect teaching and identification and thereby meeting users' needs.
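The rotate-focus-shoot loop could look roughly like the sketch below. The serial port, baud rate, step angle, and the ROTATE/FOCUS/SHOOT commands are all hypothetical placeholders, since the abstract does not specify the microcontroller protocol; the pyserial package is assumed.

```python
import serial
import time

NUM_SHOTS = 120                      # assumed: one frame every 3 degrees
STEP_DEG = 360 // NUM_SHOTS

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # hypothetical port/baud
for i in range(NUM_SHOTS):
    ser.write(b"ROTATE %d\n" % STEP_DEG)  # hypothetical: pulse servo one step
    time.sleep(0.5)                       # let the rotation stage settle
    ser.write(b"FOCUS\n")                 # hypothetical: trigger macro autofocus
    ser.write(b"SHOOT %d\n" % i)          # hypothetical: capture frame i
ser.close()
```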

4.
In this paper we present a novel technique for easily calibrating multiple casually aligned projectors on spherical domes using a single uncalibrated camera. Using the prior knowledge that the display surface is a dome, we can estimate the camera intrinsic and extrinsic parameters and the projector-to-display-surface correspondences automatically from a set of images. These images include the image of the dome itself and a projected pattern from each projector. Using these correspondences we can register images from the multiple projectors on the dome. Further, we can register displays which are not entirely visible in a single camera view by using multiple panned and tilted views of an uncalibrated camera, making our method suitable for displays of different sizes and resolutions. We can register images from any arbitrary viewpoint, making the method appropriate for a single head-tracked user in a 3D visualization system. We can also use several cartographic mapping techniques to register images in a manner appropriate for multi-user visualization. Domes are known to produce a tremendous sense of immersion and presence in visualization systems. Yet, to date, there has been no easy way to register multiple projectors on a dome to create high-resolution, realistic visualizations. To the best of our knowledge, this is the first method that achieves accurate geometric registration of multiple projectors on a dome simply and automatically using a single uncalibrated camera.

5.
Abstract— Although there are numerous types of floating-image display systems that can project three-dimensional (3-D) images into real space through a convex lens or a concave mirror, most of them provide the observer with only one image plane in space and therefore lack a feeling of depth. In order to enhance the real 3-D feeling of floating images, a multi-plane floating display is required. In this paper, a novel two-plane electro-floating display system using 3-D integral images is proposed. One plane, for the object image, is provided by an electro-floating display system; the other plane, for the background image, is provided by a 3-D integral imaging system. Consequently, the proposed two-plane electro-floating display system, having a 3-D background, can provide floated images in front of background integral images, resulting in a different perspective for the observer. To show the usefulness of the proposed system, experiments were carried out and their results are presented. In addition, a prototype was implemented and successfully tested.

6.
Abstract— A depth-map estimation method that converts two-dimensional images into three-dimensional (3-D) images for multi-view autostereoscopic 3-D displays is presented. The proposed method uses the Scale-Invariant Feature Transform (SIFT) matching algorithm to create a sparse depth map. The image boundaries are labeled using the Sobel operator. A dense depth map is then obtained with the Zero-Mean Normalized Cross-Correlation (ZNCC) propagation matching method, constrained by the labeled boundaries. Finally, using depth rendering, the parallax images are generated and synthesized into a stereoscopic image for multi-view autostereoscopic 3-D displays. Experimental results show that this scheme achieves good performance on both parallax-image generation and multi-view autostereoscopic 3-D display.
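A minimal OpenCV sketch of the pipeline's first stages, under stated assumptions: the file names and edge threshold are placeholders, and the boundary-constrained ZNCC propagation itself is represented only by its scoring function.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed input pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# 1) Sparse depth from SIFT matches (horizontal disparity ~ inverse depth).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(left, None)
kp2, des2 = sift.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

sparse = {}
for m in matches:
    (x1, y1), (x2, _) = kp1[m.queryIdx].pt, kp2[m.trainIdx].pt
    sparse[(int(round(y1)), int(round(x1)))] = x1 - x2

# 2) Boundary labeling with the Sobel operator (threshold is an assumption).
grad = np.hypot(cv2.Sobel(left, cv2.CV_64F, 1, 0),
                cv2.Sobel(left, cv2.CV_64F, 0, 1))
boundaries = grad > 100

# 3) ZNCC score used while propagating depth, constrained by `boundaries`.
def zncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```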

7.
Abstract— Techniques for 3-D display have evolved from stereoscopic 3-D systems to multiview 3-D systems, which provide images corresponding to different viewpoints. New technology is now required for multiview display systems that generate virtual-view images of multiple viewpoints from input-source formats such as 2-D images. As the viewpoint changes, occluded regions of the original image become disoccluded, creating the problem of restoring output-image information that is not contained in the input image. In this paper, a method for generating multiview images through a two-step process is proposed: (1) depth-map refinement and (2) disoccluded-area estimation and restoration. The first step, depth-map processing, removes depth-map noise, compensates for mismatches between RGB and depth, and preserves boundaries and object shapes. The second step, disoccluded-area estimation and restoration, predicts the disoccluded area by using disparity and restores information about the area by using information from the neighboring frames most similar to the occluded area. Finally, multiview rendering generates virtual-view images with a directional rendering algorithm that uses boundary blending.
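A minimal sketch of the view-synthesis idea underlying step (2): each virtual view shifts pixels horizontally in proportion to disparity, and the holes left where background becomes disoccluded are exactly the regions the paper restores from neighboring frames. The forward-warping scheme and parameter names here are assumptions, not the authors' renderer.

```python
import numpy as np

def synthesize_view(rgb, disparity, alpha):
    """Forward-warp `rgb` by `alpha * disparity` pixels; alpha in [-1, 1]
    selects the virtual viewpoint. Unfilled pixels are disoccluded holes."""
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    hole = np.ones((h, w), dtype=bool)
    # Warp far-to-near so nearer pixels (larger disparity) win conflicts.
    order = np.argsort(disparity, axis=None)
    ys, xs = np.unravel_index(order, (h, w))
    xt = np.clip(np.round(xs + alpha * disparity[ys, xs]).astype(int), 0, w - 1)
    out[ys, xt] = rgb[ys, xs]
    hole[ys, xt] = False
    return out, hole   # `hole` marks the disoccluded area to be restored
```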

8.
We propose a novel method to handle thin structures in Image-Based Rendering (IBR), specifically structures supported by simple geometric shapes such as planes, cylinders, etc. These structures, e.g., railings, fences, and oven grills, are present in many man-made environments and are extremely challenging for multi-view 3D reconstruction, representing a major limitation of existing IBR methods. Our key insight is to exploit multi-view information. After a handful of user clicks to specify the supporting geometry, we compute multi-view and multi-layer alpha mattes to extract the thin structures. We use two multi-view terms in a graph-cut segmentation, the first based on multi-view foreground color prediction and the second ensuring multi-view consistency of labels. Because occlusion of the background can complicate reprojection-error calculation, we use multi-view median images and variance, with multiple layers of thin structures. Our end-to-end solution uses the multi-layer segmentation to create per-view mattes, and the median colors and variance to create a clean background. We introduce a new multi-pass IBR algorithm based on depth peeling to allow free-viewpoint navigation of multi-layer, semi-transparent thin structures. Our results show significant improvement in rendering quality for thin structures compared with previous image-based rendering solutions.

9.
A perspective image represents the spatial relationships of objects in a scene as they appear from a single viewpoint. In contrast, a multi-perspective image combines what is seen from several viewpoints into a single image. Despite their incongruity of view, effective multi-perspective images are able to preserve spatial coherence and can depict, within a single context, details of a scene that are simultaneously inaccessible from a single view, yet easily interpretable by a viewer. In computer vision, multi-perspective images have been used for analysing structure revealed via motion and for generating panoramic images with a wide field-of-view using mirrors. In this STAR, we provide a practical guide to multi-perspective modelling and rendering methods and to multi-perspective imaging systems. We start with a brief review of multi-perspective image techniques frequently employed by artists, such as the visual paradoxes of Escher, the Cubism of Picasso and Braque, and multi-perspective panoramas in cel animations. We then characterize existing multi-perspective camera models, with an emphasis on their underlying geometry and image properties, and demonstrate how to use these camera models to create specific multi-perspective rendering effects. Furthermore, we show that many of these cameras satisfy the multi-perspective stereo constraints, and we demonstrate several multi-perspective imaging systems for extracting 3D geometry for computer vision. We hope to provide enough fundamentals to satisfy the technical specialist without intimidating curious digital artists interested in multi-perspective images. The intended audience includes digital artists, photographers, and computer graphics and computer vision researchers using or building multi-perspective cameras; they will learn about multi-perspective modelling and rendering for generating compelling pictures, along with many real-world multi-perspective imaging systems for extracting 3D geometry.

10.
Abstract— This work concerns static volumetric crystals that emit light where two laser beams intersect within the crystal; the geometry of the crystal is optimized for linear slices. Most volumetric displays are instead based on rotating surfaces that generate the images, with the projected images sliced in a rotational sweep mode. To date, 3-D graphics engines for static-volume displays have not been fully developed. To use an advanced 3-D graphics engine designed for a swept-volume display (SVD) with a static-volume display, the display must emulate the operation of an SVD based on a rotational-slicing approach. The CSpace® 3-D display has the capability to render 3-D images using such an approach. This paper presents the development of a rotational-slicing approach designed to emulate the operation of an SVD within the image volume of a static-volume display. The display software has been modified to divide the 3-D image into 46 slices, each passing through the image center and rotated by a fixed angle from the previous slice. Reconstructed 3-D images were demonstrated using this rotational-slicing approach. Suggestions are provided for future implementations that could help eliminate the elongations and distortions that occur within certain slices.
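The slicing scheme can be illustrated with a few lines that generate the slice-plane normals: 46 vertical planes through the volume center, each rotated by a fixed angle from the previous one. This is a sketch of the geometry only, not the CSpace rendering code; it assumes the 46 slices span a half-turn, since a plane through the center repeats itself after 180°.

```python
import numpy as np

NUM_SLICES = 46
angles = np.arange(NUM_SLICES) * (np.pi / NUM_SLICES)  # fixed step of 180/46 deg

# Each slice is a vertical plane through the center; its normal lies in the
# horizontal plane, perpendicular to the slice direction.
normals = np.stack([np.cos(angles), np.sin(angles), np.zeros(NUM_SLICES)], axis=1)

def voxels_on_slice(points, center, normal, thickness=0.5):
    """Select the voxels within `thickness` of a slice plane (assumed test);
    `points` is an (N, 3) array of voxel coordinates."""
    return np.abs((points - center) @ normal) <= thickness
```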

11.
Abstract— A circular camera system is proposed that employs an image-based rendering technique to capture the light-ray data needed for reconstructing three-dimensional (3-D) images. Parallax rays are reconstructed from multiple images captured from viewpoints around a real object, so that a 3-D image of the object can be observed from multiple surrounding viewing points on a 3-D display. An interpolation algorithm that effectively reduces the number of component cameras in the system is also proposed. The interpolation and the experimental results, obtained on our previously proposed 3-D display system based on the reconstruction of parallax rays, are described. When the radius of the proposed circular camera array was 1100 mm, the central angle of the camera array was 40°, and the radius of the real 3-D object was between 60 and 100 mm, the proposed camera system, consisting of 14 cameras, could obtain sufficient 3-D light-ray data to reconstruct 3-D images on the 3-D display.
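From the figures in the abstract one can back out the camera spacing, assuming the 14 cameras are spaced uniformly with one at each end of the 40° arc (an assumption; the paper may place them differently):

```python
import math

radius_mm, arc_deg, n_cameras = 1100.0, 40.0, 14
pitch_deg = arc_deg / (n_cameras - 1)     # angular pitch between adjacent cameras
spacing_mm = 2 * radius_mm * math.sin(math.radians(pitch_deg) / 2)  # chord length
print(f"angular pitch {pitch_deg:.2f} deg, camera spacing {spacing_mm:.1f} mm")
# -> angular pitch 3.08 deg, camera spacing 59.1 mm
```

So the interpolation algorithm bridges roughly 59 mm gaps between adjacent viewpoints.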

12.
Abstract— The viewing freedom of a reduced-view super multi-view (SMV) display was analyzed. It was found that there are multiple separate viewing ranges in the depth direction; thus, a technique that selects an appropriate viewing range to increase the longitudinal viewing freedom has been developed. The pixels of a flat-panel display viewed by the viewer's eyes through a lenticular lens were determined from the three-dimensional (3-D) positions of the viewer's eyes, obtained using an eye-tracking system employing a stereo camera. Parallax images corresponding to the 3-D positions of the viewer's eyes were generated and displayed on the determined pixels. The experimental results show that the proposed technique successfully increased the longitudinal viewing freedom. It is also shown that a video camera was able to focus on the produced SMV images.
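The pixel-determination step can be sketched with simple ray geometry: the pixel seen through a given lenticular lens is found by extending the ray from the eye through the lens center to the pixel plane. This is a simplified pinhole approximation of the lens, assuming the lens-to-pixel gap and the tracked eye position are known; the paper's actual computation may differ.

```python
def pixel_seen(eye_x, eye_z, lens_x, gap):
    """x-coordinate on the pixel plane seen through a lens centered at lens_x
    by an eye at lateral position eye_x and distance eye_z from the lens sheet.
    Derived from similar triangles along the eye-to-lens ray."""
    return lens_x + (lens_x - eye_x) * gap / eye_z
```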

13.
We present a novel multi‐view, projective texture mapping technique. While previous multi‐view texturing approaches lead to blurring and ghosting artefacts if 3D geometry and/or camera calibration are imprecise, we propose a texturing algorithm that warps (“floats”) projected textures during run‐time to preserve crisp, detailed texture appearance. Our GPU implementation achieves interactive to real‐time frame rates. The method is very generally applicable and can be used in combination with many image‐based rendering methods or projective texturing applications. By using Floating Textures in conjunction with, e.g., visual hull rendering, light field rendering, or free‐viewpoint video, improved rendering results are obtained from fewer input images, less accurately calibrated cameras, and coarser 3D geometry proxies.

14.
Image-based rendering (IBR) techniques allow capture and display of 3D environments using photographs. Modern IBR pipelines reconstruct proxy geometry using multi-view stereo, reproject the photographs onto the proxy, and blend them to create novel views. The success of these methods depends on accurate 3D proxies, which are difficult to obtain for complex objects such as trees and cars. A large number of input images does not improve reconstruction proportionally: surface extraction is challenging even from dense range scans for scenes containing such objects. Our approach does not depend on dense, accurate geometric reconstruction; instead we compensate for sparse 3D information by variational image warping. In particular, we formulate silhouette-aware warps that preserve salient depth discontinuities. This improves the rendering of difficult foreground objects, even when deviating from view interpolation. We use a semi-automatic step to identify depth discontinuities and extract a sparse set of depth constraints used to guide the warp. Our framework is lightweight and yields good-quality IBR for previously challenging environments.

15.
Augmented reality (AR) display technology greatly enhances users' perception of, and interaction with, the real world by superimposing a computer-generated virtual scene on the real physical world. The main problem of state-of-the-art 3D AR head-mounted displays (HMDs) is the accommodation-vergence conflict, because the 2D images displayed by flat-panel devices are at a fixed distance from the eyes. In this paper, we present a design for an optical see-through HMD utilizing multi-plane display technology for AR applications. This approach provides correct depth information and solves the accommodation-vergence conflict. In our system, a projector projects slices of a 3D scene onto a stack of polymer-stabilized liquid-crystal scattering shutters in time sequence to reconstruct the 3D scene. The polymer-stabilized liquid-crystal shutters have sub-millisecond switching times, which enables a sufficient number of shutters for high depth resolution. A proof-of-concept two-plane optical see-through HMD prototype is demonstrated. Our design can be made lightweight and compact, with high resolution and a large depth range from near the eye to infinity, and thus holds great potential for fatigue-free AR HMDs.
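A quick budget check of why sub-millisecond shutter switching matters for the plane count. The 60 Hz volume rate, 0.5 ms switching time, and 1 ms per-plane projection time are assumed figures for illustration only; the paper demonstrates a two-plane prototype.

```python
volume_rate_hz = 60                     # assumed full-volume refresh rate
switch_ms = 0.5                         # assumed LC shutter switching time
project_ms = 1.0                        # assumed projection time per plane
frame_ms = 1000 / volume_rate_hz        # 16.7 ms available per volume
planes = int(frame_ms // (switch_ms + project_ms))
print(planes)                           # -> 11 planes fit per volume period
```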

16.
A camera-free 3D air-touch system is proposed. Hovering, air swiping, and 3D gestures for further interaction with the floated 3D images on a mobile display were demonstrated. By embedding multi-wavelength optical sensors into the display pixels and adding multi-wavelength angular-scanning illuminators at the edge of the display, the flat panel can sense images reflected by a bare finger at different heights. In addition, the three-axis (x, y, z) position of the reflected fingertip image can be calculated. Finally, the proposed 3D air-touch system was successfully demonstrated on a 4-inch mobile 3D display.

17.
Abstract— A high-resolution autostereoscopic 3-D projection display with a polarization-control, space-dividing, iris-plane liquid-crystal shutter is proposed. The polarization-control iris-plane shutter can control the direction of the stereo images without reducing the image quality of the microdisplay. This autostereoscopic 3-D projection display is 2-D/3-D switchable and has high resolution and high luminance. In addition, it has no cross-talk between the left and right viewing zones, a simple structure, and the capability to show multi-view images.

18.
We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth-map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground-fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth-ordering methods. A user study showed that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single-image depth recovery methods.

19.
3D integral (panoramic) imaging is an imaging technique that can record and display true 3D scenes. The technique uses a microlens array to record a spatial scene, so the depth information of any point in space can be obtained directly from a single exposure. This paper studies a method of obtaining the spatial information of objects directly from integral images. The method first extracts views from the integral image: a view is synthesized by extracting the point at the same local position under each microlens, so each view records the original object space as a parallel projection along one particular direction. Next, by analyzing the optical imaging process of the integral image, a depth equation is derived that describes the relationship between an object's depth and the parallax between its positions in the corresponding views; the depth of any point in space can thus be obtained from its parallax between corresponding views. Finally, the feasibility of the method is verified by using integral images to measure the thickness of a matchbox. The results can be applied to integral-image data processing itself, and are also expected to provide a theoretical basis for developing new depth-measurement tools.
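The depth equation itself is not reproduced in this summary; a standard form consistent with the parallel-projection geometry just described (a reconstruction under that assumption, not necessarily the paper's exact notation) is:

```latex
% Each extracted view is a parallel projection along a direction \theta_i,
% so a point at depth z appears laterally offset by z \tan\theta_i.
% The parallax d between two views therefore encodes depth directly:
\[
  d = z\,(\tan\theta_1 - \tan\theta_2)
  \qquad\Longrightarrow\qquad
  z = \frac{d}{\tan\theta_1 - \tan\theta_2}.
\]
```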

20.
The progression in the field of stereoscopic imaging has resulted in impressive 3D videos. This technology is now used for commercial and entertainment purposes, and sometimes even for medical applications. Until now, it has been impossible to produce quality anaglyph video using a single camera under different motion and atmospheric conditions while preserving the corresponding depth, local colour, and structural information. The proposed study challenges previous research by introducing a single-camera method for anaglyph reconstruction that concentrates on human visual perception, whereas previous methods used dual cameras, depth sensors, or multiple views, which not only demand long capture times but also suffer from photometric distortion due to variations in angular alignment. This study also yields clear individual images, without occlusion of one image by another. We use an approach based on human vision to determine the corresponding depth information. The source frames are shifted slightly in opposite directions, in accordance with the distance between the pupils. We integrate the colour components of the shifted frames to generate contrasting colours for each of the marginally shifted frames. The colour-component images are then reconstructed as a cyclopean image. We show the results of our method by applying it to rapidly varying video sequences and compare its performance with other existing methods.
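A minimal sketch of the shift-and-combine idea described above. The fixed shift amount and the red-cyan channel assignment are assumptions standing in for the paper's human-vision depth model.

```python
import numpy as np

def make_anaglyph(frame, shift_px=8):
    """Shift one copy of `frame` left and one right (a crude stand-in for the
    pupil-distance offset), then merge contrasting colour channels into a
    cyclopean anaglyph image. Assumes an (H, W, 3) RGB array."""
    left = np.roll(frame, -shift_px // 2, axis=1)   # toward the left eye
    right = np.roll(frame, shift_px // 2, axis=1)   # toward the right eye
    out = np.empty_like(frame)
    out[..., 0] = left[..., 0]       # red channel from the left-shifted frame
    out[..., 1:] = right[..., 1:]    # green/blue from the right-shifted frame
    return out
```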
