Similar Literature
20 similar documents retrieved.
1.
Floating three-dimensional (3D) display enables direct interaction between human hands and virtual 3D images, offering natural and effective augmented reality interaction. In this study, we propose a novel floating autostereoscopic display, combining a head-tracking lenticular display with an image projection system, to offer observers an accurate 3D image floating in midair without any optical elements between the observers and the virtual 3D image. Combined with a gesture recognition device, the proposed system can achieve in situ augmented reality interaction with the floating 3D image. A distortion correction method is developed to achieve 3D display with accurate spatial information, and a coordinate calibration method is designed to improve the accuracy of the in situ interaction. Experiments were performed to verify the feasibility of the proposed system, and the results show its potential for human-computer interaction in medicine and the life sciences.
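The abstract does not spell out the coordinate calibration step. As a minimal illustration only, one common approach is to collect paired points (fingertip positions reported by the gesture sensor against the known positions of floating calibration targets) and fit an affine map by least squares. The sketch below assumes exactly that setup; the calibration model and all names are hypothetical, not the authors' actual method.

```python
import numpy as np

def fit_affine_3d(sensor_pts, display_pts):
    """Least-squares affine map (A, t) with display ~= A @ sensor + t.

    sensor_pts, display_pts: (N, 3) arrays of paired calibration points, e.g.
    fingertip positions from the gesture sensor and the known positions of
    floating 3D calibration markers (hypothetical setup).
    """
    sensor_pts = np.asarray(sensor_pts, dtype=float)
    display_pts = np.asarray(display_pts, dtype=float)
    # Homogeneous coordinates [x y z 1] so the translation is solved jointly.
    X = np.hstack([sensor_pts, np.ones((len(sensor_pts), 1))])
    # Solve X @ M ~= display_pts for M (4x3) in the least-squares sense.
    M, *_ = np.linalg.lstsq(X, display_pts, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t

def sensor_to_display(p, A, t):
    """Map a single sensor-space point into display (floating-image) space."""
    return A @ np.asarray(p, dtype=float) + t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_A = np.array([[0.98, 0.02, 0.0], [-0.01, 1.01, 0.0], [0.0, 0.0, 0.97]])
    true_t = np.array([12.0, -4.0, 30.0])
    sensor = rng.uniform(-100, 100, size=(20, 3))
    display = sensor @ true_A.T + true_t + rng.normal(0, 0.5, size=(20, 3))
    A, t = fit_affine_3d(sensor, display)
    print("residual (mm):", np.linalg.norm(sensor_to_display(sensor[0], A, t) - display[0]))
```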

2.
Multiview autostereoscopic display is expected to become the mainstream three-dimensional display technology. It provides multiple viewpoints, multiple viewing zones, high resolution, and realistic display effects without requiring any auxiliary viewing devices. This paper explains the design principles of a multiview autostereoscopic display system based on multiple projectors and describes its software and hardware architecture in detail. An autostereoscopic display prototype was built on a projector array and a horizontally optically anisotropic display structure, and an automatic calibration system for the projector array was developed, which improves calibration accuracy and avoids the tedious calibration procedure caused by the large number of projectors. Experimental results show that the system can provide viewers with a realistic three-dimensional visual experience.

3.
To meet the need for realistic virtual experiments carried out by multiple users in different locations, a multiuser online virtual experiment system was built with the Kinect motion-sensing device and the Unity 3D engine. In this system, the virtual experiment scene is constructed in Unity 3D, experiments are assembled by importing equipment models created in 3D Max, and long-distance multiuser online operation is realized through network communication. For realism, body postures captured by the Kinect are used to control the first-person character in the virtual scene to walk, to grasp and operate the experimental equipment, and to select menus. Experimental results show that Kinect posture recognition is highly accurate and robust and is not easily affected by illumination conditions or complex backgrounds, and that the communication between the server and clients is stable enough for a remote virtual experiment system. The system has the advantages of low cost and strong realism.

4.
Large-scale autostereoscopic three-dimensional (3D) displays can give audiences a truly immersive experience with strong visual impact. However, traditional autostereoscopic 3D display systems are limited by the display hardware, making it difficult to directly achieve large-scale 3D displays with high resolution. Multiscreen splicing with laser backlights can be used for large-scale and ultrahigh-resolution 3D display, but it normally results in subscreen image asynchronization, view-zone errors, or obvious edge overlapping. To solve these problems, a distributed real-time rendering system for ultrahigh-resolution multiscreen 3D display is proposed. Fifteen 3D LCD display devices are driven through a host, cooperating with laser backlights, a lenticular lens array (LLA), and a directional diffuser to display high-resolution, high-frame-rate, large-size, and wide-viewing-angle 3D images. The resolution of the whole display system reaches 23,040 × 21,600. The rendering system provides a large-scale, real-time 3D scene image with ultrahigh-definition resolution at 40 frames per second and high quality.

5.
Virtual environments provide a whole new way of viewing and manipulating 3D data. Current technology moves the images out of desktop monitors and into the space immediately surrounding the user. Users can literally put their hands on the virtual objects. Unfortunately, techniques for interacting with such environments have yet to mature. Gloves and sensor-based trackers are unwieldy, constraining, and uncomfortable to use. A more natural and intuitive method of interaction would allow the user to grasp objects with their hands and manipulate them as if they were real objects. We are investigating the use of computer vision to implement a natural interface based on hand gestures. A framework for a gesture recognition system is introduced along with results of experiments in colour segmentation, feature extraction, and template matching for finger and hand tracking, and simple hand pose recognition. An implementation of a gesture interface for navigation and object manipulation in virtual environments is presented.
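As a rough illustration of the colour-segmentation stage mentioned above, the sketch below thresholds skin tones in HSV space with OpenCV and takes the largest contour as the hand candidate. The HSV range, morphology, and overall structure are illustrative assumptions, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def segment_hand(frame_bgr, hsv_lo=(0, 40, 60), hsv_hi=(25, 180, 255)):
    """Return a binary skin mask and the largest skin-coloured contour (or None)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    # Morphological clean-up to suppress speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea) if contours else None
    return mask, hand

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    if ok:
        mask, hand = segment_hand(frame)
        if hand is not None:
            x, y, w, h = cv2.boundingRect(hand)
            print("hand candidate bounding box:", x, y, w, h)
    cap.release()
```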

6.
Focusing on natural gesture interaction in virtual scenes, a dynamic fingertip-trajectory gesture recognition method based on Leap Motion is proposed. First, the coordinates generated by the fingertip as it moves through the scene are captured with the Leap Motion device and preprocessed. The start and end positions are then located in this coordinate sequence and the valid gesture trajectory is extracted. After trajectory optimization and a preliminary gesture classification, the similarity between the trajectory and the gesture templates is computed with a weighted Euclidean distance to obtain the recognition result, as sketched below. Experiments on 200 sets of gesture data show that the proposed method achieves a high recognition rate. Applying the method in a gesture interaction system enables interaction with virtual objects through natural gestures, making the interaction more enjoyable and improving the user experience.
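A minimal sketch of the weighted-Euclidean-distance template matching described above: resample each fingertip trajectory to a fixed number of points, compute a weighted point-wise distance against each template, and return the closest label. The resampling length and the uniform weights are assumptions; the paper's preprocessing and weighting scheme may differ.

```python
import numpy as np

def resample(traj, n=32):
    """Resample a (k, 3) fingertip trajectory to n evenly spaced points by arc length."""
    traj = np.asarray(traj, dtype=float)
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, s[-1], n)
    return np.stack([np.interp(u, s, traj[:, d]) for d in range(traj.shape[1])], axis=1)

def weighted_distance(a, b, weights=None):
    """Weighted Euclidean distance between two resampled trajectories of equal length."""
    if weights is None:
        weights = np.ones(len(a))
    d = np.linalg.norm(a - b, axis=1)
    return float(np.sum(weights * d) / np.sum(weights))

def classify(traj, templates, weights=None):
    """Return the label of the closest template, or None for an empty template set."""
    probe = resample(traj)
    best = min(((weighted_distance(probe, resample(t), weights), name)
                for name, t in templates.items()), default=None)
    return best[1] if best else None

if __name__ == "__main__":
    circle = [(np.cos(a), np.sin(a), 0.0) for a in np.linspace(0, 2 * np.pi, 50)]
    swipe = [(t, 0.0, 0.0) for t in np.linspace(0, 1, 50)]
    templates = {"circle": circle, "swipe": swipe}
    probe = [(1.1 * np.cos(a), np.sin(a), 0.0) for a in np.linspace(0, 2 * np.pi, 40)]
    print(classify(probe, templates))  # expected: "circle"
```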

7.
High-precision splicing and assembly of holographic lens plates is a key problem in building large-screen glasses-free LED 3D display systems based on holographic lens technology. Theoretical calculation and experiments show that the display requirements can be met when the lateral relative position error between the holographic lens plate and the LED display module is less than 1.332 mm. Based on the fringes projected by the glasses-free 3D display system, a real-time adjustment method for the position of the holographic lens plate is proposed. Following this method, an image-processing algorithm that measures the centre-to-centre fringe spacing from intensity maxima is developed, and an image-processing program is implemented with LabView. Experimental results show that the bright/dark fringe spacing can be measured with a precision of 0.1 mm, from which the position error between the holographic lens plate and the LED screen is calculated to be less than 0.03 mm. This satisfies the requirements for real-time adjustment of the holographic lens plate position and can serve as an inspection method for online splicing of holographic lens plates.
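A minimal sketch of the maxima-based fringe-spacing measurement described above: smooth a 1-D intensity profile taken across the projected fringes, locate local maxima, and convert the mean peak spacing to millimetres with a calibrated scale. The smoothing window, separation threshold, and scale value are illustrative, and the original implementation is in LabView rather than Python.

```python
import numpy as np

def fringe_spacing(profile, mm_per_px, smooth=5, min_separation=10):
    """Estimate the mean centre-to-centre fringe spacing (mm) from a 1-D intensity profile.

    profile: intensity samples along a line crossing the projected fringes.
    mm_per_px: calibrated scale of the camera image (assumed known).
    """
    p = np.convolve(np.asarray(profile, float), np.ones(smooth) / smooth, mode="same")
    # Local maxima: strictly greater than both neighbours.
    idx = np.where((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))[0] + 1
    # Enforce a minimum separation to reject noise-induced double peaks.
    peaks = []
    for i in idx:
        if not peaks or i - peaks[-1] >= min_separation:
            peaks.append(i)
    if len(peaks) < 2:
        raise ValueError("need at least two fringe maxima")
    return float(np.mean(np.diff(peaks))) * mm_per_px

if __name__ == "__main__":
    x = np.arange(1000)
    profile = 0.5 + 0.5 * np.cos(2 * np.pi * x / 50.0)  # synthetic fringes, 50 px period
    print(fringe_spacing(profile, mm_per_px=0.02))       # approx. 1.0 mm
```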

8.
We have constructed a dialog environment between a human and a virtual agent. With commercial off-the-shelf VR technologies, special devices such as a data glove have to be used for interaction, and it is difficult for users to manipulate objects on their own. If there is a helper who has direct access to objects in the virtual space, we may ask them; the question, however, is how to communicate with the helper. The basic idea is to utilize speech and gesture recognition systems. We have already reported this result, although in the current system only the avatar can move a virtual object, and the user cannot freely manipulate virtual objects. Therefore, in a new attempt, we constructed a communication channel between virtual space and the real world so that the virtual object could be manipulated. To develop the new system, we extended the existing system into an internet meeting system that allows users in different places to interact with each other by voice and by pointing with a finger. This work was presented in part and received the Young Author Award at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.

9.
Abstract— This study proposes an interactive system for displays whose technology consists of three main parts: hand-gesture tracking, recognition, and depth measurement. The proposed interactive system can be applied to a general 3-D display. For hand-gesture tracking, Haar-like features are employed to detect a specific hand gesture that starts the tracking, while the mean-shift algorithm and a Kalman filter are adopted for fast tracking. First, for recognizing hand gestures, a principal component analysis (PCA) algorithm is used to localize skin-coloured areas, and hand gestures are then identified by comparison with a prepared database. Second, a simple optical system with an infrared laser source and a grid mask is set up to project the proposed horizontal stripe pattern. Third, the projected patterns are deciphered to extract depth information using the Hough-transform algorithm. The system, containing hand-gesture localization, recognition, and associated depth detection (the distance between the display and the hand), was integrated into a prototype of an interactive display. Rotation recognition of a finger-pointing hand gesture was successfully demonstrated using a radar-like scanning algorithm.
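The mean-shift plus Kalman-filter tracking stage can be sketched with OpenCV as below: a hue histogram of the detected hand window drives back-projection and mean shift, while a constant-velocity Kalman filter smooths the window centre. The Haar-like detection and PCA-based recognition stages are omitted, and all parameters are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def make_kalman():
    """Constant-velocity Kalman filter on the (x, y) window centre."""
    kf = cv2.KalmanFilter(4, 2)                    # state: x, y, vx, vy
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

def track(cap, init_window):
    """Yield smoothed hand centres by hue back-projection + mean shift + Kalman filtering.

    cap: an opened cv2.VideoCapture; init_window: (x, y, w, h) of the detected hand.
    """
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = init_window
    roi_hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    kf, window = make_kalman(), init_window
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        kf.predict()                               # prediction could also re-seed a lost window
        _, window = cv2.meanShift(prob, window, term)
        cx, cy = window[0] + window[2] / 2, window[1] + window[3] / 2
        est = kf.correct(np.array([[cx], [cy]], np.float32))
        yield (float(est[0, 0]), float(est[1, 0])), window
```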

10.
Abstract— Stereoscopic and autostereoscopic projection-display systems use projector arrays to present stereoscopic images, and each projector casts one parallax image of a stereoscopic scene. Because of the position shifts of the projectors, the parallax images exhibit geometric deformation, which degrades the quality of the displayed stereoscopic images. To solve this problem, a method based on homography is proposed. The parallax images are pre-transformed before they are projected, so that stereoscopic images without geometric distortion can be obtained. An autostereoscopic projection-display system was developed to present the images with and without calibration. Experimental results show that this method works effectively.
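A minimal sketch of homography-based pre-correction with OpenCV: estimate H from point correspondences between the ideal screen coordinates and the observed, deformed projection, then pre-warp each parallax image with the inverse mapping so the projected result lands undistorted. The correspondences below are synthetic placeholders, not measured data.

```python
import cv2
import numpy as np

def estimate_homography(ideal_pts, observed_pts):
    """Homography mapping ideal screen coordinates to the deformed, observed ones."""
    H, _ = cv2.findHomography(np.float32(ideal_pts), np.float32(observed_pts), cv2.RANSAC)
    return H

def prewarp(parallax_img, H, out_size):
    """Pre-transform a parallax image with H^-1 so that, after the projector's
    geometric deformation, it appears undistorted on the screen."""
    return cv2.warpPerspective(parallax_img, np.linalg.inv(H), out_size)

if __name__ == "__main__":
    # Synthetic example: four marker correspondences (ideal vs. what the camera saw).
    ideal = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
    observed = [(12, 8), (1905, 25), (1930, 1070), (-5, 1088)]
    H = estimate_homography(ideal, observed)
    img = np.zeros((1080, 1920, 3), np.uint8)
    corrected = prewarp(img, H, (1920, 1080))
    print(H.round(3))
```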

11.
Automated virtual camera control has been widely used in animation and interactive virtual environments. We have developed a multiple sparse camera based free view video system prototype that allows users to control the position and orientation of a virtual camera, enabling the observation of a real scene in three dimensions (3D) from any desired viewpoint. Automatic camera control can be activated to follow selected objects by the user. Our method combines a simple geometric model of the scene composed of planes (virtual environment), augmented with visual information from the cameras and pre-computed tracking information of moving targets to generate novel perspective corrected 3D views of the virtual camera and moving objects. To achieve real-time rendering performance, view-dependent textured mapped billboards are used to render the moving objects at their correct locations and foreground masks are used to remove the moving objects from the projected video streams. The current prototype runs on a PC with a common graphics card and can generate virtual 2D views from three cameras of resolution 768×576 with several moving objects at about 11 fps.
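The foreground masks used above to remove moving objects from the projected video streams could, for example, be obtained with a background subtractor; the sketch below uses OpenCV's MOG2 model plus simple morphological clean-up. This is one common way to produce such masks, not necessarily the prototype's actual method.

```python
import cv2
import numpy as np

def foreground_masks(frames, history=200, var_threshold=25):
    """Yield a cleaned foreground mask for each frame of one camera stream."""
    subtractor = cv2.createBackgroundSubtractorMOG2(history=history,
                                                    varThreshold=var_threshold,
                                                    detectShadows=True)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    for frame in frames:
        mask = subtractor.apply(frame)
        mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop the shadow label (127)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        yield mask

def remove_movers(frame, mask, background):
    """Replace masked (moving) pixels with a static background estimate,
    so only the planar scene geometry remains in the projected texture."""
    return np.where(mask[..., None] > 0, background, frame)

if __name__ == "__main__":
    frames = [np.full((120, 160, 3), 80, np.uint8) for _ in range(30)]
    frames[-1][40:80, 60:100] = 255                  # a "moving object" in the last frame
    mask = list(foreground_masks(frames))[-1]
    print("foreground pixels:", int((mask > 0).sum()))
```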

12.
Light field display (LFD) is considered a promising technology for reconstructing the light-ray distribution of a real 3D scene; it approximates the original light field of the displayed objects with all the depth cues of human vision, including binocular disparity, motion parallax, colour cues, and correct occlusion relationships. Currently, computer-generated content is widely used for LFD systems, so rich 3D content can be provided. This paper first introduces applications of light field technologies in display systems. Virtual stereo content rendering techniques and their application scenarios are then thoroughly reviewed, and their pros and cons are pointed out. Moreover, according to the different characteristics of light field systems, the coding and correction algorithms used in virtual stereo content rendering are reviewed. The discussion shows that many problems remain in existing rendering techniques for LFD; new rendering algorithms should be introduced to solve the real-time light-field rendering problem for large-scale virtual scenes.

13.
Abstract— Autostereoscopic 3-D display technologies enable a more immersive media experience by adding real depth to the visual content. However, the methods used to create a sensation of depth, or stereo illusion, involve several display-design and content-related issues that must be carefully considered to maintain sufficient image quality. Conventionally, 3-D image-quality evaluation has been based on subjective testing. Optical measurements, in addition to subjective testing, can be used as an efficient tool for 3-D display characterization. Objective characterization methods for autostereoscopic displays have been developed, investigating how the parameters affecting stereo image quality can be defined and measured, and how their effect on stereo image quality can be evaluated. The developed characterization methods are based on empirically gathered data. In this paper, previously presented methodology for two-view displays is extended to cover autostereoscopic multiview displays. A distinction is made between displays where the content changes in clear steps as the user moves in front of the display and displays where the apparent movement of objects is more continuous as a function of head movement. The paper focuses on definitions for 3-D luminance and luminance uniformity, which are equally important, as well as 3-D crosstalk, which is the dominant factor in evaluations of perceived 3-D image quality.
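3-D crosstalk is commonly defined as the luminance leaking from the unintended view relative to the intended signal, both referenced to black; the exact formulation used in the paper may differ. A small worked example, with hypothetical luminance readings:

```python
def crosstalk_3d(l_leak, l_signal, l_black):
    """3-D crosstalk at one viewing position (a common definition, assumed here):
        crosstalk = (L_leak - L_black) / (L_signal - L_black)
    L_leak:   luminance measured when only the *other* view shows white.
    L_signal: luminance measured when the intended view shows white.
    L_black:  luminance measured when both views show black.
    """
    return (l_leak - l_black) / (l_signal - l_black)

# Example measurement at one viewing position (cd/m^2, hypothetical values).
print(f"{crosstalk_3d(l_leak=6.5, l_signal=180.0, l_black=0.4) * 100:.1f} % crosstalk")
```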

14.
This research compares how the image of a product included in a rendered scene is rated when shown on an autostereoscopic 3D display versus the same image shown on a 2D display. The purpose is to understand observers' preferences and to determine the features a composition should have to highlight the product and make its presentation more attractive to observers, thereby helping designers and advertisers who use both displays to prepare images that present a product more effectively.

The results show that observers like the images on autostereoscopic 3D displays slightly more than those presented on 2D displays. On both displays the product is perceived more quickly when it is larger than the other elements and is shown with greater chromatic contrast, but a composition is seen as more attractive when the chromatic relationship between all the elements is more harmonious.

15.
A hand posture recognition system using 3D data is described. The system relies on a novel 3D sensor that generates a dense range image of the scene. The main advantage of the proposed system, compared with other gesture recognition techniques, is its capability for robust, unconstrained recognition of complex hand postures such as those encountered in sign language alphabets. This is achieved by explicitly utilizing 3D hand geometry. Moreover, the proposed approach does not rely on color information and guarantees robust segmentation of the hand under varying illumination conditions and scene content. Several novel 3D image analysis algorithms are presented, covering the complete processing chain: 3D image acquisition, arm segmentation, hand-forearm segmentation, hand pose estimation, 3D feature extraction, and gesture classification. The proposed system is extensively evaluated.
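As a rough illustration of depth-only hand segmentation from a range image, the sketch below assumes the hand is the closest connected object to the sensor: threshold a depth band behind the nearest valid pixel and keep the largest connected component. The band width and validity threshold are illustrative, and the paper's full chain (arm and hand-forearm segmentation, pose estimation) is considerably more elaborate.

```python
import cv2
import numpy as np

def segment_hand_depth(depth_mm, band_mm=120, min_valid=150):
    """Segment the hand from a range image, assuming it is the closest object.

    depth_mm: (H, W) array of depth values in millimetres, 0 = invalid.
    band_mm:  depth band kept behind the closest valid pixel.
    """
    valid = depth_mm > min_valid                     # drop invalid / too-close readings
    if not np.any(valid):
        return np.zeros(depth_mm.shape, dtype=np.uint8)
    near = depth_mm[valid].min()
    mask = (valid & (depth_mm <= near + band_mm)).astype(np.uint8)
    # Keep only the largest connected component as the hand candidate.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:
        return mask * 255
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    # Synthetic range image: a "hand" blob at ~600 mm in front of a wall at 1500 mm.
    depth = np.full((240, 320), 1500, np.float32)
    depth[80:160, 120:200] = 600
    mask = segment_hand_depth(depth)
    print("hand pixels:", int((mask > 0).sum()))
```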

16.
A virtual performance system for bowed string instruments, taking the erhu as an example, is designed and implemented with a Kinect camera combined with augmented reality and gesture recognition techniques. The real scene captured by the Kinect and the virtual instrument are fused and rendered as an augmented reality scene. The user's left hand is segmented using the depth data obtained from the Kinect together with a Bayesian skin-colour model and is re-rendered onto the augmented image to form a new image, thereby solving the occlusion problem between real and virtual objects in the virtual performance scene. A 3D virtual hand-gesture fitting method based on inverse kinematics and a Markov model is used to recognize the left-hand gestures during performance, and combined with the motion state of the right hand, the virtual performance of the instrument is completed.
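The Bayesian skin-colour model mentioned above can be illustrated with histogram-based class-conditional densities and Bayes' rule. The bin count, prior, and training data below are assumptions, not the paper's actual model, and the depth cue is omitted.

```python
import numpy as np

class BayesianSkinModel:
    """Histogram-based Bayesian skin classifier: p(skin | colour) via Bayes' rule.

    Colours are quantised into bins^3 RGB cells; train() accumulates skin and
    non-skin histograms from labelled pixels, posterior() applies Bayes' rule.
    """

    def __init__(self, bins=32, prior_skin=0.3):
        self.bins = bins
        self.prior = prior_skin
        self.h_skin = np.ones((bins,) * 3)       # Laplace smoothing
        self.h_bg = np.ones((bins,) * 3)

    def _index(self, rgb):
        q = (np.asarray(rgb, dtype=np.int64) * self.bins) // 256
        return tuple(q.reshape(-1, 3).T)

    def train(self, skin_pixels, bg_pixels):
        np.add.at(self.h_skin, self._index(skin_pixels), 1)
        np.add.at(self.h_bg, self._index(bg_pixels), 1)

    def posterior(self, pixels):
        idx = self._index(pixels)
        p_c_skin = self.h_skin[idx] / self.h_skin.sum()
        p_c_bg = self.h_bg[idx] / self.h_bg.sum()
        num = p_c_skin * self.prior
        return num / (num + p_c_bg * (1.0 - self.prior))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    skin = rng.normal([200, 150, 120], 12, (5000, 3)).clip(0, 255)   # synthetic skin samples
    bg = rng.uniform(0, 255, (5000, 3))                              # synthetic background samples
    model = BayesianSkinModel()
    model.train(skin, bg)
    print(model.posterior([[205, 152, 118], [30, 200, 40]]).round(3))
```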

17.
Autostereoscopic display technology and its development
Autostereoscopic display technology refers to 3D display technology that requires no accessories such as stereo glasses. It can be divided into holographic display, 3D integral imaging, volumetric 3D display, and parallax-based autostereoscopic display, among which volumetric 3D display and horizontal-parallax-based autostereoscopic display are developing rapidly. This paper first reviews recent research progress in two different classes of volumetric 3D display, static-volume and swept-volume displays. It then surveys the status and characteristics of parallax-based autostereoscopic display in terms of the types of optical gratings used and the multiplexing methods. Finally, it compares the advantages and disadvantages of volumetric 3D display and horizontal-parallax-based autostereoscopic display.

18.
Abstract— Multi-view spatial-multiplexed autostereoscopic 3-D displays normally use a 2-D image source and divide its pixels to generate perspective images. Because the resolution of each perspective image drops as the number of views grows, a super-high-resolution 2-D image source is required to achieve 3-D image quality close to the standard of natural vision. This paper proposes an approach that tiles multiple projection images with a low magnification ratio from a microdisplay to resolve the resolution issue. Placing a lenticular array in front of the tiled projection image yields an autostereoscopic display. Image distortion and crosstalk issues resulting from the projection lens and the pixel structure of the microdisplay are addressed with proper selection of the active pixels and adequate pixel grouping and masking. Optical simulation shows that a 37-in. 12-view autostereoscopic display with full-HD (1920 × 1080) resolution can be achieved with the proposed 3-D architecture.
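The sub-pixel-to-view assignment behind such a lenticular design can be illustrated with the classic slanted-lenticular mapping, in which a sub-pixel's view index follows its horizontal phase under the lenticule, offset row by row according to the slant. The parameters below (12 views, sub-pixels per lens, slant) are illustrative and do not reproduce the paper's pixel grouping and masking scheme.

```python
import numpy as np

def subpixel_view_map(rows, cols, n_views=12, x_sub_per_lens=9.0, slant_tan=1 / 6, k_off=0.0):
    """Fractional view index for every RGB sub-pixel of a slanted-lenticular panel.

    A common slanted-lenticular mapping (assumed, not the paper's design):
        N(k, l) = n_views * ((k + k_off - 3 * l * tan(alpha)) mod X) / X
    where k is the sub-pixel column, l the pixel row, and X the number of
    sub-pixels covered by one lenticule horizontally.
    """
    l = np.arange(rows)[:, None]                 # pixel rows
    k = np.arange(cols * 3)[None, :]             # sub-pixel columns (R, G, B interleaved)
    phase = (k + k_off - 3.0 * l * slant_tan) % x_sub_per_lens
    return n_views * phase / x_sub_per_lens      # fractional view index in [0, n_views)

def interleave(views, view_map_frac):
    """Pick, for each sub-pixel, the parallax image whose view index is nearest."""
    idx = np.floor(view_map_frac).astype(int) % len(views)
    rows, subcols = view_map_frac.shape
    out = np.empty((rows, subcols), dtype=views[0].dtype)
    for v in range(len(views)):
        sel = idx == v
        out[sel] = views[v].reshape(rows, subcols)[sel]
    return out.reshape(rows, subcols // 3, 3)

if __name__ == "__main__":
    vm = subpixel_view_map(rows=1080, cols=1920)
    views = [np.full((1080, 1920, 3), v * 20, np.uint8) for v in range(12)]
    panel = interleave(views, vm)
    print(panel.shape, panel.dtype)
```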

19.
Virtual holography is a disruptive technology that can inspire innovations in a variety of fields by blending the physical world with a sensory-rich virtual world. The technology enables users to naturally and intuitively manipulate objects and navigate in 3D space. The zSpace virtual holography platform is described. The platform provides a 3D display and head-tracking technology that transforms PCs into a virtual-holographic computing facility using a stereoscopic user interface with an interactive stylus. The major components of the platform and its potential benefits, along with some current applications, are briefly described.

The use of the zSpace platform to explore and manipulate a number of space-system models is outlined. The models considered include the Titan Saturn system, a joint NASA/ESA mission; a Mars Science Lab concept; the James Webb Telescope; and a Lunar Lander concept. In each of the applications considered, the platform is enhanced with multimodal interactions and support for multiuser collaboration. The multimodal interactions, which enable more engaging use and enhanced accessibility, are achieved by fusing information from stylus input, 3D gestures, and neural input. Simultaneous and collaborative multiuser interactions are described, supporting both local and distributed teams using a variety of displays.

Emerging and future enhancements of the zSpace platform are outlined, along with novel future applications.

20.
Traditional display systems usually display 3D objects on static screens (monitor, wall, etc.), and the manipulation of virtual objects by the viewer is usually achieved via indirect tools such as a keyboard or mouse. It would be more natural and direct if we displayed the object on a handheld surface and manipulated it with our hands as if we were holding the real 3D object. In this paper, we propose a prototype system that projects the object onto a handheld foam sphere. The aim is to develop an interactive 3D object manipulation and exhibition tool that does not require the viewer to wear spectacles. In our system, the viewer holds the sphere with his hands and moves it freely; meanwhile, we project well-tailored images onto the sphere to follow its motion, giving the viewer the perception that the object is sitting inside the sphere and being moved by the viewer. The design goal is a low-cost, real-time, interactive 3D display tool. An off-the-shelf projector-camera pair is first calibrated via a simple but efficient algorithm. Vision-based methods are proposed to detect the sphere and track its subsequent motion. The projection image is generated from the projective geometry among the projector, sphere, camera, and viewer. We describe how to allocate the view spot and warp the projection image, and we present results and a performance evaluation of the system.
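The vision-based sphere detection step could, for instance, rely on a Hough circle transform over the camera image; the sketch below does exactly that with OpenCV. All thresholds and radius limits are illustrative assumptions rather than the authors' actual detector.

```python
import cv2
import numpy as np

def detect_sphere(frame_bgr, min_r=40, max_r=240):
    """Find the handheld sphere as the strongest circle in the camera image.

    Returns (cx, cy, r) in pixels, or None if no circle is found.
    Blur size, Hough thresholds, and radius range are illustrative.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)                      # suppress texture and noise
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=120, param2=50, minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None
    cx, cy, r = circles[0, 0]                           # strongest candidate
    return float(cx), float(cy), float(r)

if __name__ == "__main__":
    # Synthetic test: a bright disc on a dark background.
    img = np.zeros((480, 640, 3), np.uint8)
    cv2.circle(img, (320, 240), 100, (200, 200, 200), -1)
    print(detect_sphere(img))
```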
