Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Augmented reality (AR) display technology greatly enhances users' perception of and interaction with the real world by superimposing a computer-generated virtual scene on the real physical world. The main problem of state-of-the-art 3D AR head-mounted displays (HMDs) is the accommodation-vergence conflict, because the 2D images displayed by flat-panel devices are at a fixed distance from the eyes. In this paper, we present a design for an optical see-through HMD utilizing multi-plane display technology for AR applications. This approach provides correct depth information and solves the accommodation-vergence conflict. In our system, a projector projects slices of a 3D scene onto a stack of polymer-stabilized liquid crystal scattering shutters in time sequence to reconstruct the 3D scene. The polymer-stabilized liquid crystal shutters have sub-millisecond switching times, which enables a sufficient number of shutters for high depth resolution. A proof-of-concept two-plane optical see-through HMD prototype is demonstrated. Our design can be made lightweight and compact, with high resolution and a depth range extending from near the eye to infinity, and thus holds great potential for fatigue-free AR HMDs.
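The time-sequential budget behind the "sufficient number of shutters" claim can be illustrated with a rough calculation. The sketch below is not from the paper; the volume refresh rate and switching times are assumed values chosen only to show the arithmetic.

```python
def max_planes(volume_rate_hz: float, shutter_switch_ms: float) -> int:
    """Crude upper bound on the number of depth planes that fit in one
    volume frame when each plane costs one shutter switching interval."""
    frame_ms = 1000.0 / volume_rate_hz          # time budget per 3D volume
    return int(frame_ms // shutter_switch_ms)   # planes that fit in budget

# With a flicker-free 60 Hz volume rate, 0.5 ms shutters leave room for
# ~33 planes, while 5 ms shutters would allow only 3.
print(max_planes(60, 0.5), max_planes(60, 5.0))  # 33 3
```

This crude model ignores projector illumination time, but it shows why sub-millisecond switching is the enabler for high depth resolution.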

2.
Virtual reality (VR) and augmented reality (AR) applications are widely used in a variety of fields; a key requirement in a VR or AR system is to understand how users perceive depth in the virtual or augmented environment. Three different graphical depth cues were designed in a shuffleboard task to explore which kinds of graphical depth cues benefit depth perception. We also conducted a depth-matching experiment to compare performance in VR and AR systems using an optical see-through head-mounted display (HMD). The results show that absolute error increases as distance grows. Analysis of the inverse of distance shows that box depth cues have a significant effect on depth perception, while point and line depth cues do not. The error in diopters in the AR experiment is lower than in the VR experiment. Participants in the AR experiment under the medium-illuminance condition made smaller errors than those under the low- and high-illuminance conditions. Men made smaller errors than women under certain display conditions, but the advantage disappears when a strong depth cue is present. Finally, completion time has no significant effect on depth perception.

3.

The present paper introduces a near-future perception system called Previewed Reality. In an environment where a human and a robot co-exist, unexpected collisions between them must be avoided to the extent possible. In many cases, the robot is controlled carefully so as not to collide with a human. However, it is almost impossible to predict human behavior perfectly in advance. On the other hand, if a user can know the motion of a robot in advance, he or she can avoid a hazardous situation and co-exist safely with the robot. To ensure that a user perceives future events naturally, we developed Previewed Reality, which consists of an informationally structured environment, a VR or AR display, and a dynamics simulator. A number of sensors are embedded in the informationally structured environment, and information such as the positions of furniture, objects, humans, and robots is sensed and stored structurally in a database. We can therefore forecast possible subsequent events using a robot motion planner and a dynamics simulator, and synthesize virtual images, from the viewpoint of the user, of what will actually occur in the near future. The viewpoint of the user, that is, the position and orientation of the VR or AR display, is tracked by an optical tracking system in the informationally structured environment or by SLAM on an AR display. The synthesized images are presented to the user by overlaying them on the real scene using the VR or AR display. This system provides human-friendly communication between a human and a robotic system: by intuitively showing the human possible hazardous situations in advance, human and robot can co-exist safely.

4.
A Survey of Augmented Reality (cited 44 times)
Augmented reality (AR) technology merges virtual objects into real scenes and supports user interaction with them. It has become an important area of virtual reality research and an important direction in the development of human-computer interface technology. To give readers an overview of the field, this paper first outlines its main research topics and progress, and introduces in detail the supporting technologies, development tools, and related theory of augmented reality. It then analyzes the difficult problems of implementation in light of the current state of AR applications, describes several system frameworks and development platforms closely tied to the widespread adoption of AR, and finally presents several typical AR application examples.

5.
In order to create a satisfying experience with near-eye displays, the content must be adapted to be legible on the display used. New displays use subpixel arrangements that can limit the minimum resolvable feature size to something larger than with the conventional RGB stripe arrangement. We conducted an experiment to measure the minimum and preferred text sizes on two virtual reality (VR) display systems and find that text size is display limited. We then measured several displays with different pixel arrangements to determine whether the subpixel arrangement could impact legibility. We propose several Fourier metrics that can be computed from the measured data to categorize the capability of a display, and describe a framework for selecting the appropriate content from a set of discrete tiers.

6.
Hand-held devices are becoming computationally more powerful and are being equipped with special sensors and non-traditional displays for diverse applications beyond making phone calls. This raises the question of whether virtual reality providing a minimum level of immersion and presence might be realized on a hand-held device with only a relatively small display. In this paper, we propose that motion-based interaction can widen the perceived field of view (FOV) beyond the actual physical FOV and, in turn, increase the sense of presence and immersion to a level comparable to that of desktop or projection-display-based VR systems. We implemented a prototype hand-held VR platform and conducted two experiments to verify our hypothesis. Our experimental study revealed that when motion-based interaction was used, the FOV perceived by the user on the small hand-held device was significantly greater (by around 50%) than the actual FOV. Larger display platforms using a conventional button or mouse/keyboard interface did not exhibit this phenomenon. In addition, the level of presence felt by users on the hand-held platform was higher than or comparable to that on VR platforms with larger displays. We hypothesize that this phenomenon is analogous to the way the human visual system compensates for differences in acuity across the retina through saccadic activity. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even on devices with a small visual FOV and limited processing power. Copyright © 2010 John Wiley & Sons, Ltd.

7.
Augmented reality has been a research hotspot in recent years. It is a technology that merges computer-generated virtual objects or other information into the real world as perceived by the user; it supplements the real world rather than replacing it entirely. Display technology and tracking/registration technology are the key technologies of an augmented reality system and the focus of research. This paper briefly introduces display technology and tracking/registration technology.

8.
侯守明, 贾超兰, 张明敏. 《计算机应用》 (Journal of Computer Applications), 2022, 42(11): 3534-3543
Eye-movement-based human-computer interaction exploits the characteristics of eye movements to enhance user immersion and improve comfort. Integrating eye-tracking interaction into virtual reality (VR) systems plays a vital role in their popularization and has become a research hotspot in recent years. This paper explains the principles and categories of VR eye-tracking interaction, analyzes the advantages of combining VR systems with eye-tracking interaction, and summarizes the mainstream VR head-mounted displays on the market and their typical application scenarios. Based on an analysis of relevant VR eye-tracking experiments, it summarizes the hot research issues in VR eye tracking, including device miniaturization, diopter correction, the scarcity of high-quality content, motion sickness and eye-image distortion, positioning accuracy, and near-eye display systems, and it outlines prospective solutions to these issues.

9.
A Survey of Augmented Reality Technology (cited 2 times)
Augmented reality is a technology that seamlessly merges computer-rendered virtual scenes with scenes in the real world and presents the blended result to the user through a video display device, making human-computer interaction more natural; with its broad application prospects, it has become a research hotspot in recent years. With advances in tracking and registration technology, the rapid growth of computing performance, the spread of depth cameras, and the application of light field projection technology to augmented reality, AR is gradually becoming the direction of development for next-generation human-computer interaction. This article first outlines the main research topics and development of augmented reality, introduces its key technologies and development tools in detail, and then surveys AR application cases by category.

10.
This survey provides an introduction into eye tracking visualization with an overview of existing techniques. Eye tracking is important for evaluating user behaviour. Analysing eye tracking data is typically done quantitatively, applying statistical methods. However, in recent years, researchers have been increasingly using qualitative and exploratory analysis methods based on visualization techniques. For this state‐of‐the‐art report, we investigated about 110 research papers presenting visualization techniques for eye tracking data. We classified these visualization techniques and identified two main categories: point‐based methods and methods based on areas of interest. Additionally, we conducted an expert review asking leading eye tracking experts how they apply visualization techniques in their analysis of eye tracking data. Based on the experts' feedback, we identified challenges that have to be tackled in the future so that visualizations will become even more widely applied in eye tracking research.
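As a concrete illustration of the areas-of-interest category the survey identifies, the toy sketch below bins fixation points into named rectangular AOIs and counts hits per AOI. The AOI names, coordinates, and fixation data are invented for the example and do not come from the survey.

```python
from typing import Dict, List, Tuple

Rect = Tuple[float, float, float, float]  # x, y, width, height

def fixations_per_aoi(fixations: List[Tuple[float, float]],
                      aois: Dict[str, Rect]) -> Dict[str, int]:
    """Count how many fixation points fall inside each area of interest."""
    counts = {name: 0 for name in aois}
    for fx, fy in fixations:
        for name, (x, y, w, h) in aois.items():
            if x <= fx <= x + w and y <= fy <= y + h:
                counts[name] += 1
    return counts

aois = {"headline": (0, 0, 800, 100), "figure": (100, 200, 300, 300)}
fixations = [(50, 40), (300, 80), (120, 250), (700, 500)]
print(fixations_per_aoi(fixations, aois))  # {'headline': 2, 'figure': 1}
```

Point-based methods, by contrast, work directly on the raw fixation coordinates (e.g., heat maps or scan paths) without any such spatial aggregation.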

11.
Visual information can in principle be dynamically optimised by monitoring the user’s state of attention, e.g. by tracking eye movements. Gaze directed displays are therefore an important enabling technology for attention aware systems. We present a state-of-the-art review of both (1) techniques to register the direction of gaze and (2) display techniques that can be used to optimally adjust visual information presentation to the capabilities of the human visual system and the momentary direction of viewing. We focus particularly on evaluation studies that were performed to assess the added value of these displays. We identify promising application areas and directions for further research.

12.
Multiplane displays are capable of displaying 3D scenes with correct focus cues by creating multilayer 2D images in the display volume. Hence, such a 3D display technique can effectively address the accommodation-vergence conflict (AVC) problem, which is prevalent in augmented reality (AR) displays. In this paper, we review recent progress on multiplane AR displays based on liquid crystals (LCs). The working principle of multiplane AR displays is illustrated, the electro-optical properties of the tunable LC devices are investigated, and display prototypes are demonstrated. Finally, we discuss the prospects and challenges of LC-based multiplane AR displays.

13.
The effectiveness of three-dimensional sound in virtual reality (VR) environments has been widely studied. However, because of the significant differences between VR and augmented reality (AR) systems in registration, calibration, perceived immersiveness, navigation, and localization, new approaches are needed to seamlessly register virtual 3-D sound in AR environments, along with studies of 3-D sound's effectiveness in the AR context. In this paper, we design two experimental AR environments to study the effectiveness of 3-D sound both quantitatively and qualitatively. Two different tracking methods are applied to retrieve the 3-D position of virtual sound sources in each experiment. We examine the impact of 3-D sound on improving depth perception and shortening task completion time. We also investigate its impact on immersive and realistic perception, identification of different spatial objects, and the subjective feeling of "human presence and collaboration". Our studies show that applying 3-D sound is an effective way to complement visual AR environments: it helps depth perception and task performance, and facilitates collaboration between users. Moreover, it enables a more realistic environment and a more immersive feeling of being inside the AR environment by both visual and auditory means. To make full use of the intensity cues provided by 3-D sound, a process that scales the intensity difference of 3-D sound at different depths is designed to suit small AR environments. The user study results show that the scaled 3-D sound significantly increases the accuracy of depth judgments and shortens search-task completion time. This method provides a necessary foundation for implementing 3-D sound in small AR environments. Our user study results also show that this process does not degrade the intuitiveness and realism of an augmented audio reality environment.
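The intensity-scaling idea can be sketched as a distance-attenuation curve whose slope is exaggerated for small scenes. This is a hedged reconstruction, not the paper's published formula: with exponent k = 1 the function reproduces the physical ~6 dB-per-doubling free-field law, and a larger k stretches the loudness difference between nearby depths so it remains perceptible.

```python
import math

def gain_db(distance_m: float, ref_m: float = 0.2, k: float = 2.0) -> float:
    """Attenuation in dB relative to ref_m. k=1 is the physical free-field
    law (~6 dB per doubling of distance); k>1 exaggerates depth differences
    so they stay audible over the shallow depth range of a small AR scene."""
    return -k * 20.0 * math.log10(distance_m / ref_m)

# Doubling the distance from 0.2 m to 0.4 m: ~-6 dB physically (k=1),
# ~-12 dB with the exaggerated k=2 scaling.
print(round(gain_db(0.4, k=1.0), 1), round(gain_db(0.4, k=2.0), 1))  # -6.0 -12.0
```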

14.
Abstract— A method for evaluation of the contrast of moving step‐grating patterns under smooth‐pursuit eye‐tracking conditions without imaging data acquisition and image analysis is introduced. Periodic optical responses of the display to a set of simple driving signals have been recorded at a fixed location, and the luminance vs. time data has been evaluated to obtain two types of contrast for characterization of the dynamical performance of the display under test: the frame‐convoluted contrast and the frame‐integrated contrast. The relation of this characterization with respect to modulation transfer functions from impulse responses and to the dynamic modulation transfer function from sine‐gratings is explained and discussed. The approach described here provides a detailed and comprehensive characterization of the dynamical properties of electronic displays including both extreme cases of step‐response and impulse‐response with quantities that are related to visual perception. With this type of evaluation, the visual resolution of displays can be described by the same characteristics in the static and the dynamic case. The method is attractive due to limited instrumental efforts and the transparent method of evaluation.

15.
Flow visualization is recognized as an essential tool in many scientific research fields, and various visualization approaches have been proposed. Several studies have also been conducted to evaluate their effectiveness, but these studies rarely examine performance from the perspective of visual perception. In this paper, we explore how users' visual perception is influenced by different 2D flow visualization methods. An eye tracker is used to analyze users' visual behavior as they perform free viewing, advection prediction, flow feature detection, and flow feature identification tasks on flow field images generated by different visualization methods. We evaluate the illustrative capability of five representative visualization algorithms. Our results show that eye-tracking-based evaluation provides more insight for quantitatively analyzing the effectiveness of these visualization methods.

16.
The arrival of near‐eye displays has challenged the traditional methods that have been used to measure the optical properties of displays. Near‐eye displays typically create virtual images and are designed for the relatively small entrance pupil of the human eye. These two attributes result in optical measurement requirements that are substantially different from traditional flat panel displays. This paper discusses the optical system requirements needed to make absolute radiometric and photometric measurements of near‐eye displays. These guidelines are contrasted with the performance of current optical measurement instruments. An initial study was conducted using traditional and modified instruments and exhibited a significant variance in the results with different near‐eye display designs. The study demonstrated that some traditional optical instruments can yield erroneous results when used to measure near‐eye displays. Generic optical system design concepts were used to interpret the experimental results and helped to identify how current commercial designs could be modified to properly measure near‐eye displays.

17.
Despite active research and significant progress in the last 30 years, eye detection and tracking remain challenging due to the individuality of eyes, occlusion, and variability in scale, location, and lighting conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and the state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical development, and is consequently of interest to many other problem domains in computer vision and beyond.

18.
The development of virtual reality (VR) art installations is faced with considerable difficulties, especially when one wishes to explore complex notions related to user interaction. We describe the development of a VR platform, which supports the development of such installations, from an art+science perspective. The system is based on a CAVE™-like immersive display using a game engine to support visualisation and interaction, which has been adapted for stereoscopic visualisation and real-time tracking. In addition, some architectural elements of game engines, such as their reliance on event-based systems have been used to support the principled definition of alternative laws of Physics. We illustrate this research through the development of a fully implemented artistic brief that explores the notion of causality in a virtual environment. After describing the hardware architecture supporting immersive visualisation we show how causality can be redefined using artificial intelligence technologies inspired from action representation in planning and how this symbolic definition of behaviour can support new forms of user experience in VR.

19.
In this paper, the authors systematically selected and reviewed articles on stereoscopic displays and their advances, with a special focus on perception, interaction, and the corresponding challenges. The aim was to understand interaction-related problems, provide possible explanations, and identify factors that limit their application. Despite promising advances, there are still issues that researchers in the field cannot explain precisely. The two major problems in stereoscopic viewing are that, compared with the real world, objects are perceived to be smaller than they actually are, and that viewers experience discomfort and visual syndromes. Furthermore, there is general agreement that humans underestimate egocentric distance in a virtual environment (VE). Our analysis revealed that distance estimation is about 94% accurate in the real world but only about 80% accurate in a VE. This problem can reduce the efficacy of sensorimotor applications in which interaction is important. Experts from human factors, computing, psychology, and other fields have studied contributing factors such as the perception/response method, graphics quality, the associated stereoscopic conditions, experience with virtual reality (VR), and distance signals. This paper discusses the factors requiring further investigation if VR interaction is to be seamlessly realized, and recommends engineering research directions aimed at improving current interaction performance.

20.
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only solution to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye tracking experiments have been conducted using 3D shapes. Thus, potential factors that may influence the human gaze in the specific setting of 3D rendering, are still to be understood. In this work, we conduct two eye‐tracking experiments involving 3D shapes, with both static and time‐varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes with the aim to produce a benchmark of 3D meshes with fixation density maps, which is publicly available. First, the collected data is used to study the influence of shape, camera position, material and illumination on visual attention. We find that material and lighting have a significant influence on attention, as well as the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state‐of‐the‐art mesh saliency models in predicting ground‐truth fixations using two different metrics. We show that, even combined with a center‐bias model, the performance of 3D saliency algorithms remains poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. We finally provide a comparison of human‐eye fixations and Schelling points and show that their correlation is weak.

