Similar Documents
20 similar documents retrieved.
1.
A human-centered, editable world can be fully realized in a virtual environment, and both mixed reality (MR) and virtual reality (VR) are feasible platforms for supporting such editability. Building on the current state of MR and VR, we present a vision-tangible interactive display method and its implementation in both settings; we address MR and VR together because the proposed method applies to them in the same way. The resulting editable mixed- and virtual-reality system is useful as a platform for further studies. In this paper, we construct a VR environment based on the Oculus Rift and an MR system based on a binocular optical see-through head-mounted display. In the MR system, used for manipulating a Rubik's cube, and in the VR system, used for deforming virtual objects, the proposed vision-tangible interactive display method provides users with a more immersive environment. Experimental results indicate that the method improves the user experience and is a promising way to build better virtual environments.

2.
Abstract— Augmented reality (AR) is a technology in which computer-generated virtual images are dynamically superimposed on a real-world scene to enhance the user's perception of the physical environment. A successful AR system requires that the overlaid digital information be aligned with the user's real-world view, a process known as registration. Accurate registration requires knowledge of both the intrinsic and extrinsic parameters of the viewing device; these parameters form the viewing and projection transformations used to render the virtual images. In our previous work, we presented an easy off-line calibration method in which an image-based automatic matching procedure establishes the world-to-image correspondences, achieving subpixel accuracy. However, this off-line method yields accurate registration only when the user's eye placement relative to the display device coincides with the placement established during the off-line calibration. Any deviation in eye placement, for instance due to helmet slippage or user-dependent factors such as interpupillary distance, leads to misregistration. In this paper, a systematic on-line calibration framework is presented that refines the off-line calibration results and accounts for user-dependent factors. Specifically, based on an equivalent viewing projection model, a six-parameter on-line calibration method refines the user-dependent parameters of the viewing transformations. Calibration procedures and results, as well as evaluation experiments, are described in detail; the evaluation experiments demonstrate improved registration accuracy.
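As a concrete illustration of the registration pipeline this abstract describes, the following is a minimal sketch, not the paper's method, of how intrinsic and extrinsic parameters combine into the viewing and projection transformations of a pinhole camera model; the function name project and all matrix values are illustrative.

    # A minimal sketch (not the paper's calibration method): projecting a
    # world point through extrinsic and intrinsic parameters, as done for
    # AR registration.
    import numpy as np

    def project(K, R, t, X_world):
        """Map a 3D world point to image pixel coordinates (pinhole model)."""
        X_cam = R @ X_world + t          # world -> camera (viewing transform)
        x = K @ X_cam                    # camera -> image (projection transform)
        return x[:2] / x[2]              # perspective divide

    K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx (illustrative values)
                  [0.0, 800.0, 240.0],   # fy, cy
                  [0.0,   0.0,   1.0]])
    R = np.eye(3)                        # identity rotation for the example
    t = np.array([0.0, 0.0, 0.0])
    print(project(K, R, t, np.array([0.1, -0.05, 2.0])))  # pixel coordinates

Perturbing t by an eye-placement error and re-running project shows directly how a deviation in eye position shifts the projected pixels, i.e., produces the misregistration the abstract discusses.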

3.
The article describes the development and ergonomic evaluation of an augmented reality (AR) welding helmet. The system provides an augmented user interface with supporting information relevant to the welding process. The experimental studies focused on the hand-eye coordination of welders and nonwelders using two prototypes of the AR welding helmet: the first prototype operated at 16 frames per second, whereas the second, improved version ran at 20 frames per second. In addition, hand-eye coordination while wearing the welding helmet with its video see-through head-mounted display was compared with performance under natural vision, without any helmet. Experimental results showed a significant influence of helmet and occupation on hand-eye coordination. Subjective assessment revealed a better rating for stereo perception with the higher frame rate, whereas no significant difference in performance was found between the two frame rates. © 2007 Wiley Periodicals, Inc. Hum Factors Man 17: 317-330, 2007.

4.
Augmented reality (AR) display technology greatly enhances users' perception of, and interaction with, the real world by superimposing a computer-generated virtual scene on the physical world. The main problem of state-of-the-art 3D AR head-mounted displays (HMDs) is the accommodation-vergence conflict, because the 2D images produced by flat-panel devices lie at a fixed distance from the eyes. In this paper, we present a design for an optical see-through HMD utilizing multi-plane display technology for AR applications; this approach provides correct depth information and resolves the accommodation-vergence conflict. In our system, a projector projects slices of a 3D scene onto a stack of polymer-stabilized liquid crystal scattering shutters in time sequence to reconstruct the 3D scene. The shutters have sub-millisecond switching times, which enables a sufficient number of shutters for high depth resolution. A proof-of-concept two-plane optical see-through HMD prototype is demonstrated. Our design can be made lightweight and compact, with high resolution and a large depth range from near the eye to infinity, and thus holds great potential for fatigue-free AR HMDs.
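To make the time-sequential multi-plane idea concrete, here is a minimal sketch, assuming a simple depth-map input, of how a 3D scene could be sliced into per-plane images for sequential projection onto the shutter stack; the plane depths and the function slice_by_depth are illustrative, not the authors' implementation.

    # A minimal sketch: assign each pixel of a depth image to the nearest of
    # N display planes, producing the slices a multi-plane display would show
    # in time sequence (one scattering shutter opened per slice).
    import numpy as np

    def slice_by_depth(color, depth, plane_depths):
        """Return one image slice per display plane; unassigned pixels stay black."""
        plane_depths = np.asarray(plane_depths)
        # index of the nearest plane for every pixel
        nearest = np.argmin(np.abs(depth[..., None] - plane_depths), axis=-1)
        slices = []
        for i in range(len(plane_depths)):
            mask = (nearest == i)[..., None]
            slices.append(np.where(mask, color, 0))
        return slices  # displayed sequentially, synchronized with the shutters

    rng = np.random.default_rng(0)
    color = rng.random((4, 4, 3))
    depth = rng.uniform(0.3, 5.0, (4, 4))      # metres, illustrative
    for s in slice_by_depth(color, depth, [0.5, 2.0]):
        print(s.shape)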

5.
Hand-held devices are becoming computationally more powerful and are being equipped with special sensors and non-traditional displays for diverse applications beyond phone calls. This raises the question of whether virtual reality providing a minimum level of immersion and presence might be realized on a hand-held device with a relatively small display. In this paper, we propose that motion-based interaction can widen the perceived field of view (FOV) beyond the actual physical FOV and, in turn, increase the sense of presence and immersion to a level comparable to that of desktop- or projection-based VR systems. We implemented a prototype hand-held VR platform and conducted two experiments to verify this hypothesis. Our experimental study revealed that when motion-based interaction was used, the FOV perceived by the user of the small hand-held device was significantly greater (by around 50%) than the actual FOV; larger display platforms using conventional button or mouse/keyboard interfaces did not exhibit this phenomenon. In addition, the level of presence felt on the hand-held platform was higher than or comparable to that of VR platforms with larger displays. We hypothesize that this phenomenon is analogous to the way the human visual system compensates for differences in retinal acuity through saccadic activity. The paper demonstrates the distinct possibility of realizing reasonable virtual reality even with devices that have a small visual FOV and limited processing power. Copyright © 2010 John Wiley & Sons, Ltd.
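A minimal sketch of the motion-based interaction idea, assuming a device that reports its yaw and pitch: the physical screen shows a movable window into the virtual scene, so device rotation lets the user sweep a virtual FOV much wider than the physical one. The function name and the 20-degree physical FOV are illustrative.

    # A minimal sketch: the screen shows only a window of the virtual scene,
    # and device motion pans that window, so the explorable FOV exceeds the
    # physical FOV of the small display.
    def visible_window(device_yaw_deg, device_pitch_deg, physical_fov_deg=20.0):
        """Return the angular extent of the virtual scene currently on screen."""
        half = physical_fov_deg / 2.0
        return {
            "yaw_range":   (device_yaw_deg - half,   device_yaw_deg + half),
            "pitch_range": (device_pitch_deg - half, device_pitch_deg + half),
        }

    # Rotating the device 30 degrees to the right reveals a different
    # 20-degree window; over time the user integrates a wider perceived FOV.
    print(visible_window(0.0, 0.0))
    print(visible_window(30.0, 0.0))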

6.
With ever-increasing display resolution for wide field-of-view displays, such as head-mounted displays or 8k projectors, shading has become the major computational cost in rasterization. To reduce this effort, we propose an algorithm that shades only the visible features of the image while cost-effectively interpolating the remaining features without affecting perceived quality. In contrast to previous approaches, we do not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material, or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments, while a pull-push algorithm interpolates the radiance for the rest of the image. Our approach imposes no restrictions on the shading performed. We conducted a number of psycho-visual experiments to validate the scene- and task-independence of our approach. The number of fragments that need to be shaded is reduced by 50% to 80%, and the algorithm scales favorably with increasing resolution and field of view, making it well suited for head-mounted displays and wide-field-of-view projection.
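The following sketch illustrates one way such a perceptual sampling decision could look, reduced to just two of the factors named above (acuity falloff and local contrast); the falloff constant and function names are illustrative, not the paper's actual scheme, which also models eye motion and brightness adaptation.

    # A minimal sketch: a fragment is shaded only if a probability combining
    # acuity falloff and local contrast exceeds a random threshold; skipped
    # fragments would later be filled by pull-push interpolation.
    import math, random

    def shade_probability(eccentricity_deg, local_contrast):
        acuity = math.exp(-eccentricity_deg / 15.0)   # illustrative falloff
        return min(1.0, acuity + local_contrast)      # contrast protects edges

    def should_shade(eccentricity_deg, local_contrast, rng=random.Random(42)):
        return rng.random() < shade_probability(eccentricity_deg, local_contrast)

    print(should_shade(2.0, 0.1))    # near the fovea: almost always shaded
    print(should_shade(40.0, 0.05))  # far periphery, low contrast: often skipped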

7.
In this paper, we present a simple and efficient method for correcting the asymmetric and nonlinear geometric distortion of a head-mounted display (HMD). The method divides the object space into a number of quadratic triangular elements and applies a quadratic predistortion to each image section, so that it can handle distortions of arbitrarily complex shape caused by decentered and tilted optics and improve calibration accuracy; errors introduced during fabrication and assembly can also be eliminated. We investigated a quadratic division model and two types of linear division models for correcting the optical distortion of an off-axis HMD. Experimental results demonstrated that the quadratic division model converged faster and produced higher accuracy than the typical linear models: the average root mean square error (RMSE) after distortion correction was one pixel, and the maximum standard deviations across all rows and columns were 0.67 and 0.57 pixels, respectively. In addition, deformation continuity is maintained by using quadratic elements with connected midside nodes.
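For illustration, the following sketch interpolates values over a six-node quadratic triangle (three corner and three midside nodes), the element type named above, using the standard quadratic shape functions; the distortion offsets assigned to the nodes are hypothetical.

    # A minimal sketch of interpolation over a six-node quadratic triangle,
    # evaluated at barycentric coordinates; node values are hypothetical
    # per-node predistortion offsets (dx, dy).
    import numpy as np

    def quad_triangle_interp(bary, node_values):
        """Interpolate with the standard quadratic (T6) shape functions."""
        l1, l2, l3 = bary
        N = np.array([
            l1 * (2*l1 - 1), l2 * (2*l2 - 1), l3 * (2*l3 - 1),  # corner nodes
            4*l1*l2, 4*l2*l3, 4*l3*l1,                          # midside nodes
        ])
        return N @ np.asarray(node_values)

    offsets = np.array([[0.0, 0.0], [1.2, 0.1], [0.3, 2.0],    # corners
                        [0.5, 0.0], [0.8, 1.0], [0.1, 0.9]])   # midside
    print(quad_triangle_interp((1/3, 1/3, 1/3), offsets))      # centroid value

Because adjacent elements share their midside nodes, the interpolated predistortion is continuous across element boundaries, which is the continuity property the abstract mentions.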

8.
Floating three-dimensional (3D) display enables direct interaction between human hands and virtual 3D images, offering natural and effective augmented reality interaction. In this study, we propose a novel floating autostereoscopic display, combining a head-tracking lenticular display with an image projection system, that presents observers with an accurate 3D image floating in midair, with no optical elements between the observers and the virtual image. Combined with a gesture recognition device, the proposed system achieves in situ augmented reality interaction with the floating 3D image. A distortion correction method is developed to achieve 3D display with accurate spatial information, and a coordinate calibration method is designed to improve the accuracy of the in situ interaction. Experiments verify the feasibility of the proposed system, and the results show its potential for human-computer interaction in medicine and the life sciences.
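The abstract does not detail its coordinate calibration method; one standard approach to registering gesture-sensor coordinates with display coordinates is a least-squares rigid transform from paired points (the Kabsch algorithm), sketched below with synthetic data.

    # A minimal sketch (the paper's own calibration procedure may differ):
    # find R, t with Q ~= R @ P + t from paired 3D points via SVD.
    import numpy as np

    def rigid_calibrate(P_sensor, Q_display):
        """Least-squares rigid transform between two paired 3D point sets (rows)."""
        cp, cq = P_sensor.mean(axis=0), Q_display.mean(axis=0)
        H = (P_sensor - cp).T @ (Q_display - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cq - R @ cp

    P = np.random.default_rng(1).random((5, 3))
    true_t = np.array([0.1, -0.2, 0.05])
    Q = P + true_t                                      # pure translation case
    R, t = rigid_calibrate(P, Q)
    print(np.round(R, 3), np.round(t, 3))               # ~identity, ~true_t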

9.
This paper presents an interactive digital painting system that allows a user to draw graffiti on a virtual 3D canvas with a digital spray can. The system visualizes a stereoscopic representation of the canvas by tracking the user's head. It also emulates real-time spray painting by tracking the spray can in the user's hand and sensing the button pressure of the spray device. After painting a 3D object, the user can interact with the object on the display and see it flying in the 3D environment through a tracked head-mounted display. As the results of our evaluation demonstrate, the system provides a natural and realistic experience that closely resembles real graffiti. Copyright © 2015 John Wiley & Sons, Ltd.

10.
In virtual reality (VR) applications, content is usually generated by creating a 360° video panorama of a real-world scene. Although many capture devices are being released, obtaining high-resolution panoramas and displaying the virtual world in real time remain challenging because of the computationally demanding nature of the task. In this paper, we propose a real-time 360° video foveated stitching framework that renders the scene at different levels of detail, aiming to create a high-resolution panoramic video in real time that can be streamed directly to the client. Our foveated stitching algorithm takes videos from multiple cameras as input and, combined with measurements of human visual attention (i.e., an acuity map and a saliency map), greatly reduces the number of pixels to be processed. We further parallelize the algorithm on the GPU to achieve a responsive interface, and we validate our results via a user study. Our system accelerates graphics computation by a factor of 6 on a Google Cardboard display.
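A minimal sketch of how an acuity map and a saliency map could jointly select a per-region level of detail for stitching; the falloff constant, thresholds, and function name are illustrative rather than the paper's actual formulation.

    # A minimal sketch: choose a coarser level of detail (LOD) as gaze
    # eccentricity grows, unless the region is salient enough to protect.
    import math

    def stitch_lod(px, py, gaze_x, gaze_y, saliency, deg_per_px=0.05):
        ecc_deg = math.hypot(px - gaze_x, py - gaze_y) * deg_per_px
        score = math.exp(-ecc_deg / 10.0) + saliency     # acuity + saliency
        if score > 0.8:
            return 0        # full resolution near the fovea or salient regions
        return 1 if score > 0.4 else 2                   # coarser mip levels

    print(stitch_lod(960, 540, 960, 540, saliency=0.0))  # at gaze: LOD 0
    print(stitch_lod(0, 0, 960, 540, saliency=0.1))      # far periphery: LOD 2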

11.
We have developed head-mounted displays with high transmittance and luminance that can be used safely outdoors without dimming glasses. We first specified the required optical performance by considering the user's safety and usability: to ensure that the user can recognize both the surrounding environment and the image of the head-mounted display, we set target specifications of a transmittance of at least 85% and a luminance contrast ratio of at least 1.15 for a solid white display pattern. We then developed a beam-splitter-array waveguide to achieve these requirements; it offers both high efficiency and a high see-through property. To determine the configuration of the waveguide, we performed optical ray-trace simulations, and we established a versatile waveguide measurement method applicable to different waveguide types. Using the waveguide we developed, we built a head-mounted display (HMD) prototype with a high transmittance of 94% and a high luminance of 4.8 × 10³ cd/m², and thus a luminance contrast ratio of 1.25 under the sun. With these advantages, our HMD is suitable for outdoor use, including work-support applications where a dimming effect is not desirable and the HMD is used in direct sunlight.
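As a sanity check on the reported numbers (not a computation from the paper), assuming the common definition of luminance contrast ratio as (background + display) / background, the prototype's figures imply a sunlit-scene luminance in a plausible range:

    # Worked arithmetic under the stated assumption about the CR definition.
    T = 0.94                     # waveguide transmittance
    L_display = 4.8e3            # cd/m^2, displayed white pattern
    CR = 1.25                    # reported luminance contrast ratio in sunlight

    L_background = L_display / (CR - 1.0)   # see-through luminance at the eye
    L_scene = L_background / T              # scene luminance before the waveguide
    print(f"{L_background:.0f} cd/m^2 at the eye, {L_scene:.0f} cd/m^2 scene")
    # ~19200 and ~20400 cd/m^2: consistent with sunlit outdoor surfaces.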

12.
This paper focuses on how the shadows of virtual objects, as well as differences in alignment between virtual and real lighting, influence distance perception in optical see-through (OST) augmented reality (AR). Four hypotheses are proposed: (H1) participants underestimate distances in OST AR; (H2) virtual objects' shadows improve distance-judgment accuracy in OST AR; (H3) shadows with different levels of realism influence distance perception differently; and (H4) different levels of lighting misalignment between real and virtual lights influence distance perception differently. Two experiments were designed with an OST head-mounted display (HMD), the Microsoft HoloLens: participants had to match the position of a virtual object displayed in the OST-HMD with a real target, and distance-judgment accuracy was recorded under the different shadow and lighting conditions. The results validate hypotheses H2 and H4 but, surprisingly, showed no impact of the shape of virtual shadows on distance-judgment accuracy, thus rejecting hypothesis H3. Regarding hypothesis H1, we detected a trend toward underestimation; given the high variance of the data, more experiments are needed to confirm this result. The study also reveals that perceived distance errors and trial completion times increase with target distance.

13.
A system for assisting in microneurosurgical training and for delivering an interactive mixed reality surgical experience live was developed and trialled on hospital premises. An interactive experience from the neurosurgical operating theater was presented, together with associated medical content, on the virtual reality eyewear of remote users. Details of the stereoscopic 360-degree capture, surgical imaging equipment, signal delivery, and display systems are presented, and the results of the presence-experience and visual-quality questionnaires are discussed. Users reported positive questionnaire scores on topics related to the user experience achieved in the trial.

14.
Previous research has demonstrated a loss of helmet-mounted display (HMD) legibility for users exposed to whole-body vibration. A pair of human factors studies was conducted to evaluate the effect of whole-body vibration on the eye, head, and helmet movements of seated HMD users performing simple fixation and smooth-pursuit tracking tasks. These experiments confirmed the presence of vertical eye motion consistent with the vestibulo-ocular reflex (VOR). Helmet slippage was also shown to occur, which can exacerbate the loss of display legibility. The largest eye-movement amplitudes were observed during exposure to sinusoidal vibration in the 4-6 Hz range, consistent with the frequencies that past research has associated with whole-body resonance and the largest decrease in display legibility. Further, the measured eye movements appeared to be correlated with both the angular acceleration of the user's head and the angular slippage of the user's helmet. This research demonstrates that the loss of legibility while wearing HMDs likely results from a combination of VOR-triggered eye movements and movement of the display itself. Future compensation algorithms should consider adjusting the display in response to both VOR-triggered eye motion and HMD motion.
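A minimal, heavily simplified sketch of the kind of compensation the conclusion calls for, counter-shifting the displayed symbology using both a VOR gain on head rotation and the measured helmet slip; the gain value and sign conventions are assumptions, not results from the studies.

    # A minimal sketch (not the authors' algorithm): shift the displayed image
    # to follow the VOR-driven eye rotation and to cancel helmet slippage.
    def display_offset_deg(head_angle_deg, helmet_slip_deg, vor_gain=0.95):
        eye_counter_rotation = -vor_gain * head_angle_deg   # VOR stabilizes gaze
        # the image must move with the eye and against the slipped helmet
        return eye_counter_rotation - helmet_slip_deg

    print(display_offset_deg(head_angle_deg=2.0, helmet_slip_deg=0.5))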

15.
A head-mounted light field display based on integral imaging is considered one of the promising methods that can render correct or nearly correct focus cues and address the well-known vergence-accommodation conflict in head-mounted displays. Despite its great potential, it still suffers from some of the same limitations as conventional integral-imaging-based displays, such as low spatial resolution and crosstalk. In this paper, we present a prototype design using a tunable lens and an aperture array to render 3D scenes over a large depth range while maintaining high image quality and minimizing crosstalk. Experimental results verify the design and show that it can significantly improve the viewing experience.

16.
A method is described for determining the extents of a qualified viewing space (QVS) based on repeatable and reproducible luminance measurements of augmented and virtual reality near-eye displays. The QVS mapping can also use other display performance metrics, such as (1) Michelson contrast, (2) the modulation transfer function, or (3) color, as boundary-condition parameters. We describe the use of a tele-spectroradiometer with a 4-mm-diameter entrance pupil and a 1° to 2° field of view to determine the luminance and color uniformity of the virtual image; a 1-mm-diameter entrance pupil is used to map the QVS boundaries based on the luminance at the center of the virtual image. Luminance measurement results from a pair of binocular augmented reality display glasses are presented for three separate eye-relief planes of the QVS for both eyes. The data are further reduced to a perimeter profile of the QVS, given by the 50%-of-peak-luminance boundary points in each eye-relief plane.
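A minimal sketch, assuming luminance has been measured on a regular grid within one eye-relief plane, of reducing those measurements to the 50%-of-peak boundary cells that outline the QVS perimeter; the grid values are synthetic.

    # A minimal sketch: cells at or above 50% of peak luminance that touch a
    # below-threshold (or out-of-grid) neighbor form the QVS perimeter.
    import numpy as np

    def qvs_boundary(luminance_grid, threshold=0.5):
        """Return (row, col) indices of cells on the 50%-of-peak perimeter."""
        inside = luminance_grid >= threshold * luminance_grid.max()
        boundary = []
        rows, cols = luminance_grid.shape
        for r in range(rows):
            for c in range(cols):
                if not inside[r, c]:
                    continue
                neighbors = [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                if any(not (0 <= rr < rows and 0 <= cc < cols) or not inside[rr, cc]
                       for rr, cc in neighbors):
                    boundary.append((r, c))
        return boundary

    grid = np.array([[10,  40,  40, 10],
                     [40, 100,  90, 40],
                     [40,  95,  90, 40],
                     [10,  40,  40, 10]], dtype=float)
    print(qvs_boundary(grid))   # the four central cells form the perimeter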

17.
Performing typical network tasks such as node scanning and path tracing can be difficult in large and dense graphs. To alleviate this problem, we use eye tracking as an interactive input to detect the tasks users intend to perform and then produce unobtrusive visual changes that support them. First, we introduce a novel fovea-based filtering that dims edges whose endpoints are far removed from the user's view focus. Second, we highlight edges that are being traced at any given moment or have been the focus of recent attention. Third, we track recently viewed nodes and increase the saliency of their neighborhoods. All visual responses are unobtrusive and easily ignored, to avoid unintentional distraction and to account for the imprecise, low-resolution nature of eye tracking. We also introduce a novel gaze-correction approach that relies on knowledge of the network layout to reduce eye-tracking error. Finally, we present results from a controlled user study showing that our methods led to a statistically significant accuracy improvement in one of two network tasks and that our gaze-correction algorithm enables more accurate eye-tracking interaction.
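A minimal sketch of the fovea-based filtering idea: an edge keeps full opacity when either endpoint is near the gaze point and fades with distance, without ever disappearing. The falloff radius, floor opacity, and function name are illustrative, not the paper's parameters.

    # A minimal sketch: edges far from the gaze are dimmed, never hidden,
    # so the change stays unobtrusive and easy to ignore.
    import math

    def edge_opacity(p1, p2, gaze, fovea_radius=150.0):
        """Full opacity near the gaze; fades with distance to a visible floor."""
        d = min(math.dist(p1, gaze), math.dist(p2, gaze))
        return max(0.15, min(1.0, fovea_radius / max(d, 1e-6)))

    gaze = (400.0, 300.0)
    print(edge_opacity((390, 310), (800, 700), gaze))  # near gaze -> 1.0
    print(edge_opacity((50, 40), (900, 80), gaze))     # far edge -> dimmed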

18.
For augmented reality to be applied successfully, accurate projection information for the viewing device is required, which means the optical see-through head-mounted display in use must be calibrated. Based on the composition of our laboratory's optical see-through augmented reality system, a calibration algorithm was selected for the virtual camera (the combination of the head-mounted display and the human eye). Considering the influence of degenerate point configurations on the projection matrix, the selection of the number of corresponding calibration points and the trajectory of the user's head motion were improved. Experimental results show that the calibration quality improved markedly and that the success rate also increased noticeably.
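For context, here is a minimal sketch of the classical direct linear transform (DLT) estimate of a 3x4 projection matrix from 3D-2D correspondences, the kind of calibration in which degenerate point configurations (for example, all points coplanar) make the linear system ill-conditioned; this is a textbook method, not necessarily the algorithm selected in the paper.

    # A minimal DLT sketch: estimate the virtual camera's projection matrix
    # (up to scale) from N >= 6 world-to-image correspondences.
    import numpy as np

    def dlt_projection_matrix(X3d, x2d):
        """X3d: (N,3) world points, x2d: (N,2) image points, N >= 6."""
        rows = []
        for (X, Y, Z), (u, v) in zip(X3d, x2d):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        A = np.asarray(rows, dtype=float)
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)       # null-space vector, up to scale

    # Synthetic check: project random non-coplanar points with a known P.
    rng = np.random.default_rng(3)
    P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
    X = rng.random((8, 3))
    xh = (P_true @ np.hstack([X, np.ones((8, 1))]).T).T
    x = xh[:, :2] / xh[:, 2:]
    P_est = dlt_projection_matrix(X, x)
    print(np.round(P_est / P_est[2, 3] * P_true[2, 3], 3))  # ~P_true up to scale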

19.
Text is a crucial component of 3-D environments and virtual worlds for user interfaces and wayfinding. Implementing text with standard antialiased texture mapping leads to blurry, illegible writing that hinders usability and navigation. While supersampling removes some of these artifacts, distracting artifacts can still impede legibility, especially on recent high-resolution head-mounted displays. We propose an analytic antialiasing technique, designed to run at real-time rates, that efficiently computes the coverage of text glyphs over pixel footprints. It decomposes glyphs into piecewise biquadratic segments and trapezoids that can be quickly area-integrated over a pixel footprint to provide crisp, legible antialiased text, even when mapped onto an arbitrary surface in a 3-D virtual environment.
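A minimal sketch of the analytic-coverage idea for the trapezoid pieces: clip the trapezoid to a pixel footprint (Sutherland-Hodgman) and take the exact area of the intersection via the shoelace formula. The biquadratic pieces would need an additional closed-form integral not shown here; all names are illustrative.

    # A minimal sketch: exact area of a trapezoid clipped to one unit pixel.
    def clip(poly, inside, intersect):
        out = []
        for i, cur in enumerate(poly):
            prev = poly[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def pixel_coverage(poly, x0, y0):
        """Area of the polygon clipped to the pixel [x0,x0+1] x [y0,y0+1]."""
        for axis, bound, keep_less in [(0, x0, False), (0, x0 + 1, True),
                                       (1, y0, False), (1, y0 + 1, True)]:
            def inside(p, a=axis, b=bound, k=keep_less):
                return p[a] <= b if k else p[a] >= b
            def intersect(p, q, a=axis, b=bound):
                t = (b - p[a]) / (q[a] - p[a])
                return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
            poly = clip(poly, inside, intersect)
            if not poly:
                return 0.0
        area = 0.0
        for i, (x, y) in enumerate(poly):            # shoelace formula
            xn, yn = poly[(i + 1) % len(poly)]
            area += x * yn - xn * y
        return abs(area) / 2.0

    trapezoid = [(0.2, 0.0), (1.8, 0.0), (1.5, 1.0), (0.5, 1.0)]
    print(pixel_coverage(trapezoid, 0, 0))   # 0.65: fraction of pixel covered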

20.
Near-eye light field displays based on integral imaging through a microlens array bring attractive features such as ultra-compact volume and freedom from the vergence-accommodation conflict to head-mounted displays with virtual or augmented reality functions. To enable optimal design and analysis of such systems, it is desirable to have a physical model that incorporates all factors affecting image formation, including diffraction, aberration, defocus, and pixel size. In this study, using the fundamental Huygens-Fresnel principle and the Arizona eye model with adjustable accommodation, we develop an image-formation model that numerically calculates the retinal light field image with near-perfect accuracy, and we verify it experimentally with a prototype system. Based on this model, the visual resolution is analyzed for different fields of view (FOVs), demonstrating a rapid resolution decay with FOV caused by off-axis aberration. Finally, resolution variation as a function of image depth is analyzed for systems with different central depth planes; significantly, the resolution decay is shown to plateau once the image depth is large enough, which differs from real-image-type light field displays.
