Similar Articles
20 similar articles found (search time: 31 ms)
1.
The virtual image distance in an augmented reality (AR) or virtual reality (VR) device is an important factor in determining its performance. In this paper, a method for measuring the virtual image distance and its uniformity is proposed. The measurement setup consists of a two-dimensional spatial sensor array on a translational stage and a pinhole array plate placed in front of the AR or VR device. As the distance between the pinhole plate and the sensor array was changed, the positions of the rays through each pinhole were measured by the two-dimensional spatial sensor array. The ray trajectories through each pinhole were obtained by fitting these positions to straight lines, and the relative distances between these trajectories were calculated as a function of the distance from the pinhole plate. This method is effective for measuring uniformity, such as the azimuthal dependence of the virtual image distance or its dependence on the distance from the optical axis of the VR lens.
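A minimal sketch of the ray-fitting step described above, assuming the sensor positions are sampled at several known distances z from the pinhole plate; the virtual image distance is estimated here as the depth at which the back-extrapolated ray trajectories converge. The z samples, ray values, and convergence criterion are illustrative assumptions, not values from the paper.

    import numpy as np

    def fit_ray_trajectories(z_samples, positions):
        """Fit x(z) = a*z + b for the ray through each pinhole.

        z_samples : (M,) distances of the sensor array from the pinhole plate
        positions : (N, M) measured x-positions of the N rays at each distance
        Returns slopes a (N,) and intercepts b (N,).
        """
        A = np.vstack([z_samples, np.ones_like(z_samples)]).T     # (M, 2)
        coeffs, *_ = np.linalg.lstsq(A, positions.T, rcond=None)  # (2, N)
        return coeffs[0], coeffs[1]

    def virtual_image_distance(slopes, intercepts):
        """Estimate the depth (negative z, behind the pinhole plate) where the
        back-extrapolated rays converge, averaged over all ray pairs."""
        # Each ray: x = a*z + b.  Intersection of rays i and j:
        # a_i*z + b_i = a_j*z + b_j  ->  z = -(b_i - b_j) / (a_i - a_j)
        zs = []
        n = len(slopes)
        for i in range(n):
            for j in range(i + 1, n):
                da = slopes[i] - slopes[j]
                if abs(da) > 1e-9:
                    zs.append(-(intercepts[i] - intercepts[j]) / da)
        return float(np.mean(zs))

    # Hypothetical measurement: 5 rays sampled at 4 sensor distances (mm).
    z_samples = np.array([50.0, 100.0, 150.0, 200.0])
    true_slopes = np.array([-0.010, -0.005, 0.0, 0.005, 0.010])
    true_b = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
    positions = true_slopes[:, None] * z_samples[None, :] + true_b[:, None]
    a, b = fit_ray_trajectories(z_samples, positions)
    print(virtual_image_distance(a, b))  # ~ -2000 mm, i.e. 2 m behind the plate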

2.
A new approach to resolution enhancement of an integral-imaging (II) three-dimensional display using multi-directional elemental images is proposed. The proposed method uses a special lens made up of nine pieces of a single Fresnel lens, collected from different parts of the same lens. This composite lens is placed in front of the lens array so that it delivers nine sets of directional elemental images to the lens array. These elemental images are overlapped on the lens array and produce nine point light sources per elemental lens at different positions in the focal plane of the lens array. The nine sets of elemental images are projected by a high-speed digital micromirror device and are tilted by a two-dimensional scanning-mirror system, maintaining the time-multiplexing sequence for the nine pieces of the composite lens. In this method, the concentration of point light sources in the focal plane of the lens array is nine times higher, i.e., the distance between two adjacent point light sources is three times smaller than in a conventional II display; hence, the resolution of the three-dimensional image is enhanced.

3.
In projection-based Virtual Reality (VR) systems, typically only one head-tracked user views stereo images rendered from the correct view position. For other users, who are presented a distorted image that moves with the first user's head motion, it is difficult to correctly view and interact with 3D objects in the virtual environment. In close-range VR systems, such as the Virtual Workbench, distortion effects are especially large because objects are within close range and users are relatively far apart, so multi-user collaboration on these systems proves to be difficult. In this paper, we analyze the problem and describe a novel, easy-to-implement method to prevent and reduce image distortion and its negative effects on close-range interaction task performance. First, our method combines a shared camera model with view distortion compensation: it minimizes the overall distortion for each user, while important user-personal objects such as interaction cursors, rays, and controls remain distortion-free. Second, our method retains co-location for interaction techniques to make interaction more consistent. We performed a user experiment on our Virtual Workbench to analyze user performance under distorted view conditions with and without our method. Our findings demonstrate the negative impact of view distortion on task performance and the positive effect our method introduces, indicating that it can enhance the multi-user collaboration experience on close-range, projection-based VR systems.

4.
Building a human-centered editable world can be fully realized in a virtual environment. Both mixed reality (MR) and virtual reality (VR) are feasible platforms for supporting such editing. Based on the current development of MR and VR, we present a vision-tangible interactive display method and its implementation in both MR and VR. We address MR and VR together because the proposed method applies similarly to both. The resulting editable mixed- and virtual-reality system is useful for studies that exploit it as a platform. In this paper, we construct a virtual reality environment based on the Oculus Rift and an MR system based on a binocular optical see-through head-mounted display. In the MR system, which involves manipulating a Rubik's cube, and in the VR system, which involves deforming virtual objects, the proposed vision-tangible interactive display method is used to provide users with a more immersive environment. Experimental results indicate that the vision-tangible interactive display method can improve the user experience and is a promising way to make virtual environments better.

5.
A circular camera system employing an image-based rendering technique is proposed for displaying a 3-D image of a real object that can be observed from multiple surrounding viewing points on a 3-D display. The system captures the light-ray data needed for reconstructing three-dimensional (3-D) images by reconstructing parallax rays from multiple images captured at multiple viewpoints around the real object. An interpolation algorithm that is effective in reducing the number of component cameras in the system is also proposed. The interpolation and the experimental results, obtained on our previously proposed 3-D display system based on the reconstruction of parallax rays, are described. When the radius of the proposed circular camera array was 1100 mm, the central angle of the camera array was 40°, and the radius of the real 3-D object was between 60 and 100 mm, the proposed camera system, consisting of 14 cameras, could obtain sufficient 3-D light-ray data to reconstruct 3-D images on the 3-D display.

6.
An experiment was conducted to investigate the accuracy of distance judgment in a frontal plane of projection-based stereoscopic environments. The targets were presented at nine distinct frontal-plane positions and at three depth levels. Eighteen right-handed participants with self-declared normal visual acuity reached the target, which was either continuously visible or presented briefly, by holding pointing sticks. All combinations of the experimental conditions were repeated in an equivalent real-world environment. Accuracies of the judgments were then computed from three-dimensional data collected by a motion system composed of six infrared cameras. Compared with about 94% accuracy in the physical world, accuracy in the frontal plane was only about 85% in the stereoscopic environment. The results also revealed more accurate judgment when the target was continuously visible. Furthermore, accuracy was affected by the egocentric distance of the frontal plane from the participant's position. The study concluded that the compression in the frontal plane, together with the underestimation of depth reported by the majority of previous studies, could indicate that space compression in virtual environments occurs in all three dimensions. These findings can be used as guidelines for developers and content designers in properly locating virtual targets and selecting efficient interaction modes.
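A small sketch of one plausible way such accuracy figures could be computed from motion-capture data, assuming accuracy is defined as 100% minus the pointing error relative to the target's egocentric distance. Both the definition and the sample numbers are assumptions for illustration, not taken from the study.

    import numpy as np

    def pointing_accuracy(pointed, targets, eye):
        """Percent accuracy of distance judgments from pointing data.

        pointed : (N, 3) reached positions from the motion-capture system
        targets : (N, 3) true target positions
        eye     : (3,)   participant viewpoint position
        Accuracy per trial = 100 * (1 - |pointed - target| / |target - eye|).
        """
        err = np.linalg.norm(pointed - targets, axis=1)
        ego = np.linalg.norm(targets - eye, axis=1)
        return 100.0 * (1.0 - err / ego)

    # Hypothetical trials (metres): targets ~0.6 m away, a few cm of pointing error.
    eye = np.array([0.0, 1.2, 0.0])
    targets = np.array([[0.1, 1.2, 0.6], [-0.1, 1.3, 0.6], [0.0, 1.1, 0.6]])
    pointed = targets + np.array([[0.04, 0.0, 0.02], [0.0, 0.05, 0.0], [0.03, -0.03, 0.01]])
    print(pointing_accuracy(pointed, targets, eye).mean())  # roughly 90+%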

7.
Floating three-dimensional (3D) display enables direct interaction between human hands and virtual 3D images, which offers natural and effective augmented reality interaction. In this study, we propose a novel floating autostereoscopic display that combines a head-tracking lenticular display with an image projection system to provide observers with an accurate 3D image floating in midair, without any optical elements between the observers and the virtual 3D image. Combined with a gesture recognition device, the proposed system achieves in situ augmented reality interaction with the floating 3D image. A distortion correction method is developed to achieve 3D display with accurate spatial information, and a coordinate calibration method is designed to improve the accuracy of the in situ interaction. Experiments were performed to verify the feasibility of the proposed system, and the results show its potential for human-computer interaction in medicine and the life sciences.
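The coordinate calibration between the gesture-recognition device and the floating 3D image is not detailed in the abstract; a common minimal approach, sketched here under that assumption, is to fit an affine transform from hand positions reported by the gesture sensor to the corresponding points of the floating image by least squares. All point values below are illustrative.

    import numpy as np

    def fit_affine_3d(src, dst):
        """Least-squares affine map dst ~= src @ A + t.

        src, dst : (N, 3) corresponding points in sensor and display coordinates
        Returns (A, t) with A a 3x3 matrix and t a translation vector.
        """
        X = np.hstack([src, np.ones((len(src), 1))])      # (N, 4) homogeneous
        M, *_ = np.linalg.lstsq(X, dst, rcond=None)       # (4, 3)
        return M[:3], M[3]

    # Hypothetical calibration: the user touches a few known points of the floating image.
    sensor_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
    display_pts = sensor_pts * 0.95 + np.array([0.02, -0.01, 0.30])   # simulated scale/offset
    A, t = fit_affine_3d(sensor_pts, display_pts)
    hand = np.array([0.5, 0.5, 0.5])
    print(hand @ A + t)    # hand position mapped into display coordinates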

8.
This study attempts to apply the principles of constructivism and virtual reality (VR) technologies to the computer-aided design (CAD) curriculum by integrating networking, CAD, and VR into a web-based learning environment. Through VR technologies, the traditional two-dimensional (2D) computer graphics course is expected to expand into a three-dimensional (3D) real-time simulation course. VR technologies provide a novel way to enhance users' visualisation of complex three-dimensional graphics and environments. Experience and environmental interaction allow users to perceive more readily the dimensional interrelations of graphics that are typically portrayed through static multiview or pictorial representations. A web-based learning system (WebDeGrator) has been developed to simulate a computer graphics learning environment. Future developments of the proposed web-based learning framework are also discussed.

9.
This article proposes a 3-dimensional (3D) vision-based ambient user interface as an interaction metaphor that exploits a user's personal space and dynamic gestures. In human-computer interaction, to provide natural interaction with a system, a user interface should not be a bulky or complicated device. In this regard, the proposed ambient user interface uses an invisible personal space to remove cumbersome devices, where the invisible personal space is virtually augmented by exploiting 3D vision techniques. For natural interaction with the user's dynamic gestures, the user of interest is extracted from the image sequences by the proposed user segmentation method, which can retrieve 3D information from the segmented user image through 3D vision techniques and a multiview camera. With the retrieved 3D information of the user, a set of 3D boxes (SpaceSensor) can be constructed and augmented around the user; the user can then interact with the system by touching the augmented SpaceSensor. In tracking the user's dynamic gestures, the computational complexity of SpaceSensor is relatively lower than that of conventional 2-dimensional vision-based gesture tracking techniques, because only the touched positions of SpaceSensor are tracked. According to the experimental results, the proposed ambient user interface can be applied to various systems that require real-time dynamic gestures for interaction in both real and virtual environments.
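A minimal sketch of the SpaceSensor idea as described, assuming the multiview camera delivers a 3D point cloud of the segmented user: a set of axis-aligned boxes is placed around the user, and a box counts as touched when enough user points fall inside it. Box layout, point counts, and thresholds are illustrative assumptions.

    import numpy as np

    class SpaceSensor:
        """Axis-aligned 3D boxes augmented around the user; a box is 'touched'
        when at least min_points of the user's 3D points lie inside it."""

        def __init__(self, boxes, min_points=20):
            # boxes: (K, 2, 3) array of (lower corner, upper corner) per box
            self.boxes = np.asarray(boxes, dtype=float)
            self.min_points = min_points

        def touched(self, user_points):
            # user_points: (N, 3) 3D points of the segmented user (e.g. a hand)
            p = np.asarray(user_points, dtype=float)
            hits = []
            for lo, hi in self.boxes:
                inside = np.all((p >= lo) & (p <= hi), axis=1)
                hits.append(int(inside.sum()) >= self.min_points)
            return hits

    # Hypothetical setup: two boxes to the user's left and right (metres).
    sensor = SpaceSensor([[[-0.8, 0.8, 0.3], [-0.5, 1.4, 0.6]],
                          [[ 0.5, 0.8, 0.3], [ 0.8, 1.4, 0.6]]], min_points=5)
    hand = np.random.normal(loc=[0.65, 1.1, 0.45], scale=0.02, size=(50, 3))
    print(sensor.touched(hand))   # e.g. [False, True]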

10.
The image quality of autostereoscopic 3-D displays strongly depends on the user position, so characterization of the spatial luminance distribution at the user position is important. For this measurement, a method has been investigated in which a diffuser screen placed at the user position is illuminated by the 3-D display. By placing the diffuser screen and the 3-D display non-parallel, the luminance distribution at various distances can be determined. Although the accuracy of this measurement method is somewhat limited, the measuring procedure is fast and simple compared with other, more time-consuming methods.
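A short worked sketch of the tilted-screen geometry, assuming the diffuser screen is rotated by an angle θ relative to the display plane so that each horizontal position on the screen sits at a different viewing distance; this mapping from screen coordinate to distance is what lets a single tilted screen sample several distances at once. The numbers are illustrative.

    import numpy as np

    def screen_point_distances(x_screen, d0, theta_deg):
        """Distance from the display plane for each horizontal screen coordinate.

        x_screen  : (N,) positions along the tilted diffuser, measured on the screen (mm)
        d0        : distance of the screen centre from the display (mm)
        theta_deg : tilt angle between the screen and the display plane (degrees)
        """
        theta = np.radians(theta_deg)
        return d0 + x_screen * np.sin(theta)

    # Illustrative numbers: a 400 mm wide screen centred at 600 mm, tilted by 20 degrees.
    x = np.linspace(-200, 200, 5)
    print(screen_point_distances(x, d0=600.0, theta_deg=20.0))
    # -> luminance measured at these screen columns samples ~532 to ~668 mm viewing distances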

11.
To meet the requirements of finger-vein identity authentication, a finger-vein image acquisition system was taken as the research object, and an acquisition system with an adjustable infrared light source, combining a single-sided light source with a reflecting mirror, was designed. The influence of the LED light source's position and angle on vein image quality was studied, a finger-vein authentication method based on image quality evaluation was proposed, and the method was verified through practical measurements. The experimental results show that vein images of the same quality as those acquired with a conventional front-lit source can be obtained, with a higher authentication pass rate of 98.8%, and the system is easier for users to operate.

12.
A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor based on a single camera. The camera, fixed on a robot, is inclined with respect to the floor. A measurement model involving the camera's extrinsic parameters, such as its height and pitch angle, is described. A single image of a chessboard pattern placed on the floor is enough to calibrate the camera's extrinsic parameters once the intrinsic parameters have been calibrated. The position of an object on the floor can then be computed with the measurement model. Furthermore, the height of an object can be calculated from paired points on a vertical line that share the same position on the floor. Compared with conventional methods used to estimate positions on a plane, this method can obtain 3D positions. An indoor experiment verifies the accuracy and validity of the proposed method.
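A minimal sketch of the floor-plane part of such a measurement model, assuming a pinhole camera with known intrinsics mounted at height h and pitched down by an angle about its x-axis; a pixel is back-projected to a ray in the world frame and intersected with the floor plane Z = 0. Variable names and numbers are illustrative, not the paper's.

    import numpy as np

    def floor_position(u, v, K, height, pitch_deg):
        """3D position on the floor (world frame, Z = 0) seen at pixel (u, v).

        K         : 3x3 camera intrinsic matrix
        height    : camera height above the floor (same units as the result)
        pitch_deg : downward tilt of the optical axis from the horizontal
        World frame: X right, Y forward along the floor, Z up; camera at (0, 0, height).
        """
        pitch = np.radians(pitch_deg)
        # Columns are the camera axes (x right, y down in image, z along the optical
        # axis) expressed in the world frame for a camera pitched down by `pitch`.
        R = np.array([[1.0, 0.0, 0.0],
                      [0.0, -np.sin(pitch), np.cos(pitch)],
                      [0.0, -np.cos(pitch), -np.sin(pitch)]])
        ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera frame
        ray_world = R @ ray_cam
        cam_pos = np.array([0.0, 0.0, height])
        t = -cam_pos[2] / ray_world[2]                       # intersect with plane Z = 0
        return cam_pos + t * ray_world

    # Illustrative camera: 640x480, f = 500 px, mounted 0.5 m high, pitched down 30 degrees.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    print(floor_position(320, 240, K, height=0.5, pitch_deg=30.0))
    # -> the floor point hit by the optical axis, about 0.87 m ahead of the robot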

13.
With virtual reality (VR) technology, complex environments can be simulated. Traditional sports courses are constrained by stadiums, equipment, and safety considerations; by visualizing abstract and complex theoretical knowledge about sport, VR can support scientific and accurate teaching and learning and convey more technical knowledge. Unlike traditional physical education activities, VR technology is not limited to generating panoramas that merely reproduce sports scenes, and on this basis it can effectively improve students' enthusiasm and athletic ability. Because VR technology is still immature and its equipment is expensive, it is currently used only cautiously and is not yet popular in education. In this context, applying virtual reality technology to sports training can yield a more effective training effect. The first part of this paper discusses research on virtual reality panoramas, the second part research on virtual reality video, and the third part the application of virtual reality in physical education teaching; the fourth part presents the conclusions and the investigation process. The study found that VR, as a new technology, has the prospect of a wide range of applications. As long as it is used scientifically and rationally, VR can greatly promote the improvement of sports teaching. It can increase users' interest by immersing them in a variety of preset virtual locations; a VR device that lets users interact well within a virtual scene therefore improves the user experience.

14.
In this paper, we propose a light-field resampling method for generating the elemental image array (EIA) of integral imaging. The proposed method breaks through the constraints between the parameters of the traditional integral imaging recording device and display device, allowing a new EIA suited for display on an integral imaging display device with arbitrary parameters to be generated from any given recorded EIA. Three-dimensional display results show the correctness and superiority of the proposed method.
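The abstract does not give the resampling formulas, but a common nearest-neighbour formulation, sketched here as an assumption, parameterizes each display pixel as a ray (lens-centre position, direction) and picks the recorded pixel whose ray is closest; the sketch is 1D per axis and can be applied separably to rows and columns. All parameter names and values are illustrative.

    import numpy as np

    def resample_indices(n_lens_d, npx_d, pitch_d, gap_d,
                         n_lens_r, npx_r, pitch_r, gap_r):
        """Nearest-neighbour map (1D) from display-EIA pixels to recorded-EIA pixels.

        Each pixel k under lens i represents the ray through the lens centre
        x_i = i * pitch with slope (pixel offset) / gap.  For every display pixel
        we pick the recorded lens nearest to the ray's lens-plane position and,
        inside it, the recorded pixel with the closest slope.
        """
        out = np.empty((n_lens_d, npx_d), dtype=int)
        for i in range(n_lens_d):
            lens_x = i * pitch_d
            for k in range(npx_d):
                off = (k - (npx_d - 1) / 2) * (pitch_d / npx_d)   # pixel offset under the lens
                slope = off / gap_d
                j = int(round(lens_x / pitch_r))                  # nearest recorded lens
                j = min(max(j, 0), n_lens_r - 1)
                m = int(round(slope * gap_r / (pitch_r / npx_r) + (npx_r - 1) / 2))
                m = min(max(m, 0), npx_r - 1)
                out[i, k] = j * npx_r + m                         # flat recorded-pixel index
        return out

    # Illustrative use (one axis): resample a recorded EIA (40 lenses, 16 px each)
    # for a display with a different lens pitch and gap (50 lenses, 10 px each).
    idx = resample_indices(n_lens_d=50, npx_d=10, pitch_d=1.0, gap_d=3.0,
                           n_lens_r=40, npx_r=16, pitch_r=1.25, gap_r=2.0)
    recorded_row = np.arange(40 * 16)          # one row of the recorded EIA (flattened)
    display_row = recorded_row[idx.ravel()]    # corresponding row of the new EIA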

15.
In conventional haptic devices for virtual reality (VR) systems, a user interacts with a scene by handling a tool (such as a pen) attached to a mechanical device (i.e. an end-effector-type haptic device). If the device can 'mimic' a VR object, the user can interact directly with the VR object without the mechanical constraint of a device (i.e. an encounter-type haptic device). A new challenge for an encounter-type haptic device is displaying visual and haptic information simultaneously on a single device. We propose a new desktop encounter-type haptic device with an actively driven pen-tablet LCD panel. The proposed device is capable of providing pseudo-3D visuals and haptic information on a single device. As a result, the system gives the user a sense of interacting with a real object. To develop a proof-of-concept prototype, a compact parallel mechanism was developed and implemented. The aim of this research is to propose a new concept in haptic research. In this paper, the concept, the prototype, and some preliminary evaluation tests with the proposed system are presented.

16.
Integral imaging has three display modes, the real mode, the virtual mode, and the focused mode, each with unique display characteristics. In this paper, the accommodation responses to three-dimensional (3-D) targets reconstructed by each mode of an integral imaging display were measured and statistically analyzed. Through least-squares fitting, standard-deviation analysis, and t-tests, we found that the accommodation response to the 3-D target reconstructed by the focused mode was the most similar to that of a real target at the same depth position. Moreover, the closer a 3-D target was to the central depth plane, the steadier the accommodation response it could provide. These statistical results are helpful for the design of integral imaging display devices.
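A brief sketch of the kind of statistical comparison described, assuming accommodation responses (in diopters) measured for real targets and for 3-D targets reconstructed by one display mode at the same depths; the least-squares slope, the spread of the differences, and a paired t-test are computed with standard tools. The numbers are illustrative placeholders, not measured data.

    import numpy as np
    from scipy import stats

    # Target depths expressed as accommodation stimuli (diopters), illustrative values.
    stimulus = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    resp_real = np.array([0.55, 1.02, 1.48, 1.96, 2.40])      # responses to real targets
    resp_mode = np.array([0.60, 1.05, 1.40, 1.85, 2.20])      # responses to one II display mode

    # Least-squares fit of response vs. stimulus: a slope near 1 means the mode
    # drives accommodation much like a real target does.
    slope, intercept = np.polyfit(stimulus, resp_mode, 1)

    # Spread of the differences and a paired t-test against the real-target responses.
    diff = resp_mode - resp_real
    print(f"slope={slope:.2f}, sd of difference={diff.std(ddof=1):.3f}")
    t, p = stats.ttest_rel(resp_mode, resp_real)
    print(f"paired t-test: t={t:.2f}, p={p:.3f}")   # large p: no evidence of a difference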

17.
A method for determining the extents of a qualified viewing space (QVS) based on repeatable and reproducible luminance measurements of augmented and virtual reality near-eye displays is described. This QVS mapping can also use other display performance metrics, such as (1) Michelson contrast, (2) modulation transfer function, or (3) color, as boundary-condition parameters. We describe the use of a tele-spectroradiometer with a 4-mm-diameter entrance pupil and a 1° to 2° field of view to determine the luminance and color uniformity of the virtual image. A 1-mm-diameter entrance pupil is used to map the QVS boundaries based on the luminance at the center of the virtual image. Luminance measurement results from a pair of binocular augmented-reality display glasses, in three separate eye-relief planes of the QVS for both eyes, are presented. The data are further reduced to provide a perimeter profile of the QVS for the 50%-of-peak-luminance boundary points in each eye-relief plane.
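A small sketch of how the 50%-of-peak-luminance region of a qualified viewing space could be extracted from a grid of luminance measurements taken in one eye-relief plane. The grid spacing, synthetic luminance map, and threshold handling are illustrative assumptions.

    import numpy as np

    def qvs_region(luminance, xs, ys, fraction=0.5):
        """Return the (x, y) sample points in one eye-relief plane whose measured
        luminance is at least `fraction` of the peak, i.e. inside the QVS.

        luminance : (Ny, Nx) centre-of-image luminance map measured by scanning
                    the small-pupil detector over the plane
        xs, ys    : sample coordinates along each axis (mm)
        """
        threshold = fraction * luminance.max()
        inside = luminance >= threshold
        X, Y = np.meshgrid(xs, ys)
        return np.column_stack([X[inside], Y[inside]])

    # Illustrative synthetic map: a roughly Gaussian eyebox a few mm across.
    xs = np.linspace(-8, 8, 33)
    ys = np.linspace(-6, 6, 25)
    X, Y = np.meshgrid(xs, ys)
    lum = 3000.0 * np.exp(-(X**2 / 18.0 + Y**2 / 10.0))   # cd/m^2
    pts = qvs_region(lum, xs, ys)
    print(pts.min(axis=0), pts.max(axis=0))   # extent of the 50%-of-peak region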

18.
Subjective visual vertical (SVV) assesses the ability to perceive verticality, which is a measure of vestibular otolithic function; vestibular lesions influence this perception of verticality. We developed a method using a virtual reality (VR) display and an Android software application named 'Curator SVV'. The virtual reality SVV (Curator SVV) consisted of ten readily identifiable artworks displayed on a Samsung S6 phone inserted into a virtual reality headset. In the first study, 20 patients had their SVV assessed with two devices: (1) a commercially available SVV measurement device (VestiTest®) and (2) the virtual reality SVV using the Curator SVV application. In a second study, 32 healthy subjects had their SVV assessed by the Curator SVV application whilst sitting in a chair. In the first study, there was no significant difference between the results obtained by Curator SVV and the commercially available device (p = 0.44, paired t test; p = 0.01, test of equivalence). In the second study, the average angle measured for healthy subjects was 0.00° ± 0.85°, and the normal range (mean ± 2 SD) was ± 2° in the standard upright position. We were able to demonstrate that the Curator SVV can be readily employed as an objective, non-invasive and affordable means of assessing otolith function in the clinical context. We validated this novel methodology by finding strong quantitative parity between a standard commercial SVV unit and the VR Curator SVV method. Our very lightweight and mobile device can be employed in clinical contexts, including at the bedside and in different head and body positions.
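A short sketch of the two analyses reported, assuming paired SVV angles from the two devices for the patient group and single SVV angles for the healthy group; the paired t-test comes from scipy, and the normal range is taken as mean ± 2 SD. The sample numbers below are illustrative, not the study's data.

    import numpy as np
    from scipy import stats

    # Illustrative paired SVV angles (degrees) from the two devices for the same patients.
    vestitest = np.array([1.2, -0.8, 2.5, 0.3, -1.6, 3.1, 0.0, -2.2])
    curator   = np.array([1.0, -0.9, 2.7, 0.5, -1.4, 3.0, 0.2, -2.0])

    # Paired t-test: a large p-value means no detectable systematic difference
    # between the commercial device and the VR-based Curator SVV.
    t, p = stats.ttest_rel(vestitest, curator)
    print(f"paired t-test: t={t:.2f}, p={p:.3f}")

    # Normal range from healthy subjects: mean +/- 2 SD of their SVV angles.
    healthy = np.random.default_rng(0).normal(0.0, 0.85, size=32)   # illustrative sample
    mean, sd = healthy.mean(), healthy.std(ddof=1)
    print(f"normal range: {mean - 2*sd:.2f} to {mean + 2*sd:.2f} degrees")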

19.
Virtual reality (VR) is a technology that, based mainly on computer technology, uses computers and related equipment to construct a three-dimensional virtual world and, through interaction between the user and that virtual world, makes the user feel integrated with it. This paper introduces virtual reality technology developed with Java 3D and presents key parts of the code that implement scene creation, animation design, and interaction design. The design can produce three-dimensional visuals and support user interaction over a network, and it has already been applied in a remote power-monitoring system. The technique can also be applied to the development of similar monitoring systems.

20.
With the development of positioning technology in the information society, optical positioning has attracted increasing attention. This design is a new type of device for indoor positioning based on visible light. The device uses three white-light LEDs as light sources, collects signals with an image sensor, and uses an image-sensor imaging method as the positioning algorithm, detecting and determining the relative position of the image sensor to achieve positioning. Under normal illumination conditions, it can achieve indoor positioning without producing other interference.
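A minimal sketch of image-sensor-based visible light positioning under common simplifying assumptions not stated in the abstract: the camera faces straight up, the three LEDs sit on a ceiling of known height, and the pinhole model then gives the receiver position from each LED's image coordinates by similar triangles, with the three single-LED estimates averaged. All names and values are illustrative.

    import numpy as np

    def vlp_position(led_xy, led_height, img_uv, focal_px):
        """Estimate the receiver's (x, y) on the floor from LED image coordinates.

        led_xy     : (3, 2) known LED positions on the ceiling (metres)
        led_height : ceiling height above the image sensor (metres)
        img_uv     : (3, 2) image coordinates of each LED relative to the principal
                     point (pixels), camera facing straight up, axes aligned with the room
        focal_px   : focal length of the lens in pixels
        By similar triangles: u_i = f * (X_i - x) / H, so x = X_i - u_i * H / f.
        """
        est = led_xy - img_uv * led_height / focal_px
        return est.mean(axis=0)          # average the three single-LED estimates

    # Illustrative setup: three ceiling LEDs, 2.5 m ceiling, f = 800 px.
    leds = np.array([[1.0, 1.0], [3.0, 1.0], [2.0, 3.0]])
    receiver = np.array([2.0, 1.5])
    uv = 800.0 * (leds - receiver) / 2.5            # simulated image measurements
    print(vlp_position(leds, 2.5, uv, 800.0))       # -> [2.0, 1.5]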

