Similar Documents
20 similar documents were retrieved (search time: 46 ms).
1.
Navigation within a closed environment requires analysis of a variety of acoustic cues, a task that is well developed in many visually impaired individuals, and for which sighted individuals rely almost entirely on visual information. For blind people, the act of creating cognitive maps for spaces, such as home or office buildings, can be a long process, for which the individual may repeat various paths numerous times. While this action is typically performed by the individual on-site, it is of some interest to investigate to what extent this task can be performed off-site, at the individual's discretion. In short, is it possible for an individual to learn an architectural environment without being physically present? If so, such a system could prove beneficial for navigation preparation in new and unknown environments. The main goal of the present research can therefore be summarized as investigating the possibility of assisting blind individuals in learning a spatial environment configuration by listening to audio events and interacting with those events within a virtual reality experience. A comparison of two types of learning through auditory exploration was performed: in situ real displacement and active navigation in a virtual architecture. The virtual navigation rendered only acoustic information. Results for two groups of five participants showed that interactive exploration of virtual acoustic room simulations can provide sufficient information for the construction of coherent spatial mental maps, although some variations were found between the two environments tested in the experiments. Furthermore, the mental representation of the virtually navigated environments preserved topological and metric properties, as was also found through actual navigation.

2.
For some applications based on virtual reality technology, presence and task performance are important factors for validating the experience. Different approaches have been adopted to analyse the extent to which certain aspects of a computer-generated environment may enhance these factors, but mainly in 2D graphical user interfaces. This study explores the influence of different sensory modalities on performance and the sense of presence experienced within a 3D environment. In particular, we have evaluated visual, auditory and active haptic feedback for indicating selection of virtual objects. The effect of spatial alignment between proprioceptive and visual workspaces (co-location) has also been analysed. An experiment was conducted to evaluate the influence of these factors in a controlled 3D environment based on a virtual version of the Simon game. The main conclusions indicate that co-location must be considered in order to determine the sensory needs during interaction within a virtual environment. This study also provides further evidence that the haptic sensory modality influences presence to a greater extent, and that auditory cues can reduce selection times. The conclusions obtained provide initial guidelines that will help designers devise better selection techniques for more complex environments, such as training simulators based on VR technology, by highlighting different optimal configurations of sensory feedback.

3.
Most of the information used by people for the cognitive mapping of spaces is gathered through the visual channel. People who are blind lack the ability to collect the required visual information either in advance or in situ. This study was based on the assumption that the acquisition of appropriate spatial information (perceptual and conceptual) through compensatory sensorial channels (e.g., haptic) within a virtual environment simulating a real target space may assist people who are blind in their anticipatory exploration and cognitive mapping of the unknown space. The two main goals of the study were: (a) the development of a haptic-based multi-sensory virtual environment enabling the exploration of an unknown space and (b) the study of the cognitive mapping process of the space by people who are blind working with the multi-sensory virtual environment. The findings provide strong evidence that the work within the multi-sensory virtual environment gave the participants a robust foundation for developing comprehensive cognitive maps of the unknown space.

4.
Currently, interactive data exploration in virtual environments is mainly focused on vision-based and other non-contact sensory channels such as visual/auditory displays. The lack of tactile sensation in virtual environments removes an important source of information for the user. In this paper, we propose touch-enabled haptic modeling of deformable multi-resolution surfaces in real time. The 6-DOF haptic manipulation is based on a dynamic model of Loop surfaces, where the dynamic parameters are computed easily without recursively subdividing the control mesh. A local deforming scheme is developed to approximate the solution of the dynamics equations, greatly reducing the order of the linear system. During each haptic interaction loop, the contact point is traced and reflected in the updated graphic and haptic rendering. The sense of touch against the deforming surface is calculated according to the surface properties and a damping-spring force profile. Our haptic system supports intuitive dynamic modeling of deformable Loop surfaces through touch-enabled interactive manipulation.
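The damping-spring force profile mentioned above can be illustrated with a minimal sketch. The gains, the simple penetration test, and the assumption that positions are NumPy vectors are illustrative choices, not the paper's actual Loop-surface dynamics.

```python
import numpy as np

def contact_force(tool_pos, tool_vel, surface_point, surface_normal,
                  k=600.0, b=2.5):
    """Damping-spring contact force against a deforming surface (sketch).

    tool_pos, tool_vel : 3-vectors, haptic tool position and velocity
    surface_point      : closest point on the (possibly deformed) surface
    surface_normal     : unit outward normal at that point
    k, b               : assumed stiffness (N/m) and damping (N*s/m)
    """
    penetration = np.dot(surface_point - tool_pos, surface_normal)
    if penetration <= 0.0:            # tool is outside the surface: no force
        return np.zeros(3)
    # Spring term pushes the tool out along the normal; damping term resists
    # the normal component of the tool velocity while in contact.
    normal_vel = np.dot(tool_vel, surface_normal)
    return (k * penetration - b * normal_vel) * surface_normal
```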

5.
Haptic Sculpting in Virtual Volume Space
Chen Hui, Sun Hanqiu. Chinese Journal of Computers (计算机学报), 2002, 25(9): 994-1000
At present, most information in virtual environments is acquired through non-contact senses such as vision and hearing; the lack of haptic feedback removes a large share of potential information sources. Being able to touch, feel, and manipulate objects, in addition to seeing and hearing them, greatly enhances the realism of a virtual environment. This paper studies basic models of haptic rendering and proposes using a virtual plane as an intermediary to achieve real-time haptic rendering of volume data. On this basis, local deformation of the volume is investigated and combined with the haptic feedback model, resulting in an interactive virtual sculpting system with haptic feedback. The system can be applied to real-time interactive operations such as melting, burning, stamping, constructing, and painting.
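As a rough illustration of the virtual-plane idea described above, the sketch below fits a local plane from the volume gradient at the tool position and returns a spring force along its normal. The voxel-grid convention, isovalue, and stiffness are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def virtual_plane_force(volume, tool_pos, iso=0.5, k=400.0):
    """Illustrative intermediate-plane haptic force for scalar volume data.

    A local plane is placed at the tool position: its normal is the negated
    gradient of the volume, and the penetration is estimated from how far the
    sampled value exceeds the isovalue.  `volume` is a 3D numpy array on a
    unit grid; `tool_pos` is given in voxel coordinates (assumed convention).
    """
    i, j, k_idx = np.clip(np.round(tool_pos).astype(int), 1,
                          np.array(volume.shape) - 2)
    value = volume[i, j, k_idx]
    if value < iso:                        # outside the sculpted material
        return np.zeros(3)
    # Central-difference gradient gives the local surface normal direction.
    grad = np.array([
        volume[i + 1, j, k_idx] - volume[i - 1, j, k_idx],
        volume[i, j + 1, k_idx] - volume[i, j - 1, k_idx],
        volume[i, j, k_idx + 1] - volume[i, j, k_idx - 1]]) * 0.5
    n = -grad / (np.linalg.norm(grad) + 1e-9)      # outward normal of the plane
    penetration = (value - iso) / (np.linalg.norm(grad) + 1e-9)
    return k * penetration * n             # spring force along the plane normal
```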

6.
The implementation of haptic interfaces in vehicles has important safety and flexibility implications for lessening visual and auditory overload during driving. The present study aims to design and evaluate haptic interfaces in vehicle seats. Three experiments were conducted by testing a haptic seat in a simulator with a total of 20 participants. The first experiment measured reaction time, subjective satisfaction, and subjective workload for the haptic, visual, and auditory displays for the four signals primarily used by vehicle navigation systems. The second experiment measured reaction time, subjective satisfaction, and subjective workload for the haptic, auditory, and multimodal (haptic + auditory) displays for the ringing signal used by in-vehicle Bluetooth hands-free systems. The third experiment measured drivers' subjective awareness, urgency, usefulness, and disturbance levels at various vibration intensities and positions for a haptic warning signal used by a driver drowsiness warning system. The results indicated that haptic seat interfaces performed better than visual and auditory interfaces, although the unfamiliarity of the haptic interface led to lower subjective satisfaction on some criteria. In general, participants showed high subjective satisfaction and low subjective workload toward haptic seat interfaces. This study provides guidance for implementing haptic seat interfaces and identifies the possible benefits of their use. It is expected that haptic seats implemented in vehicles will improve safety and the interaction between driver and vehicle.

7.
This paper describes an approach to designing tactile haptic signals that help humans “visualize” an environment through the use of a vibrotactile haptic wristband with four vibration motors. A human response map to tactile input while sitting was determined experimentally. It shows the zones where humans can classify signals with a high success rate based on the minimum Duration of Stimulus (DOS) (“on” periods) and “off” periods of the haptic signals. It was also shown experimentally that a human's ability to recognize tactile patterns depends on the level of engagement required by the activity. This paper provides an approach to predict a human response map for various activities. The map obtained during sitting is used to design the signals that send information to a human. Two types of signals are developed: sequence stimuli and digital codes. Sequence stimuli create an on/off rhythm for the vibration motors that humans can sense directly without a decoding process. Experiments show that humans can recognize 10 levels of sequence stimuli with a success rate greater than 80%. This class of signals is useful for applications where information must be repeated frequently, e.g., range information sent to a human parking a car. The second class of signals is digital codes, similar to Morse code, where a sequence of long and short motor DOS represents each code. The meaning of the signal is associated with a specific code. For a set of 27 digital codes, experiments showed a recognition success rate of 78.7%. An application of the digital code method is selecting specific menu items, based on the codes, at fast food restaurants.
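A small sketch of how such a Morse-like digital code might be turned into motor on/off timings is shown below; the symbol alphabet and the millisecond durations are hypothetical placeholders, not the values used in the study.

```python
# Illustrative encoding of a "digital code" (Morse-like) vibrotactile signal.
SHORT_MS = 200   # assumed short Duration of Stimulus ("on" period)
LONG_MS = 600    # assumed long Duration of Stimulus
OFF_MS = 300     # assumed "off" period between stimuli

def code_to_schedule(code):
    """Turn a code such as 'SLS' (short-long-short) into a list of
    (motor_on, duration_ms) steps for one vibration motor."""
    schedule = []
    for symbol in code:
        on_time = SHORT_MS if symbol == 'S' else LONG_MS
        schedule.append((True, on_time))
        schedule.append((False, OFF_MS))
    return schedule

print(code_to_schedule("SLS"))
# [(True, 200), (False, 300), (True, 600), (False, 300), (True, 200), (False, 300)]
```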

8.
Virtual learning environments can now be enriched not only with visual and auditory information, but also with tactile and kinesthetic feedback. However, the way to successfully integrate haptic feedback into a multimodal learning environment is still unclear. This study aims to provide guidelines on how visuohaptic simulations can be implemented effectively; the research question is therefore: under what conditions do visual and tactile information support students' development of conceptual learning of force-related concepts? Participants comprised 170 undergraduate students of a Midwestern University enrolled in a physics for elementary education class. Four experiments were conducted using four different configurations of multimodal learning environments: visual feedback only, haptic force feedback only, visual and haptic force feedback simultaneously, and a sequenced modality of haptic feedback first and visual feedback second. Our results suggest that haptic force feedback has the potential to enrich learning when compared with visual-only environments. Also, haptic and visual modalities interact better when sequenced one after another rather than presented simultaneously. Finally, exposure to virtual learning environments enhanced by haptic force feedback was a positive experience, but the ease of use and ease of interpretation were not so evident.

9.
Immersion and interaction are two key features of virtual reality systems, and they are especially important for medical applications. Motivated by the requirements of motor skill training in dental surgery, a haptic rendering method based on triangle mesh models is investigated in this paper. A multi-rate haptic rendering architecture is proposed to resolve the conflict between fidelity and efficiency requirements. A real-time collision detection algorithm based on spatial partitioning and temporal coherence is used to enable fast contact determination. A proxy-based collision response algorithm is proposed to compute the surface contact point. A cutting force model based on a piecewise contact transition model is proposed for simulating dental drilling during tooth preparation. A velocity-driven level-of-detail haptic rendering algorithm is proposed to maintain a high update rate for complex scenes with a large number of triangles. A haptic-visual collocated dental training prototype was built using a half-mirror solution. Typical dental operations have been realized, including dental caries exploration, detection of boundaries within a dental cross-section plane, and dental drilling during tooth preparation. The haptic rendering method is a fundamental technology for improving the immersion and interaction of virtual reality training systems, and is useful not only in dental training but also in other surgical training systems. Supported by the National Natural Science Foundation of China (Grant Nos. 60605027, 50575011) and the National High-Tech Research & Development Program of China (Grant No. 2007AA01Z310).
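The proxy-based response described here follows the general god-object/proxy pattern; the sketch below is a minimal illustration of that pattern under assumed inputs (a contact flag and the closest surface point supplied by the collision module) and is not the paper's exact algorithm.

```python
import numpy as np

def update_proxy(tool_pos, closest_point_on_surface, in_contact, k=500.0):
    """Minimal proxy-based (god-object style) haptic response sketch.

    The proxy follows the haptic tool freely while there is no contact; when
    the tool penetrates the tooth mesh, the proxy is constrained to the mesh
    surface and a spring between proxy and tool gives the rendered force.
    Positions are assumed to be numpy 3-vectors.
    """
    if not in_contact:
        proxy = tool_pos.copy()            # free motion: proxy tracks the tool
    else:
        proxy = closest_point_on_surface   # constrained motion: stay on surface
    force = k * (proxy - tool_pos)         # spring pulls the tool toward the proxy
    return proxy, force
```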

10.
Haptic feedback is an important component of immersive virtual reality (VR) applications and is often suggested to complement visual information through the sense of touch. This paper investigates the use of a haptic vest in navigation tasks. The haptic vest produces repulsive vibrotactile feedback from nearby static virtual obstacles that augments the user's spatial awareness. The tasks require the user to perform complex movements in a cluttered 3D virtual environment, such as avoiding obstacles while walking backwards and pulling a virtual object. The experimental setup consists of a room-scale environment. This is the first study in which a haptic vest is tracked in real time using a motion capture device so that proximity-based haptic feedback can be conveyed according to the actual movement of the user's upper body. User study experiments were conducted with and without haptic feedback in virtual environments involving both normal and limited visibility conditions. A quantitative evaluation was carried out by measuring task completion time and error (collision) rate. Multiple haptic rendering techniques were also tested. Results show that under limited visibility conditions, proximity-based haptic feedback generated by a wearable haptic vest can significantly reduce the number of collisions with obstacles in the virtual environment.
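One simple way to realize the proximity-based feedback described above is to map obstacle distance to motor intensity. The linear falloff and the 1 m cutoff in the sketch below are assumptions for illustration; the paper itself compares several rendering techniques.

```python
def proximity_intensity(distance_m, d_max=1.0, i_min=0.0, i_max=1.0):
    """Illustrative proximity-based haptic rendering: vibration intensity grows
    as a tracked body segment approaches a static virtual obstacle.

    distance_m : distance from the vest actuator's body segment to the obstacle
    d_max      : distance beyond which no feedback is produced (assumed 1 m)
    Returns a normalized motor intensity in [i_min, i_max] with linear falloff.
    """
    if distance_m >= d_max:
        return 0.0
    return i_min + (i_max - i_min) * (1.0 - distance_m / d_max)

# Example: an obstacle 0.25 m behind the user while walking backwards
print(proximity_intensity(0.25))   # -> 0.75
```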

11.
Passivity of systems comprising a continuous-time plant and a discrete-time controller is considered. This topic is motivated by stability considerations arising in the control of robots and force-reflecting human interfaces (“haptic interfaces”). Necessary conditions for passivity are found via a small-gain condition, and sufficient conditions are found via an application of Parseval's theorem and a sequence of frequency-domain manipulations. An example, the implementation of a “virtual wall” via a one-degree-of-freedom haptic interface, is given and discussed in some detail. © 1997 John Wiley & Sons, Inc.
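For the virtual-wall example, a condition of the form b > KT/2 + |B| (physical device damping b, virtual stiffness K, virtual damping B, sampling period T) is the result commonly quoted from this line of work. The small check below assumes that form and uses invented numbers; the paper should be consulted for its exact necessary and sufficient conditions.

```python
def virtual_wall_passive(b_device, K_wall, B_wall, T):
    """Check the commonly cited sufficient condition for passivity of a
    sampled-data virtual wall:  b > K*T/2 + |B|.
    b_device : physical damping of the haptic device (N*s/m)
    K_wall   : virtual wall stiffness (N/m)
    B_wall   : virtual wall damping (N*s/m)
    T        : controller sampling period (s)
    """
    return b_device > K_wall * T / 2.0 + abs(B_wall)

# Example: a 1 kN/m wall rendered at 1 kHz on a device with 2 N*s/m of damping
print(virtual_wall_passive(b_device=2.0, K_wall=1000.0, B_wall=0.5, T=0.001))  # True
```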

12.
Most human-computer interactive systems focus primarily on the graphical rendering of visual information and, to a lesser extent, on the display of auditory information. Haptic interfaces have the potential to increase the quality of human-computer interaction by accommodating the sense of touch. They provide an attractive augmentation to visual display and enhance the level of understanding of complex data sets. A haptic rendering system generates contact or restoring forces to prevent penetration into the virtual objects and create a sense of touch. The system computes contact forces by first detecting whether a collision or penetration has occurred. Then, the system determines the (projected) contact points on the model surface. Finally, it computes restoring forces based on the amount of penetration. Researchers have recently investigated the problem of rendering the contact forces and torques between 3D virtual objects. This problem is known as six-degree-of-freedom (6-DOF) haptic rendering, as the computed output includes both 3-DOF forces and 3-DOF torques. This article presents an overview of our work in this area. We suggest different approximation methods based on the principle of preserving the dominant perceptual factors in haptic exploration.
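The detect → project → restore pipeline and the 6-DOF output described here can be summarized in a short penalty-style sketch. The accumulation of per-contact forces into a net force and torque below is a generic scheme with assumed inputs, not the article's specific approximation methods.

```python
import numpy as np

def net_wrench(contact_points, penetrations, normals, object_center, k=800.0):
    """Illustrative 6-DOF output: accumulate penalty forces at each contact
    point between two virtual objects into a net 3-DOF force and 3-DOF torque.

    contact_points : (N, 3) projected contact points on the model surface
    penetrations   : (N,)   penetration depths found by collision detection
    normals        : (N, 3) unit surface normals at the contact points
    """
    force = np.zeros(3)
    torque = np.zeros(3)
    for p, d, n in zip(contact_points, penetrations, normals):
        f = k * d * n                              # restoring force ~ penetration
        force += f
        torque += np.cross(p - object_center, f)   # moment about the object center
    return force, torque
```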

13.
This paper examines what can be learned about bodies of literature using a concept mapping tool, Leximancer. Statistical content analysis and concept mapping were used to analyse bodies of literature from different domains in three case studies. In the first case study, concept maps were generated and analysed for two closely related document sets—a thesis on language games and the background literature for the thesis. The aim of this case study was to show how concept maps might be used to analyse related document collections for coverage. The two maps overlapped on the concept of “language”; however, there was a stronger focus in the thesis on “simulations” and “agents.” Other concepts were not as strong in the thesis map as expected. The study showed how concept maps can help to establish the coverage of the background literature in a thesis. In the second case study, three sets of documents from the domain of conceptual and spatial navigation were collected, each discussing a separate topic: navigational strategies, the brain's role in navigation, and concept mapping. The aim was to explore emergent patterns in a set of related concept maps that may not be apparent from reading the literature alone. Separate concept maps were generated for each topic and also for the combined set of literature. It was expected that each of the topics would be situated in different parts of the combined map, with the concept of “navigation” central to the map. Instead, the concept of “spatial” was centrally situated, and the areas of the map for the brain and for navigational strategies overlaid the same region. The unexpected structure provided a new perspective on the coverage of the documents. In the third and final case study, a set of documents on sponges—a domain unfamiliar to the reader—was collected from the Internet and then analysed with a concept map. The aim of this case study was to show how a concept map could aid in quickly understanding a new, technically intensive domain. Using the concept map to identify significant concepts and the Internet to look up their definitions, a basic understanding of key terms in the domain was obtained relatively quickly. It was concluded that using concept maps is effective for identifying trends within documents and document collections, for performing differential analysis on documents, and as an aid for rapidly gaining an understanding of a new domain by exploring the local detail within the global scope of the textual corpus.

14.
Preserving older pedestrians' navigation skills in urban environments is a challenge for maintaining their quality of life. However, existing aids take into account neither older people's perceptual and cognitive declines nor their user experience, and they call upon sensory modalities that are already used during walking. The present study aimed to compare different guidance instructions using visual, auditory, and haptic feedback in order to identify the most efficient and best accepted one(s). Sixteen middle-age (non-retired) adults, 21 younger-old (young-retired) adults, and 21 older-old (old-retired) adults performed a navigation task in a virtual environment. The task was performed with visual feedback (directional arrows superimposed on the visual scenes), auditory feedback (sounds in the left/right ear), haptic feedback (vibrotactile information delivered by a wristband), combinations of different types of sensory feedback, or a paper map. The results showed that older people benefited from the sensory guidance instructions compared to the paper map. Visual and auditory feedback were associated with better performance and user experience than haptic feedback or the paper map, and the benefits were greatest among the older-old participants, even though the familiarity of the paper map was appreciated. Several recommendations for designing pedestrian navigation aids are proposed.

15.
With the advent of new haptic feedback devices, researchers are giving serious consideration to the incorporation of haptic communication in collaborative virtual environments. For instance, tools based on haptic interaction can be used for medical and related education, whereby students can train in minimally invasive surgery using virtual reality before approaching human subjects. To design virtual environments that support haptic communication, a deeper understanding of humans' haptic interactions is required. In this paper, human haptic collaboration is investigated. A collaborative virtual environment was designed to support performing a shared manual task. To evaluate this system, 60 medical students participated in an experimental study. Participants were asked to perform a needle insertion task in dyads after a training period. Results show that, compared to conventional training methods, visual-haptic training improves users' collaborative performance. In addition, we found that haptic interaction influences the partners' verbal communication when sharing haptic information. This indicates that haptic communication training changes the nature of the users' mental representations. Finally, we found that haptic interactions increased the sense of copresence in the virtual environment: haptic communication facilitates users' collaboration in a shared manual task within a shared virtual environment. Design implications for including haptic communication in virtual environments are outlined.

16.
Mental mapping of spaces is essential for the development of efficient orientation and mobility skills. Most of the information required for this mental mapping is gathered through the visual channel. People who are blind lack this information, and in consequence they are required to use compensatory sensorial channels and alternative exploration methods. In this study, people who are blind use a virtual environment (VE) that provides haptic and audio feedback to explore an unknown space. The cognitive mapping of the space based on the VE and the subjects' ability to apply this map to accomplish tasks in the real space are examined. Results clearly show that a robust and comprehensive map is constructed, contributing to successful performance in real-space tasks.

17.
Over the last 30 years, the evolution of digital data processing in terms of processing power, storage capacity, and algorithmic efficiency in the simulation of physical phenomena has allowed the emergence of the discipline known as computational fluid dynamics (CFD). More recently, virtual reality (VR) systems have proven to be an interesting alternative to conventional user interfaces, in particular when exploring complex and massive datasets, such as those encountered in scientific visualization applications. Unfortunately, all too often VR technologies have proven unsatisfactory in providing true added value compared to standard interfaces, mostly because insufficient attention was given to the activity and needs of the intended user audience. The present work focuses on the design of a multimodal VR environment dedicated to the analysis of non-stationary flows in CFD. Specifically, we report on the identification of relevant strategies of CFD exploration coupled with adapted VR data representation and interaction techniques. Three contributions are highlighted. First, we show how placing the expert CFD user at the heart of the system is accomplished through a formalized analysis of work activity and through system evaluation. Second, auditory outputs providing analysis of time-varying phenomena in a spatialized virtual environment are introduced and evaluated. Finally, specific haptic feedback is designed and evaluated to enhance classical visual exploration of CFD simulation data.

18.
To address practical problems in traditional computer algorithm teaching, such as highly abstract and logic-heavy theoretical content, a disconnect between lectures and experiments, and a lack of interactivity, this paper uses virtual reality technology and the Unity3D engine to design a graph algorithm virtual simulation system based on a three-dimensional scene of Lushan, a well-known tourist destination in Jiangxi. The system implements the simulated experimental process for five graph algorithms, each offering two running modes, "automatic demonstration" and "user interaction", and also allows the user to enter scenic spots (corresponding to graph nodes) in sub-scenes and freely control the viewpoint to browse the Lushan scenery. The theoretical issues of the virtual simulation system are discussed, and the key techniques and implementation schemes for resolving them are given. Finally, Prim's minimum spanning tree algorithm is used to verify the practicality and flexibility of the system. Compared with traditional algorithm lecturing and personalized problem-driven teaching, the graph algorithm virtual simulation system designed in this paper is engaging, interactive, and immersive; it stimulates students' exploratory and active learning and provides a new teaching and experimental method for algorithms and data structures courses.
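The system itself is built with the Unity3D engine (C#); purely to illustrate the graph logic that the Prim demonstration animates, here is a minimal Prim's minimum-spanning-tree sketch in Python with a toy four-node graph standing in for the scenic spots (the weights and node layout are invented for the example).

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's minimum spanning tree on an undirected weighted graph.
    adj[u] is a list of (weight, v) pairs; returns (edges, total weight)."""
    visited = {start}
    frontier = [(w, start, v) for w, v in adj[start]]   # candidate edges
    heapq.heapify(frontier)
    mst_edges, total = [], 0
    while frontier and len(visited) < len(adj):
        w, u, v = heapq.heappop(frontier)   # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst_edges.append((u, v, w))
        total += w
        for w2, nxt in adj[v]:
            if nxt not in visited:
                heapq.heappush(frontier, (w2, v, nxt))
    return mst_edges, total

# Four scenic spots (graph nodes) with illustrative path weights
adj = {0: [(2, 1), (3, 2)], 1: [(2, 0), (1, 2), (4, 3)],
       2: [(3, 0), (1, 1), (5, 3)], 3: [(4, 1), (5, 2)]}
print(prim_mst(adj))   # ([(0, 1, 2), (1, 2, 1), (1, 3, 4)], 7)
```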

19.
A reverse engineering method based on haptic volume removing
This paper presents a new reverse engineering methodology based on haptic volume removal. When a physical object is to be digitized, it is first buried in a piece of virtual clay that is generated with the help of a fixture. Digitizing the physical object then consists of simply chipping away the virtual clay with a position tracker attached to a PHANToM® haptic device. While chipping away the clay, the user can see on the computer monitor what is emerging and at the same time feel the chipping force from the haptic device. In this way, reverse engineering is seamlessly integrated into haptic volume sculpting, which is now widely used for conceptual design. Furthermore, the proposed method eliminates the need to merge point clouds digitized from different views, as required by current digitizers. The virtual clay volume is represented by a spatial run-length encoding scheme. A prototype system has been developed to demonstrate the feasibility of the proposed method through a case study. The strengths and weaknesses of the presented method are analyzed and its applicability is discussed.
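To illustrate the run-length encoding used for the virtual clay volume, the sketch below compresses a single row of clay voxels into (value, run length) pairs; the one-row, boolean-voxel setup is a simplification for the example, not the paper's full spatial scheme.

```python
def run_length_encode(row):
    """Run-length encode one row of clay voxels (True = clay present).

    Consecutive identical voxels collapse to (value, run_length) pairs,
    which keeps a mostly-solid clay block compact and cheap to update as
    material is chipped away.
    """
    runs = []
    for voxel in row:
        if runs and runs[-1][0] == voxel:
            runs[-1][1] += 1
        else:
            runs.append([voxel, 1])
    return [tuple(r) for r in runs]

row = [True] * 6 + [False] * 3 + [True] * 2    # clay with a carved gap
print(run_length_encode(row))                  # [(True, 6), (False, 3), (True, 2)]
```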

20.
We add a new modality to image-based visualization by converting ordinary photos into tangible images, which can then be haptically rendered. This is performed by interactively sketching haptic models on the photos so that the models match the image parts that will become tangible. In contrast to common geometric modelling, we define the haptic models in a three-dimensional haptic modelling space distorted by the central projection. Analytic FRep functions (variants of implicit functions) are mostly used for defining the haptic models. The tangible images thus created can realistically simulate some actual three-dimensional scenes by implementing the principle “What You See Is What You Touch” while in fact still being 2D images. Copyright © 2015 John Wiley & Sons, Ltd.
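As a rough illustration of rendering force from an analytic FRep (implicit) function, the sketch below uses a simple sphere with the usual positive-inside convention; the stiffness value is assumed, and the projection-distorted modelling space the paper uses is deliberately omitted.

```python
import numpy as np

def frep_sphere(p, center, radius):
    """Analytic FRep function of a sphere: positive inside the object,
    zero on its surface, negative outside (the usual FRep convention)."""
    d = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    return radius**2 - np.dot(d, d)

def frep_haptic_force(p, center, radius, k=300.0):
    """Illustrative haptic force for an FRep-defined sphere: when the haptic
    interaction point is inside the object (f > 0), push it back out along
    the outward surface normal, proportionally to the penetration depth."""
    if frep_sphere(p, center, radius) <= 0.0:
        return np.zeros(3)                     # outside the object: no force
    d = np.asarray(p, dtype=float) - np.asarray(center, dtype=float)
    dist = np.linalg.norm(d)
    outward = d / (dist + 1e-9)                # unit outward normal direction
    penetration = radius - dist                # how deep the point is inside
    return k * penetration * outward
```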
