Similar Articles
 20 similar articles retrieved (search time: 31 ms)
1.
Waller D  Knapp D  Hunt E 《Human factors》2001,43(1):147-158
Twenty-four people learned three versions of a room-sized maze: a wire-frame desktop virtual environment (VE), a normal surface-rendered desktop VE, and a real-world maze. Differences among the mental representations formed from each environment were measured with pointing and distance estimation tasks in a real-world version of each maze. People were more accurate at pointing after having learned the real and wire-frame VE maze than the surface-rendered VE maze; however, this effect was small compared with the effect of individual differences. Differences in gender, spatial ability, and prior computer experience were significantly related to the ability to acquire spatial information from the desktop VE. There was a high correlation between spatial knowledge when it was measured in the VE and spatial knowledge measured in the real world. Actual or potential applications include the design of effective VE training systems.

2.
Jones LA  Sarter NB 《Human factors》2008,50(1):90-111
OBJECTIVE: This article provides an overview of tactile displays. Its goal is to assist human factors practitioners in deciding when and how to employ the sense of touch for the purpose of information representation. The article also identifies important research needs in this area. BACKGROUND: First attempts to utilize the sense of touch as a medium for communication date back to the late 1950s. For the next 35 years progress in this area was relatively slow, but recent years have seen a surge in the interest and development of tactile displays and the integration of tactile signals in multimodal interfaces. A thorough understanding of the properties of this sensory channel and its interaction with other modalities is needed to ensure the effective and robust use of tactile displays. METHODS: First, an overview of vibrotactile perception is provided. Next, the design of tactile displays is discussed with respect to available technologies. The potential benefit of including tactile cues in multimodal interfaces is discussed. Finally, research needs in the area of tactile information presentation are highlighted. RESULTS: This review provides human factors researchers and interface designers with the requisite knowledge for creating effective tactile interfaces. It describes both potential benefits and limitations of this approach to information presentation. CONCLUSION: The sense of touch represents a promising means of supporting communication and coordination in human-human and human-machine systems. APPLICATION: Tactile interfaces can support numerous functions, including spatial orientation and guidance, attention management, and sensory substitution, in a wide range of domains.

3.
OBJECTIVES: This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. BACKGROUND: Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. METHOD: A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. RESULTS: Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. CONCLUSIONS: The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. APPLICATION: The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

4.
Previous experiments on vestibular compensation have shown that multisensory stimulation affects recovery from postural imbalance. Virtual environment (VE) exposure seems very useful in vestibular rehabilitation, since the experience gained during VE exposure is transferable to the real world. The rearrangement of the hierarchy of postural cues was evaluated in 105 patients affected by visual, labyrinthine and somatosensory pathology, in normal conditions and during sensory deprivation. They were divided into five groups according to pathology and compared with 50 normal controls. Our data show that VE exposure is a reliable method to identify the deficient subsystem and the level of substitution. Moreover, Virtual Reality (VR) would accelerate the compensation of an acute loss of labyrinthine function, related to adaptive modifications of the vestibulo-ocular and vestibulo-spinal reflexes, by overstimulating the residual labyrinthine function. The residual labyrinthine function is poor in chronic bilateral vestibular deficit, and VE exposure should provide sensory substitution or sensorimotor reorganisation, thereby modulating the external spatial reference and promoting the reorganisation of the multiple sensory inputs. The prospects for VE exposure seem very promising when dealing with the vestibular system, where there is a continuous rearrangement of different sensory information as a result of environmental and age-related changes.

5.
Identification of metaphors for virtual environment training systems
Stanney KM  Chen JL  Wedell B  Breaux R 《Ergonomics》2003,46(1-3):197-219
The objective of this effort was to develop potential metaphors for assisting wayfinding and navigation in current virtual environment (VE) training systems. Although VE technology promises a number of advantages over traditional, full-scale simulator training devices (deployability, footprint, cost, maintainability, scalability, networking), little design guidance exists beyond individual instantiations with specific platforms. A review of metaphors commonly incorporated into human-computer interactive systems indicated that existing metaphors have largely been used as orientation aids, mainly in the form of guided navigational assistance, with some position guidance. Advanced metaphor design concepts were identified that would not only provide trainees with a useful orienting framework but also enhance visual access and help differentiate an environment. The effectiveness of these concepts in aiding navigation and wayfinding in VEs must be empirically validated.

7.
Tactile and auditory cues have been suggested as methods of interruption management for busy visual environments. The current experiment examined attentional mechanisms by which cues might improve performance. The findings indicate that when interruptive tasks are presented in a spatially diverse task environment, the orienting function of tactile cues is a critical component, which directs attention to the location of the interruption, resulting in superior interruptive task performance. Non-directional tactile cues did not degrade primary task performance, but also did not improve performance on the secondary task. Similar results were found for auditory cues. The results support Posner and Peterson's [1990. The attention system of the human brain. Annual Review of Neuroscience 13, 25–42] theory of independent functional networks of attention, and have practical applications for systems design in work environments that consist of multiple, visual tasks and time-sensitive information.

8.
This paper considers tactile augmentation, the addition of a physical object within a virtual environment (VE) to provide haptic feedback. The resulting mixed reality environment is limited in terms of the ease with which changes can be made to the haptic properties of objects within it. Therefore, sensory enhancements or illusions that make use of visual cues to alter the perceived hardness of a physical object, allowing variation in haptic properties, are considered. Experimental work demonstrates that a single physical surface can be made to ‘feel’ both softer and harder than it is in reality by the accompanying visual information presented. The strong impact visual cues have on the overall perception of object hardness indicates haptic accuracy may not be essential for a realistic virtual experience. The experimental results are related specifically to the development of a VE for surgical training; however, the conclusions drawn are broadly applicable to the simulation of touch and the understanding of haptic perception within VEs.

9.
To examine the effect of stereo vision on performance, presence and oculomotor disturbances within a virtual environment (VE), two groups of 23 participants (good stereo acuity/low stereo acuity) were evaluated. Groups were matched in terms of gender, age and VE design factors (the latter were accounted for to ensure a similar VE experience between groups). Participants were immersed in a VE maze for up to 1 h, during which time they interacted with the environment while performing a number of stationary and movement-based tasks. Individuals with low stereo acuity traveled further to complete two tasks in the VE, yet performance time on these tasks was comparable to participants with good stereo acuity. Although participants with impaired stereo vision likely did not fully benefit from a stereoscopic view of the scene, they may have received sufficient depth information from movement-based cues to efficiently accomplish these tasks in a comparable amount of time. Overall performance, based on both the number of tasks completed and the total translational distance moved (based on input device movement) within the VE, was not hindered for those with low stereo acuity. In addition, the expected increase in oculomotor disturbances for this group was not evident in this study, and both groups reported comparable amounts of presence from VE exposure. These results suggest that when head tracking is included as part of the VE experience (i.e., motion parallax cues exist), participants with low stereo acuity can be expected to perform comparably to normally sighted individuals, experience a comparable sense of presence, and report no increase in adverse effects when viewing scenes via stereoscopic displays. Thus, motion parallax cues may adequately provide a sense of depth within a VE, and alleviate theorized performance decrements for individuals with low stereo acuity.
The results of this study have implications for those designing entertainment simulations or other such applications open to the general public, where people with low stereo acuity may routinely participate.

10.
The need to train people for increasingly complex tasks given diminishing budgets makes simulation-based training attractive in both the commercial and military sectors. Training systems that use virtual environment (VE) technology are potentially both reconfigurable and portable. The same equipment could provide a wide range of simulations at or near the trainees' worksite, letting them train close to home. To explore the potential of VE technology for naval training, the US Navy is sponsoring the Virtual Environment Technology for Training (VETT) program at three Boston sites. The VETT program experiments with VE systems employing visual and audio feedback and haptic interaction (i.e. tactile and kinesthetic/force interactions). The experiments extend from basic research into human psychophysics to the development of human/machine interfaces and computational systems, and applied training. The goal of integrating basic and applied research in this way is to determine the advantages and limitations of VE technology for training. The initial project focuses on naval officers, specifically the officer of the deck on a submarine.

11.
Multimodal deep learning systems that employ multiple modalities such as text, image, audio, and video are showing better performance than individual-modality (i.e., unimodal) systems. Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning. The current state of multimodal machine learning assumes that all modalities are present, aligned, and noiseless at training and testing time. However, in real-world tasks it is typically observed that one or more modalities are missing, noisy, lacking annotated data, or have unreliable labels, during training, testing, or both. This challenge is addressed by a learning paradigm called multimodal co-learning, in which the modeling of a (resource-poor) modality is aided by exploiting knowledge from another (resource-rich) modality via the transfer of knowledge between modalities, including their representations and predictive models. Co-learning being an emerging area, there are no dedicated reviews explicitly focusing on all the challenges it addresses. To that end, in this work we provide a comprehensive survey of the emerging area of multimodal co-learning, which has not yet been explored in its entirety. We review implementations that overcome one or more co-learning challenges without explicitly considering them as co-learning challenges. We present a comprehensive taxonomy of multimodal co-learning based on the challenges addressed by co-learning and the associated implementations. The various techniques, including the latest ones, are reviewed along with some applications and datasets. Additionally, we review techniques that appear similar to multimodal co-learning but are used primarily in unimodal or multi-view learning, and document the distinction between them.
Our final goal is to discuss challenges and perspectives, along with the important ideas and directions for future work, which we hope will benefit the entire research community focusing on this exciting domain.
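As a minimal sketch of the resource-rich to resource-poor transfer that co-learning describes, a model trained on a richer modality can supply soft targets for a model of a label-scarce modality. The toy data, the logistic-regression models, and all names below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: a "rich" modality (e.g. text features) and a noisy,
# partial "poor" modality (e.g. sparse audio features) of the same samples.
n, d_rich, d_poor = 200, 8, 4
X_rich = rng.normal(size=(n, d_rich))
w_true = rng.normal(size=d_rich)
y = (X_rich @ w_true > 0).astype(float)                 # labels derivable from the rich view
X_poor = X_rich[:, :d_poor] + 0.5 * rng.normal(size=(n, d_poor))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, targets, lr=0.5, steps=500):
    """Plain gradient-descent logistic regression on (possibly soft) targets."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - targets) / len(targets)
    return w

# Teacher on the resource-rich modality, trained on the hard labels.
w_teacher = fit_logreg(X_rich, y)
soft = sigmoid(X_rich @ w_teacher)                      # soft targets carry the transferred knowledge

# Student on the resource-poor modality: only 20 hard labels are assumed
# available, but teacher soft targets exist for every sample.
few = 20
student_targets = np.concatenate([y[:few], soft[few:]])
w_student = fit_logreg(X_poor, student_targets)

acc = ((sigmoid(X_poor @ w_student) > 0.5) == y).mean()
print(f"student accuracy with teacher soft targets: {acc:.2f}")
```

The student never sees most hard labels; it learns them indirectly through the teacher's predictions, which is the essence of cross-modal knowledge transfer in this toy setting.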

12.
Lombardi  P. Zavidovique  B. Talbert  M. 《Computer》2006,39(12):57-61
Unmanned vehicles are an important evolutionary step for increased safety in a range of missions from passive observation to active exploration to aggressive proaction. To achieve this goal, these vehicles must be reliably autonomous, with effective context interpretation (the continuous understanding and monitoring of the external environment) being an essential step on this path. In this approach, appropriate accounting for context should increase the system's perception performance by exploiting opportunistic cues in the visual context and leveraging domain-based anticipated cues from the exosystem. Espousing an emerging probabilistic approach to context monitoring in sensory-based vehicle control systems, we account for partial certainties in the operating environment external to the vision system's field of regard. Our results point to the promise of multimodal awareness as the key to improved vehicle control performance across a broad spectrum of future mission spaces.

13.
The type of navigation interface in a virtual environment (VE), head slaved or indirect, determines whether or not proprioceptive feedback stimuli are present during movement. In addition, teleports can be used, which do not provide continuous movement but, rather, discontinuously displace the viewpoint over large distances. A two-part experiment was performed. The first part investigated whether head-slaved navigation provides an advantage for spatial learning in a VE. The second part investigated the role of anticipation when using teleports. The results showed that head-slaved navigation has an advantage over indirect navigation for the acquisition of spatial knowledge in a VE. Anticipating the destination of the teleport prevented disorientation after the displacement to a great extent, but not completely. The time that was needed for anticipation increased if the teleport involved a rotation of the viewing direction. This research shows the potential added value of using a head-slaved navigation interface (for example, when using VE for training purposes) and provides practical guidelines for the use of teleports in VE applications.

14.
The purpose of this study was to evaluate the impact of multimodal feedback on ergonomic measurements in a virtual environment (VE) for a typical simulated drilling task. In total, sixty male manufacturing industry workers were divided into five groups. One group performed the working task in a real environment (RE), and ergonomic measurements for this group were used as the baseline for evaluation. The other four groups performed the same task in a virtual environment with different feedback treatments (visual with or without auditory and/or tactile feedback). Five indices – task completion time, maximum force capacity reduction, body part discomfort, rated perceived exertion, and rated task difficulty – were used to evaluate the measurements of each of the four treatments in VE in comparison to the baseline group in RE. The results indicate that the five indices for each of the four treatment groups were significantly higher than those of the RE group. Moreover, the indices of the visual‐only group were significantly higher than those of the other three groups with auditory and/or tactile feedback treatments. The findings of this study can provide a guideline for ergonomic evaluations of work designs in VE and for establishing a virtual reality simulation system. © 2011 Wiley Periodicals, Inc.

15.
《Ergonomics》2012,55(5):692-700
In this study, we examined how spatially informative auditory and tactile cues affected participants’ performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual–auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality.

Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

16.
Yang Jie  WAIBEL Alex 《计算机学报》(Chinese Journal of Computers), 2000, 23(12):1245-1252
1 Introduction  We work, live, and play in the so-called information society, where we communicate with people and information systems through diverse media in increasingly varied environments. Human-human communication takes advantage of many communication channels. People use verbal and non-verbal means, such as speaking, pointing, gesturing, writing, fixating, and using facial expressions and eye contact, to express ideas, intentions and feelings. To build computer systems that operate more flexibly and…

17.
This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering.
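The wand-to-cursor correspondence described in this abstract amounts to mapping tracker-frame coordinates into the rendered scene's frame and scoring localization by distance to the true landmark. The sketch below illustrates this idea; the function names, the linear mapping, and the scale/offset parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def wand_to_cursor(p_tracker, scene_offset, scale=1.0):
    """Map a magnetically tracked wand position (tracker frame) to the
    spherical cursor's position in the rendered scene (scene frame)."""
    return np.asarray(scene_offset, dtype=float) + scale * np.asarray(p_tracker, dtype=float)

def localization_error(p_cursor, p_target):
    """Score a localization task: Euclidean distance from the cursor to
    the true anatomical landmark, in scene units."""
    return float(np.linalg.norm(np.asarray(p_cursor) - np.asarray(p_target)))

# A wand displacement of (0.10, 0.05, 0.00) with a 2x scene scale puts
# the cursor exactly on a hypothetical target at (0.20, 0.10, 0.00):
cursor = wand_to_cursor([0.10, 0.05, 0.00], scene_offset=np.zeros(3), scale=2.0)
print(localization_error(cursor, [0.20, 0.10, 0.00]))  # -> 0.0
```

A real system would use the tracker's full pose (rotation as well as translation), but the same frame-transform idea applies.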

18.
《Ergonomics》2012,55(4):494-511
Virtual environments (VEs) are extensively used in training but there have been few rigorous scientific investigations of whether and how skills learned in a VE are transferred to the real world. This research aimed to measure and evaluate what is transferring from training a simple sensorimotor task in a VE to real world performance. In experiment 1, real world performances after virtual training, real training and no training were compared. Virtual and real training resulted in equivalent levels of post-training performance, both of which significantly exceeded task performance without training. Experiments 2 and 3 investigated whether virtual and real trained real world performances differed in their susceptibility to cognitive and motor interfering tasks (experiment 2) and in terms of spare attentional capacity to respond to stimuli and instructions which were not directly related to the task (experiment 3). The only significant difference found was that real task performance after training in a VE was less affected by concurrently performed interference tasks than was real task performance after training on the real task. This finding is discussed in terms of the cognitive load characteristics of virtual training. Virtual training therefore resulted in equivalent or even better real world performance than real training in this simple sensorimotor task, but this finding may not apply to other training tasks. Future research should be directed towards establishing a comprehensive knowledge of what is being transferred to real world performance in other tasks currently being trained in VEs and investigating the equivalence of virtual and real trained performances in these situations.

20.
Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
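The logarithmic miniature mapping and power-law travel scaling that this abstract describes can be sketched as follows. The function names, the modeled spatial range, and the scaling exponent are illustrative assumptions, not the authors' implementation.

```python
import math

def wim_map(r, r_min=1.0, r_max=1e9, miniature_radius=1.0):
    """Logarithmically compress a world-space distance r (arbitrary
    astronomical units) into a miniature-space radius, so that widely
    separated cosmological scales all fit inside a hand-sized WIM."""
    r = min(max(r, r_min), r_max)          # clamp to the modeled spatial range
    return miniature_radius * math.log(r / r_min) / math.log(r_max / r_min)

def power_law_step(r, exponent=0.5):
    """Scale a travel step by a power of the current distance to the
    region of interest: large steps far away, fine steps close in."""
    return r ** exponent

# A landmark 1e6 units out lands two-thirds of the way into the miniature,
# since log(1e6)/log(1e9) = 6/9:
print(round(wim_map(1e6), 3))  # -> 0.667
```

The key property is that equal ratios of world-space distance map to equal increments of miniature-space radius, which is what lets both planetary and galactic scales stay visible and reachable in one overview.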


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号