Similar Documents

20 similar documents found
1.
We investigate the effectiveness of sonification (continuous auditory display) for supporting patient monitoring while reducing visual attentional workload in the operating theatre. Non-anaesthetist participants performed a simple continuous arithmetic task while monitoring the status of a simulated anaesthetised patient, reporting the status of vital signs when asked. Patient data were available either on a monitoring screen behind the participant, or were partially or completely sonified. Video captured when, how often and for how long participants turned to look at the screen. Participants gave the most accurate responses with visual displays, the fastest responses with sonification and the slowest responses when sonification was added to visual displays. A formative analysis identifying the constraints under which participants timeshare the arithmetic and monitoring tasks provided a context for interpreting the video data. It is evident from the pattern of their visual attention that participants are sensitive to events with different but overlapping temporal rhythms.
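The kind of continuous display described here can be made concrete. The fragment below is a minimal sketch, not the study's implementation: it follows the familiar pulse-oximetry convention of one tone per heartbeat, with pitch falling as oxygen saturation drops; the function names and scaling constants are assumptions.

```python
# Minimal sketch (not the study's system): heart rate sets the tone
# repetition rate, and SpO2 lowers the tone's pitch as saturation
# falls, in the style of pulse-oximetry sonification.

def spo2_to_pitch_hz(spo2_percent, base_hz=880.0, semitones_per_point=0.5):
    """Drop the tone by a fixed number of semitones per point of
    desaturation below 100% SpO2 (illustrative scaling)."""
    drop = (100.0 - spo2_percent) * semitones_per_point
    return base_hz * 2.0 ** (-drop / 12.0)

def tone_schedule(heart_rate_bpm, spo2_percent, duration_s=10.0):
    """Return (onset_time_s, frequency_hz) pairs, one tone per beat."""
    interval = 60.0 / heart_rate_bpm
    pitch = spo2_to_pitch_hz(spo2_percent)
    events, t = [], 0.0
    while t < duration_s:
        events.append((round(t, 3), round(pitch, 1)))
        t += interval
    return events

# A desaturating patient sounds slower *and* lower:
print(tone_schedule(72, 99)[:3])  # ~855 Hz tones, 0.83 s apart
print(tone_schedule(55, 90)[:3])  # ~659 Hz tones, 1.09 s apart
```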

2.
Three experiments explored the effectiveness of continuous auditory displays, or sonifications, for conveying information about a simulated anesthetized patient's respiration. Experiment 1 established an effective respiratory sonification. Experiment 2 showed an effect of expertise in the use of respiratory sonification and revealed that some apparent differences in sonification effectiveness could be accounted for by response bias. Experiment 3 showed that sonification helps anesthesiologists to maintain high levels of awareness of the simulated patient's state while performing other tasks more effectively than when relying upon visual monitoring of the simulated patient state. Overall, sonification of patient physiology beyond traditional pulse oximetry appears to be a viable and useful adjunct to visual monitors. Actual and potential applications of this research include monitoring in a wide variety of busy critical care contexts.  相似文献   
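The response-bias finding is worth unpacking. In signal detection terms, two displays can yield the same sensitivity (d') while observers adopt different report criteria (c). The sketch below shows the standard computation, using only the Python standard library; the paper's exact analysis may differ.

```python
# Standard signal-detection computation: d' measures sensitivity,
# c measures response bias, so two displays with equal d' but
# different c differ only in observers' willingness to report events.
from statistics import NormalDist

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    # Loglinear correction avoids infinite z-scores at rates of 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Similar sensitivity, different bias: the second observer says "yes" more.
print(dprime_and_criterion(40, 10, 10, 40))  # d' ~ 1.6, c ~ 0 (neutral)
print(dprime_and_criterion(47, 3, 24, 26))   # higher hit *and* false-alarm rate
```

On this view, a display difference that only shifts c has changed the observers' reporting strategy, not their ability to perceive the patient's state.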

3.
Eye tracking probes users' real-time reactions to products, whereas conventional methods (e.g., interviews, focus groups, and questionnaires) have generally fallen short because they depend on users' willingness and ability to describe how they feel when exposed to a product. Two tasks were designed to identify eye-movement indexes that reflect the user experience of a product and to analyse how attention is captured by product attributes versus task goals. In task one, participants simply browsed two smartphone pictures and evaluated the overall user experience. Task two used binary choice: participants selected the smartphone picture with the higher user experience and then clicked the mouse. The results showed that in the browsing task, participants had a shorter time to first fixation for the smartphone picture with the higher level of user experience, and their pupils dilated significantly when browsing the picture with the lower level of user experience. In the goal-oriented task, participants' attention was dominated by task-driven visual perception, reflected mainly in longer fixation times and larger pupil diameters when looking at the smartphone with the higher level of user experience. These results support the notion that product design cannot be assessed from a few eye-movement indexes alone without considering the mechanisms of visual attention.

Relevance to industry: The appearance of a product plays an important role in attracting users' attention and stimulating their intention to experience it, and vision is the main channel through which users obtain product information. A thorough understanding of the underlying mechanisms of visual perception can therefore provide technical support for product designers, which in turn can attract more consumers to experience, and even buy, the product. Moreover, sellers can identify genuine buyers and predict their desired products by tracking users' eyes.
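One of the indexes used here, time to first fixation, is easy to make concrete. The sketch below computes it from a stream of gaze samples against an area of interest (AOI); the sample format and AOI coordinates are hypothetical, not the authors' pipeline.

```python
# Sketch of one eye-movement index: time to first fixation (TTFF) is
# the latency from stimulus onset until gaze first lands inside the
# target's area of interest (AOI).

def time_to_first_fixation(samples, aoi):
    """samples: iterable of (t_ms, x, y) gaze points after stimulus onset.
    aoi: (x_min, y_min, x_max, y_max) bounding box of the product image.
    Returns the latency in ms of the first in-AOI sample, or None."""
    x0, y0, x1, y1 = aoi
    for t_ms, x, y in samples:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return t_ms
    return None

gaze = [(0, 50, 50), (120, 210, 300), (240, 420, 310), (360, 430, 330)]
phone_aoi = (400, 280, 700, 600)  # on-screen region of one phone picture
print(time_to_first_fixation(gaze, phone_aoi))  # -> 240
```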

4.

Visual representations of data pose several challenges for the human visual system in perceiving brightness levels. These challenges might be eased by adding sound to the representation, a technique called sonification. Because sonification supplies information in addition to the visual channel, it could usefully support visual perception. In the present study, the usefulness of sonification (in terms of accuracy and response time) was investigated with an interactive sonification test in which participants were asked to identify the highest brightness level in a monochrome visual representation. The task was performed in four conditions: one with no sonification and three with different sonification settings. The results show that sonification is useful, as measured by higher task accuracy, and that a participant's musicality facilitates its use, with better performance when sonification was available. These findings were supported by subjective measurements, in which participants reported experiencing a benefit from sonification.

5.
Ergonomics, 2012, 55(5): 692-700

In this study, we examined how spatially informative auditory and tactile cues affected participants’ performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual–auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality.

Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

6.
程时伟, 沈哓权, 孙凌云, 胡屹凛. Journal of Software (软件学报), 2019, 30(10): 3037-3053
With advances in digital image processing and deepening research into computer-supported cooperative work, eye tracking has begun to be applied to multi-user collaborative interaction. However, existing eye-tracking techniques mainly target a single user: computing architectures for multi-user eye tracking are immature, calibration is complex, and mechanisms for recording, transmitting, and visually sharing eye-tracking data all require further study. To address this, a collaborative calibration model based on gradient optimization is established to simplify multi-user eye-tracking calibration, and a multi-user eye-tracking computing architecture is proposed to optimize the transmission and management of eye-tracking data. Further, the influence of different visualizations of eye-tracking data on users' visual attention in collaborative settings is explored: three visualization forms (dot, scatter, and trajectory) are designed, and the dot form is shown to effectively improve the completion efficiency of multi-user collaborative search tasks. On this basis, an eye-tracking-based collaborative code review system is designed and implemented, supporting synchronized recording and distribution of multi-user eye-tracking data during code review, as well as visual sharing based on real-time gaze points, code-line borders and background shading, and links drawn between code lines. User experiments show that the average time to find a code defect is reduced by 20.1% compared with no visual sharing of eye-tracking data, significantly improving collaborative efficiency and validating the effectiveness of the approach.

7.
A Survey of Sonification Research
This paper reviews the history and current state of sonification research, surveying its content and development trends from three perspectives: auditory perception for sonification, sonification tools, and sonification applications. It focuses on the advantages of auditory over visual perception in human-computer interaction, the influence of those advantages on the choice of data-to-sound mapping algorithms, and the techniques involved in developing and applying sonification tools.

8.
A vision of the future of intraoperative monitoring for anesthesia is presented: a multimodal world based on advanced sensing capabilities. I explore progress towards this vision, outlining the general nature of the anesthetist's monitoring task and the dangers of attentional capture. Research on attention indicates different kinds of attentional control, such as endogenous and exogenous orienting, which are critical to how awareness of patient state is maintained, but which may work differently across modalities. Four kinds of medical monitoring displays are surveyed: (1) integrated visual displays, (2) head-mounted displays, (3) advanced auditory displays and (4) auditory alarms. Achievements and challenges in each area are outlined. In future research, we should focus more clearly on identifying anesthetists' information needs, and we should develop models of attention within and across modalities that are more capable of guiding design.

9.
Andrea Polli. AI & Society, 2012, 27(2): 257-268
In this article, the author will argue that the act of listening through public soundwalks and other formal and informal exercises builds environmental and social awareness and promotes changes in social and cultural practices. By examining the act of listening as an alternative pathway and comparing the research, writings, and creative work of leaders of the acoustic ecology movement (i.e., R. Murray Schafer, Hildegard Westerkamp, and Bernie Krause), the author hopes to shed light on these potentials. For purposes of comparison, projects that explore the sonification and audification of inaudible signals will be examined, including the work of Christina Kubisch. The process of audification and sonification of these signals will be examined in comparison to soundscape experiences in order to develop a theory of data sonification based on the soundscape. In order to build a community around the urban soundscape, in 2003, the author co-founded the New York Society of Acoustic Ecology. Through this endeavor, she co-created the ongoing NYSoundmap and Sound Seeker projects, which provide some practical research for this article. Thus, by comparing and contrasting theoretical writings with leading listening exercises, public soundwalks, soundscape-related brainstorming sessions, and presenting field recordings in various settings, new methodologies will be documented.

10.
In this paper, a real-time interactive system for smile detection and sonification using surface electromyography (sEMG) signals is proposed. When a user smiles, a sound is played: the sEMG signal is mapped to pitch on a conventional musical scale, and the timbre is a synthetic sound that mimics bubbles.

In a user test of smiling tasks, 14 participants used the system and were asked to produce smiles under three conditions: auditory feedback with sonification, visual feedback with a mirror, and no feedback. Impressions of the system were evaluated through questionnaires and interviews, and we analysed the total amount of muscular activity and the temporal envelope patterns of the sEMG during smiling.

The questionnaires and interviews showed that users felt that (1) the sonification system reflected their facial expressions well, and (2) the system was enjoyable. Users also reported that it was easier to smile in the auditory feedback condition than in the visual feedback or no-feedback conditions. However, the sEMG analysis did not show a quantitative difference among the three conditions, most likely because the experimental design lacked socially engaging settings.
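The mapping described, sEMG amplitude to pitch on a conventional scale, can be sketched as envelope extraction followed by scale quantization. The window size, pitch range, and choice of C major below are illustrative assumptions, not the authors' parameters.

```python
# Sketch of the mapping idea: rectify and smooth raw sEMG into an
# amplitude envelope, then snap the envelope level to notes of a
# conventional scale (here C major, C4..C5 as MIDI numbers).

C_MAJOR_MIDI = [60, 62, 64, 65, 67, 69, 71, 72]

def envelope(semg, window=5):
    """Rectified moving-average envelope of raw sEMG samples."""
    rect = [abs(v) for v in semg]
    return [sum(rect[max(0, i - window + 1): i + 1]) / min(window, i + 1)
            for i in range(len(rect))]

def to_scale_note(level, max_level):
    """Map a normalized envelope level onto a note of the scale."""
    idx = min(int(level / max_level * len(C_MAJOR_MIDI)),
              len(C_MAJOR_MIDI) - 1)
    return C_MAJOR_MIDI[idx]

raw = [0.0, 0.1, -0.3, 0.6, -0.8, 0.9, -0.7, 0.4, -0.2, 0.1]  # fake burst
env = envelope(raw)
print([to_scale_note(e, max(env)) for e in env])  # pitch rises with the smile
```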

11.
Auditory interfaces can outperform visual interfaces when a primary task, such as driving, competes for the attention of a user controlling a device, such as a radio. In emerging interfaces enabled by camera tracking, auditory displays may also provide viable alternatives to visual displays. This paper presents a user study of interoperable auditory and visual menus, in which the control gestures remain the same in the visual and auditory domains. The tested control methods were a novel free-hand gesture interaction with camera-based tracking, and touch-screen interaction with a tablet. Participants' task was to select numbers from a visual or an auditory menu in either a circular layout or a numeric keypad layout. The results show that, even with participants' full attention on the task, the performance and accuracy of the auditory interface were the same as, or even slightly better than, the visual interface when controlled with free-hand gestures. The auditory menu was measured to be slower in touch-screen interaction, yet the questionnaire revealed that over half of the participants felt the circular auditory menu was faster than the visual menu. Furthermore, touch-screen interaction with visual and auditory feedback in the numeric layout was measured to be fastest, the touch screen with the circular menu second fastest, and the free-hand gesture interface slowest. The results suggest that auditory menus can potentially provide a fast and desirable interface for controlling devices with free-hand gestures.

12.
The combined effect of haptic and auditory feedback in shared interfaces on cooperation between visually impaired and sighted persons is under-investigated. A central challenge for cooperating group members lies in obtaining a common understanding of the elements of the workspace and maintaining awareness of the other members' actions, as well as one's own, during the group work process. The aim of the experimental study presented here was to investigate whether adding audio cues to a haptic and visual interface makes collaboration between a sighted and a blindfolded person more efficient. Results showed that task performance was significantly faster in the audio, haptic and visual feedback condition than in the haptic and visual feedback condition. A special focus was to study how participants utilized the auditory and haptic force feedback to obtain a common understanding of the workspace and to maintain awareness of the group members' actions. A qualitative analysis showed that the auditory and haptic feedback was used in a number of important ways to support the group members' action awareness and the participants' grounding process.

13.
Drawing the user's gaze to an important item in an image or a graphical user interface is a common challenge. Usually, some form of highlighting is used, such as a clearly distinct color or a border around the item. Flicker can also be very salient, but is often perceived as annoying. In this paper, we explore high-frequency flicker (60 to 72 Hz) to guide the user's attention in an image. At such high frequencies, the critical flicker frequency (CFF) threshold is reached, which makes the flicker appear to fuse into a stable signal. However, the CFF is not uniform across the visual field: under normal lighting conditions it is higher in the peripheral vision. Through experiments, we show that high-frequency flicker can be easily detected by observers in the peripheral vision, yet the signal is hardly visible in the foveal vision when users look directly at the flickering patch. We demonstrate that this property can be used to draw the user's attention to important image regions using a standard high refresh-rate computer monitor, with minimal visible modification to the image. In an uncalibrated visual search task, users could easily spot the specified search targets flickering at very high frequency in a crowded image. They also reported that the high-frequency flicker was distracting when they had to attend to another region, while it was hardly noticeable when looking at the flickering region itself.
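On a high refresh-rate monitor, such flicker is typically produced by alternating the patch's luminance every frame, so the modulation frequency is half the refresh rate. The sketch below illustrates that idea; the amplitude and base luminance values are assumptions, not the paper's settings.

```python
# Illustrative sketch: brighten and darken the patch on alternating
# frames, so a 144 Hz monitor yields 72 Hz flicker -- at or above the
# foveal CFF but still detectable in the periphery.

REFRESH_HZ = 144   # assumed monitor refresh rate
AMPLITUDE = 0.08   # small excursion keeps the mean image intact

def patch_luminance(base, frame_index, amplitude=AMPLITUDE):
    """Luminance (0..1) of the flickering patch on a given frame."""
    sign = 1.0 if frame_index % 2 == 0 else -1.0
    return min(1.0, max(0.0, base + sign * amplitude))

print(f"{REFRESH_HZ / 2:.0f} Hz flicker")  # 72 Hz
print([round(patch_luminance(0.5, i), 2) for i in range(6)])
# [0.58, 0.42, 0.58, 0.42, 0.58, 0.42] -- time-average stays at 0.5
```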

14.
The workload and stress associated with a 40-min vigilance task were examined under conditions wherein observers monitored an auditory or a visual display for changes in signal duration. Global workload scores fell in the midrange of the NASA Task Load Index, with scores on the Frustration subscale increasing linearly over time. These effects were unrelated to the sensory modality of signals. However, sensory modality was a significant moderator variable for stress. Observers became more stressed over time, as indexed by responses to the Dundee Stress State Questionnaire, with evidence of recovery toward the end of the watch in the auditory but not the visual condition. This result, together with the finding that signal detection accuracy favored the auditory mode (although the two modalities were equated for difficulty under alerted conditions), indicates that display modality and time on task should be considered carefully in the design of operations requiring sustained attention, in order to enhance performance and reduce stress. Actual or potential applications of this research include domains in which monitoring is a crucial part, such as baggage screening, security operations, medical monitoring, and power plant operations.
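The global workload score mentioned here is, in the standard NASA-TLX procedure, a combination of six subscale ratings, classically weighted by 15 pairwise comparisons (the unweighted mean is the "raw TLX"). The sketch below shows that standard computation with made-up ratings; the study's own scoring details are not given in the abstract.

```python
# Standard NASA-TLX scoring: six 0-100 subscale ratings combined,
# optionally weighted by how often each dimension was chosen in the
# 15 pairwise comparisons. All numbers below are invented examples.

SUBSCALES = ["Mental", "Physical", "Temporal",
             "Performance", "Effort", "Frustration"]

def tlx_global(ratings, weights=None):
    """ratings: dict subscale -> 0..100. weights: dict subscale ->
    tally out of 15 pairwise choices (omit for the raw TLX mean)."""
    if weights is None:
        return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)
    total = sum(weights[s] for s in SUBSCALES)  # should equal 15
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / total

ratings = {"Mental": 60, "Physical": 15, "Temporal": 55,
           "Performance": 40, "Effort": 58, "Frustration": 70}
weights = {"Mental": 4, "Physical": 0, "Temporal": 3,
           "Performance": 2, "Effort": 3, "Frustration": 3}
print(round(tlx_global(ratings), 1))           # raw mean: ~49.7
print(round(tlx_global(ratings, weights), 1))  # weighted score: ~57.9
```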

15.
Tactile and auditory cues have been suggested as methods of interruption management for busy visual environments. The current experiment examined attentional mechanisms by which cues might improve performance. The findings indicate that when interruptive tasks are presented in a spatially diverse task environment, the orienting function of tactile cues is a critical component, which directs attention to the location of the interruption, resulting in superior interruptive task performance. Non-directional tactile cues did not degrade primary task performance, but also did not improve performance on the secondary task. Similar results were found for auditory cues. The results support Posner and Petersen's [1990. The attention system of the human brain. Annual Review of Neuroscience 13, 25–42] theory of independent functional networks of attention, and have practical applications for systems design in work environments that consist of multiple, visual tasks and time-sensitive information.

16.
The aim of the present study is to investigate interactions between vision and audition during a target acquisition task performed in a virtual environment. We measured the time taken to locate a visual target (acquisition time) signalled by auditory and/or visual cues under conditions of variable visual load. Visual load was increased by introducing a secondary visual task. The auditory cue was constructed using virtual three-dimensional (3D) sound techniques; the visual cue took the form of a 3D updating arrow. The results suggested that both auditory and visual cues reduced acquisition time compared to an uncued condition. Whereas the visual cue elicited faster acquisition times than the auditory cue, the combination of the two cues produced the fastest acquisition times. The introduction of the secondary visual task affected acquisition time differently depending on cue modality: under high visual load, acquiring a target signalled by the auditory cue alone led to slower and more error-prone performance than acquiring a target signalled by either the visual cue alone or by both cues together.
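The virtual 3D sound techniques mentioned here render, among other things, the interaural time difference (ITD) that listeners use to judge a source's azimuth. The sketch below uses Woodworth's spherical-head approximation as an illustration; real spatializers rely on measured HRTFs, and the head radius is an assumed average.

```python
# Woodworth's spherical-head approximation of the interaural time
# difference (ITD) for a distant source. Illustrative only: 3D audio
# engines use full HRTFs, not this formula alone.
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s at room temperature

def itd_seconds(azimuth_deg):
    """ITD for a source at the given azimuth (0 = straight ahead,
    90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:5.0f} microseconds")
# 0 deg -> 0 us ... 90 deg -> ~656 us, the maximum lateralization cue
```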

17.
When mobile devices are used on the move, a user's limited visual resources are split between interacting with the mobile devices and maintaining awareness of the surrounding environment. In this study, we examined stylus-based tapping operations on a PDA under three mobility situations: seated, walking on a treadmill, and walking through an obstacle course. The results revealed that Fitts’ Law continues to be effective even under the most challenging obstacle course condition. While target selection times did not differ between the various mobility conditions, overall task completion times, error rates, and several measures of workload differed significantly. Diminished performance under the obstacle course condition was attributed to increased demands on attention associated with navigating through the obstacle course. Results showed that the participants in the obstacle course condition were able to tap on a 6.4 mm-diameter target with 90% accuracy, but they reduced their walking speed by 36% and perceived an increased workload. Extending earlier research, we found that treadmill-based conditions were able to generate representative data for task selection times, but accuracy differed significantly from the more realistic obstacle course condition.
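For reference, the law being tested is MT = a + b·log2(D/W + 1), the Shannon formulation of Fitts' Law. The sketch below evaluates it for the study's 6.4 mm target; the intercept and slope are placeholder values, since the paper's fitted coefficients are not given in the abstract.

```python
# Fitts' Law in the Shannon formulation: movement time grows with the
# index of difficulty ID = log2(D/W + 1). The coefficients a and b are
# made-up placeholders; in practice they come from regression on
# observed tap times.
import math

def index_of_difficulty(distance_mm, width_mm):
    return math.log2(distance_mm / width_mm + 1.0)

def movement_time_ms(distance_mm, width_mm, a_ms=100.0, b_ms_per_bit=150.0):
    """MT = a + b * ID, with assumed regression coefficients."""
    return a_ms + b_ms_per_bit * index_of_difficulty(distance_mm, width_mm)

# The study's 6.4 mm target at two hypothetical stylus travel distances:
for d in (30.0, 120.0):
    print(f"D={d:5.1f} mm, W=6.4 mm -> "
          f"ID={index_of_difficulty(d, 6.4):.2f} bits, "
          f"MT~{movement_time_ms(d, 6.4):.0f} ms")
```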

18.
The overall quality of haptic user interfaces designed to support visually impaired students' science learning through sensorial feedback was systematically studied to investigate task performance and user behavior. Fourteen 6th- to 11th-grade students with visual impairments, recruited from a state-funded blind school, were asked to perform three main tasks (i.e., menu selection, structure exploration, and force recognition) using haptic user interfaces and a haptic device. This study used several dependent measures, categorized into three types of variables: (a) task performance, including success rate, workload, and task completion time; (b) user behavior, defined as cursor movements derived proportionally from the user's cursor position data; and (c) user preference. Results showed that interface type has significant effects on task performance, user behavior, and user preference, with varying degrees of impact on participants with severe visual impairments performing the tasks. The results of this study, together with a set of refined design guidelines and principles, should inform future research on haptic user interfaces for developing haptically enhanced science learning systems for the visually impaired.

19.
In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. This contribution addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages.

Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires more effort than the speech instructions, and that test subjects need frequent 'hints' in the form of speech messages. Despite this, more than two-thirds of the test subjects preferred one of the two sonification-based guiding modes, for two main reasons: first, speech messages make it harder to hear the sounds of the environment, and second, sonified messages convey information about the 'quantity' of the expected movement.

20.
The public release of datasets on the internet by government agencies, environmental scientists, political groups and many other organizations has fostered a social practice of data visualization. The audiences have expectations of production values commensurate with their daily experience of professional visual media. At the same time, access to this data has allowed visual designers and artists to apply their skills to what was previously a field dominated by scientists and engineers. The ‘aesthetic turn’ in data visualization has sparked debates between the new wave and older more scientifically grounded schools of thought on the topic. Sonification is not as well known or commonly practiced as visualization. But sound is a naturally affective, aesthetic and cultural medium. The extension of the aesthetic turn to sonification could transform this field from a scientific curiosity and engineering instrument into a popular mass medium. This paper proposes that a design approach can facilitate an aesthetic turn in sonification that integrates aesthetics and functionality by dissolving divisions between scientific and artistic methods. The first section applies the design perspective to the definition of sonification by replacing the linguistic concept of representation with non-verbal concept of functionality. The next section describes applications of the TaDa design method that raised aesthetic issues particular to sonification practice. The final section proposes a pragmatic aesthetics that distinguishes sonification from the auditory sciences and sonic arts. A design perspective may lead to a future where the general public tunes into pop sonifications for listening enjoyment as well as useful information about the world.
