Similar Documents
19 similar documents found (search time: 375 ms)
1.
An Eye Tracking Method for Human-Computer Interaction on Mobile Devices   Cited: 1 time in total (0 self-citations, 1 by others)
Traditional eye tracking devices are structurally complex, bulky, and heavy, and can usually only be used in a fixed desktop configuration, so they cannot support mobile interaction in ubiquitous computing environments. To address this, an eye tracking method for mobile interaction is proposed, comprising four layers: eye image processing, eye movement feature detection, eye movement data computation, and eye movement interaction applications. The infrared eye image is first filtered and binarized; then, based on the pupil-corneal reflection method, the pupil is detected by combining secondary localization with an improved ellipse fitting method, and a scaling-factor-driven template matching method is designed to detect the Purkinje spot. On this basis, gaze point coordinates and other eye movement data are computed. Finally, a head-mounted eye tracking prototype system based on a single infrared camera was designed and developed. Tests with real users show that the system offers moderate comfort together with high accuracy and robustness, verifying the feasibility and effectiveness of the proposed method for mobile interaction.
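As a rough illustration of the binarization-plus-ellipse-fitting pupil detection stage this abstract describes, the following Python/OpenCV sketch finds the darkest blob in an infrared eye frame and refines it with an ellipse fit. The threshold, blur kernel, and area filter are illustrative assumptions, not the paper's parameters; the Purkinje-spot template matching stage is omitted.

```python
import cv2
import numpy as np

def detect_pupil(ir_frame: np.ndarray):
    """Return the (cx, cy) center of the fitted pupil ellipse, or None.

    Expects an 8-bit grayscale infrared frame.
    """
    blurred = cv2.GaussianBlur(ir_frame, (7, 7), 0)            # suppress sensor noise
    _, binary = cv2.threshold(blurred, 40, 255,
                              cv2.THRESH_BINARY_INV)           # pupil = darkest region
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours
                  if cv2.contourArea(c) > 100 and len(c) >= 5] # fitEllipse needs >= 5 pts
    if not candidates:
        return None
    pupil = max(candidates, key=cv2.contourArea)               # keep the largest dark blob
    (cx, cy), _axes, _angle = cv2.fitEllipse(pupil)            # refine with ellipse fit
    return cx, cy
```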

2.
To address the insufficient user support and low sense of immersion of traditional navigation methods in complex virtual reality scenes, this paper proposes a binary classification model based on gradient boosting decision trees (GBDT). Using the eye movement data generated when users rely on navigation aids in a virtual reality environment, the model analyzes and predicts whether a user needs navigation assistance during a task. Evaluated on users' fixation sequences, the user-need detection method achieves an average precision of 77.6% and an average accuracy of 77.2%. In addition, a navigation-assistance prototype system was implemented with the proposed model: it classifies the user's eye movement data and automatically presents the navigation-assistance interface. Experimental results show that, compared with the traditional always-on navigation aid, the proposed adaptive navigation assistance provides a better user experience.
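A minimal sketch of the kind of GBDT binary classifier the abstract describes, using scikit-learn on synthetic per-window gaze features; the feature set, hyperparameters, and data are assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical features per gaze window: fixation count, mean fixation
# duration (ms), mean saccade amplitude (deg), scanpath length (px)
X = rng.random((500, 4))
y = rng.integers(0, 2, 500)     # 1 = user needs navigation assistance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```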

3.
To improve the accuracy of user requirement elicitation, an eye tracking based method for capturing user requirements in product design is proposed. An eye tracking experiment is designed to analyze the movement trajectories of the eyes while observing product forms of different styles and to obtain experimental data. The relationship between eye movement parameters and users' consumption psychology is studied from a psychological perspective, and four parameters, first fixation duration, time to first fixation, total fixation count, and mean pupil diameter, are selected to construct an index system for evaluating user requirements. A comprehensive weighting method is used to determine the weights of the parameters; the samples are then weighted and compared to determine the user requirements. The method is validated with a case study on eliciting user requirements for mobile phone design.
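The final ranking step reduces to a weighted sum over the four eye movement indices. A minimal sketch with placeholder values; the paper derives the weights with a comprehensive (combined) weighting method, whereas the weights and metric values below are invented for illustration.

```python
import numpy as np

# rows = candidate design samples; columns = the four selected eye movement
# indices, assumed pre-normalized so that larger means stronger preference:
# [first fixation duration, time to first fixation (inverted),
#  total fixation count, mean pupil diameter]
metrics = np.array([[0.62, 0.80, 0.55, 0.70],
                    [0.45, 0.60, 0.75, 0.52],
                    [0.71, 0.40, 0.66, 0.63]])
weights = np.array([0.3, 0.2, 0.3, 0.2])   # assumed combined weights (sum to 1)

scores = metrics @ weights                 # weighted sum per sample
print("scores:", scores.round(3), "-> preferred sample:", int(scores.argmax()))
```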

4.
程时伟, 沈哓权, 孙凌云, 胡屹凛. 《软件学报》 (Journal of Software), 2019, 30(10): 3037-3053
With the development of digital image processing techniques and deepening research in computer-supported cooperative work, eye tracking has begun to be applied to multi-user collaborative interaction. However, existing eye tracking techniques mainly target single users: the computing architecture for multi-user eye tracking is immature, the calibration process is complex, and mechanisms for recording, transmitting, and visually sharing eye tracking data all require further study. To this end, a collaborative calibration model based on gradient optimization is established to simplify multi-user calibration, and a multi-user eye tracking computing architecture is proposed to optimize the transmission and management of eye tracking data. Further, the influence of different visualizations of eye tracking data on users' visual attention in collaborative settings is explored: three visualization forms, dots, scatter plots, and trajectories, are designed, and the dot form is shown to effectively improve the efficiency of multi-user collaborative search tasks. On this basis, an eye tracking based collaborative code review system is designed and implemented, supporting synchronized recording and distribution of multiple users' eye tracking data during code review, as well as visual sharing based on real-time gaze points, code-line borders and background shading, and links between code lines. User experiments show that the average search time for code defects was reduced by 20.1% compared with no shared eye tracking visualization, significantly improving collaborative work efficiency and verifying the effectiveness of the method.
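The collaborative calibration model itself is not specified in the abstract, but the per-user core of any gradient-based calibration is fitting a map from raw pupil coordinates to screen coordinates over known targets. A minimal sketch under that reading, with synthetic data and a plain affine model (the paper's model is more elaborate):

```python
import numpy as np

rng = np.random.default_rng(1)
raw = rng.random((9, 2))                          # raw pupil positions at 9 targets
A_true = np.array([[800.0, 20.0], [-15.0, 600.0]])
screen = raw @ A_true + np.array([100.0, 50.0])   # known on-screen target positions

A, b, lr = np.zeros((2, 2)), np.zeros(2), 0.1
for _ in range(5000):                             # plain gradient descent on MSE
    err = raw @ A + b - screen
    A -= lr * raw.T @ err / len(raw)
    b -= lr * err.mean(axis=0)

residual = np.linalg.norm(raw @ A + b - screen, axis=1).mean()
print(f"mean calibration error: {residual:.3f} px")
```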

5.
孟刚, 陈纾. 《计算机仿真》 (Computer Simulation), 2022, 39(1): 153-157
Gestures for operating information in multi-channel visual interfaces are diverse. To meet the needs of different users, a multi-touch based information presentation method for multi-channel visual interfaces is proposed. Atomic gestures are modeled with a neural network, and logical, temporal, and spatial relation descriptors are introduced to analyze the structure of composite gestures; a BP network classifier detects atomic gestures and triggers state transitions in the composite gesture model, realizing multi-touch gesture recognition. Separate paths are defined for the interaction interface and the main program functions, and interaction devices are decoupled from information processing, achieving multi-channel information integration and interaction control. Finally, the initial data are transformed into an information schema: variables are eliminated from the network interface, all eliminated variables are placed in an elimination tree to construct a junction tree, and potential updates are applied to the likelihood functions of the observed variables, completing the information presentation process of the visual interface. Simulation results show that the naturalness of human-computer interaction is effectively improved, providing users with more diversified network services.

6.
Feature integration theory is one of the most influential theories of visual attention and provides a theoretical basis for optimizing visual communication. This paper proposes a method for designing flower-shaped glyphs using rose curves, exploiting the automatic, unconscious processing of the pre-attentive stage to guide users' awareness of target information and thus optimize visual search in information visualization. The proposed glyph can map six variables in information visualization applications; the variable features are easy to identify, and proportional relationships between two variables are easy to express. Through a case study of 2008 U.S. education funding, both goal-directed and exploratory information visualization designs were practiced. Eye tracking experiments measuring users' time to first fixation and fixations within 5 s explored which visual objects users process first, and a method is proposed for controlling the visual-feature similarity of flower glyphs to optimize the user's visual experience.
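To make the glyph construction concrete, here is a small matplotlib sketch of a six-petal rose-curve flower; scaling each petal by a data value is one plausible way to realize the six-variable mapping the abstract mentions, not necessarily the paper's exact encoding.

```python
import numpy as np
import matplotlib.pyplot as plt

values = [0.9, 0.5, 0.7, 0.3, 0.8, 0.6]        # six hypothetical data variables
k = len(values)
theta = np.linspace(0, 2 * np.pi, 1200)
r = np.abs(np.cos(k / 2 * theta))               # |cos(3θ)| traces a 6-petal rose

# scale each petal by its variable value (petals are separated by the zeros of r)
petal_width = 2 * np.pi / k
petal_idx = np.floor((theta + petal_width / 2) / petal_width).astype(int) % k
r_scaled = r * np.take(values, petal_idx)

ax = plt.subplot(projection="polar")
ax.fill(theta, r_scaled, alpha=0.6)
ax.set_xticks([]); ax.set_yticks([])
plt.show()
```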

7.
Eye movement interaction has broad application prospects in human-computer interaction, but traditional eye tracking sensors are generally intrusive, require complex calibration, and are expensive, while ordinary monocular camera sensors have low resolution. This paper proposes an eye movement behavior recognition method based on the front camera video stream, using histogram of oriented gradients (HOG) features + SVM + an LSTM neural network, and implements a simple human-computer interaction application. The method first detects and tracks the face; after face alignment, the two eye regions are extracted from the coordinates of the four eye-corner landmarks. An SVM model classifies the eye state (open, closed, or non-blinking), the eyeball-center positions in adjacent frames are compared to coarsely estimate eye movement, and frame-difference video sequences of candidate intentional eye gestures are fed into an LSTM network for prediction. The resulting eye movement recognition output then triggers computer commands to complete the interaction. Tested on a self-built data set of 20,000 samples (about 10% negative samples), the method achieves better than 95% accuracy for dynamic blink recognition and 99.3% accuracy for eye movement behavior prediction.
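A minimal sketch of the HOG + SVM eye-state stage, using scikit-image and scikit-learn. The patch size, HOG parameters, and synthetic labels are assumptions; the face alignment and LSTM gesture stages are omitted for brevity.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 64))     # 200 hypothetical grayscale eye patches
labels = rng.integers(0, 2, 200)        # 1 = eye open, 0 = eye closed

feats = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for p in patches])
svm = SVC(kernel="rbf").fit(feats, labels)
print("training accuracy:", svm.score(feats, labels))
```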

8.
Sequential recommender systems can precisely predict a user's next interaction from the time-ordered sequence of user-item interactions. Existing sequential recommendation algorithms tend to overfit to user interests, which makes the recommended content severely homogeneous and prevents personalized recommendation. This paper therefore proposes a personalized sequential recommendation algorithm fusing a knowledge graph with attention mechanisms (SR-KGA). First, a knowledge graph is introduced and items are embedded with a graph convolutional network; second, a sequence-to-sequence (seq2seq) model is built with self-attention and multi-head attention mechanisms; finally, a diversity regularization term is added to the loss function. The interaction sequence is then used to predict the sequence of items the user may interact with in the future, yielding recommendations. Experiments on real data sets show that SR-KGA improves the diversity of the recommendation list while maintaining recommendation accuracy, achieving personalized recommendation.
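As a rough illustration of the attention backbone, this PyTorch sketch runs multi-head self-attention over a sequence of item embeddings and adds a toy entropy-based diversity penalty to the loss. The dimensions, candidate scoring, and regularizer form are assumptions, not SR-KGA's actual architecture.

```python
import torch
import torch.nn as nn

d_model, n_heads, seq_len, n_items = 64, 4, 20, 1000
# one user's interaction sequence; in SR-KGA these embeddings would come
# from a GCN over the knowledge graph
item_emb = torch.randn(1, seq_len, d_model)

attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
ctx, _ = attn(item_emb, item_emb, item_emb)        # self-attention over the sequence

out_proj = nn.Linear(d_model, n_items)
log_probs = out_proj(ctx[:, -1]).log_softmax(-1)   # scores over candidate items

diversity_reg = (log_probs.exp() * log_probs).sum()   # negative entropy: lower = more diverse
loss = -log_probs[0, 42] + 0.1 * diversity_reg        # toy target: next item is 42
loss.backward()                                        # trains attention + projection
```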

9.
Decision evolution sets are a theory for handling the evolution of decision rules over time series. They shift the focus from static decision information systems to dynamic time series, studying how a decision information system evolves as time changes, and constitute a new approach to decision research. The standard structure of decision evolution sets currently exhibits some problems, such as few predicted attributes and overly large prediction angles. The three-way structure of decision evolution sets is an effective way to improve prediction accuracy, but its thresholds α and β are fixed, whereas the data in a time series keeps changing, and fixed α and β cannot adapt well to such change. A game-theoretic method is used to adjust α and β so that they adapt to the changes of the decision information system over the time series, and the adjustment is demonstrated with an example.
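The abstract does not give the adjustment procedure, but a common game-theoretic reading of three-way decisions treats accuracy and coverage as players that take turns nudging α (accept when the posterior is at least α) and β (reject when it is at most β). A toy sketch under that assumption; the posteriors, step size, and the 0.8/0.2 payoff weights are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(1000)                 # posterior P(positive | object)
y = rng.random(1000) < p             # hypothetical labels consistent with p

def payoffs(alpha, beta):
    accept, reject = p >= alpha, p <= beta
    decided = accept | reject
    correct = (accept & y) | (reject & ~y)
    return correct.sum() / max(decided.sum(), 1), decided.mean()

def u(alpha, beta, w_acc):           # each player's utility mixes both measures
    acc, cov = payoffs(alpha, beta)
    return w_acc * acc + (1 - w_acc) * cov

alpha, beta, step = 0.75, 0.25, 0.02
for _ in range(100):                 # repeated game: alternating unilateral moves
    if u(alpha + step, beta, 0.8) > u(alpha, beta, 0.8):
        alpha = min(alpha + step, 1.0)          # accuracy-oriented player
    elif alpha - step > beta and u(alpha - step, beta, 0.8) > u(alpha, beta, 0.8):
        alpha -= step
    if beta + step < alpha and u(alpha, beta + step, 0.2) > u(alpha, beta, 0.2):
        beta += step                            # coverage-oriented player
    elif u(alpha, beta - step, 0.2) > u(alpha, beta, 0.2):
        beta = max(beta - step, 0.0)
print("tuned thresholds:", round(alpha, 2), round(beta, 2))
```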

10.
Eye movements during reading follow certain regularities, and models of reading eye movements help people better understand these regularities. To address the complexity of existing modeling approaches for reading eye movements, this work breaks away from the traditional fixation-granularity and regression handling schemes, proposing a word-based fixation-granularity scheme and a skilled-reader-based regression scheme. Exploiting the strong similarity between labeling reading fixation sequences and natural-language sequence labeling, a sequence labeling method for reading fixation sequences is formed, transforming the complex modeling of reading eye movements into a simple sequence labeling problem, which is implemented with a maximum entropy Markov model (MEMM). Experimental results show that the proposed method describes the reading eye movement task well and is easy to implement with machine learning models.
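A minimal MEMM-flavored sketch: a logistic-regression model of P(label_t | label_{t-1}, features_t) with greedy left-to-right decoding, labeling each word as fixated (1) or skipped (0). The features, synthetic data, and greedy decoding are assumptions; the paper's feature set is richer, and MEMMs are typically decoded with Viterbi.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
word_len = rng.integers(1, 12, n)         # word length in characters
word_freq = rng.random(n)                 # normalized corpus frequency
prev_label = rng.integers(0, 2, n)        # was the previous word fixated?
# synthetic rule: long, rare words, or words right after a skip, get fixated
y = ((word_len > 5) & (word_freq < 0.5) | (prev_label == 0)).astype(int)

X = np.column_stack([word_len, word_freq, prev_label])
model = LogisticRegression().fit(X, y)

def label_sentence(lens, freqs):
    """Greedy left-to-right decoding, conditioning on the previous label."""
    labels, prev = [], 1                  # assume a fixation precedes the sentence
    for length, freq in zip(lens, freqs):
        prev = int(model.predict([[length, freq, prev]])[0])
        labels.append(prev)
    return labels

print(label_sentence([3, 9, 2, 7], [0.8, 0.1, 0.9, 0.3]))
```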

11.
This pilot study explores the use of combining multiple data sources (subjective, physical, physiological, and eye tracking) in understanding user cost and behavior. Specifically, we show the efficacy of such objective measurements as heart rate variability (HRV), and pupillary response in evaluating user cost in game environments, along with subjective techniques, and investigate eye and hand behavior at various levels of user cost. In addition, a method for evaluating task performance at the micro-level is developed by combining eye and hand data. Four findings indicate the great potential value of combining multiple data sources to evaluate interaction: first, spectral analysis of HRV in the low frequency band shows significant sensitivity to changes in user cost, modulated by game difficulty, a result consistent with subjective ratings, but pupillary response fails to accord with user cost in this game environment; second, eye saccades seem to be more sensitive to user cost changes than eye fixation number and duration, or scanpath length; third, a composite index based on eye and hand movements is developed, and it shows more sensitivity to user cost changes than a single eye or hand measurement; finally, timeline analysis of the ratio of eye fixations to mouse clicks demonstrates task performance changes and learning effects over time. We conclude that combining multiple data sources has a valuable role in human–computer interaction (HCI) evaluation and design.
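For reference, LF-band HRV analysis of the kind mentioned above is conventionally the power of an evenly resampled RR-interval series in the 0.04-0.15 Hz band. A sketch with a synthetic signal; the 4 Hz resampling rate and Welch parameters are standard conventions, not this study's settings.

```python
import numpy as np
from scipy.signal import welch

fs = 4.0                                  # evenly resampled RR series, Hz
t = np.arange(0, 300, 1 / fs)             # a 5-minute recording
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.random.randn(t.size)

freqs, psd = welch(rr - rr.mean(), fs=fs, nperseg=256)
lf_band = (freqs >= 0.04) & (freqs < 0.15)
lf_power = np.trapz(psd[lf_band], freqs[lf_band])   # integrate PSD over LF band
print(f"LF power: {lf_power:.2e} s^2")
```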

12.
Human visual search plays an important role in many human–computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to complete an HCI task, but to understand the many human processes that interact in visual search, which will in turn inform the detailed design of better user interfaces. This article describes a detailed instantiation, in the form of a computational cognitive model, of a comprehensive theory of human visual processing known as “active vision” (Findlay & Gilchrist, 2003). The computational model is built using the Executive Process-Interactive Control cognitive architecture. Eye-tracking data from three experiments inform the development and validation of the model. The modeling asks—and at least partially answers—the four questions of active vision: (a) What can be perceived in a fixation? (b) When do the eyes move? (c) Where do the eyes move? (d) What information is integrated between eye movements? Answers include: (a) Items nearer the point of gaze are more likely to be perceived, and the visual features of objects are sometimes misidentified. (b) The eyes move after the fixated visual stimulus has been processed (i.e., has entered working memory). (c) The eyes tend to go to nearby objects. (d) Only the coarse spatial information of what has been fixated is likely maintained between fixations. The model developed to answer these questions has both scientific and practical value in that the model gives HCI researchers and practitioners a better understanding of how people visually interact with computers, and provides a theoretical foundation for predictive analysis tools that can predict aspects of that interaction.

13.
Design decision making happens at every design node and iteration, and experts' decision-making biases and personal preferences ultimately affect the success or failure of a product reaching the market. In this paper, we try to predict design decision making by investigating the relations between design decisions and subjects' eye movements and electroencephalogram (EEG) responses. Four different methods were applied and compared for classifying the EEG features, and two feature-selection methods were used to match the EEG features to the design decision outcomes. In this study, the authors applied a multimodal fusion strategy for design decision recognition, using eye tracking and EEG response data as the input dataset. According to the experimental results, the fusion strategy combining EEG signals and eye movement characteristics fits the expert decision results well. Multimodal fusion of eye tracking data and EEG has strong potential to become a new design decision method that guides design practice and provides supportive, objective data to reduce the effects of subjectivity, one-sidedness, and superficiality in decision making. These results show that it is possible to create a classifier for design decision-making behaviour based on features extracted from eye movements and EEG responses.
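A sketch of the feature-level fusion idea: concatenate eye tracking and EEG feature vectors and compare several classifiers, in the spirit of the study's four-method comparison. The features and labels are synthetic placeholders, and the four scikit-learn classifiers below stand in for whatever methods the paper actually compared.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
eye = rng.random((120, 6))      # e.g., fixation/saccade statistics per trial
eeg = rng.random((120, 32))     # e.g., band-power features per trial
X = np.hstack([eye, eeg])       # feature-level fusion
y = rng.integers(0, 2, 120)     # expert decision outcome per trial

for clf in (SVC(), RandomForestClassifier(), KNeighborsClassifier(),
            LogisticRegression(max_iter=1000)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean().round(3))
```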

14.
Being knowledgeable and using that knowledge accurately is the basis for all development. Knowledge guides designers toward correct decision making and early intervention when assessing the success of current products. The aesthetic value of a product is the fundamental point of communication between the designer and the user. Therefore, with the aim of accessing users' hidden aesthetic taste, eye tracking technology was used as a tool, since sight has a great influence on one's inner world. Within this scope, it was decided to work on a static product group, and 40 armchair variations were created as a sample. The eye movements of 60 participants on armchairs with variable stylistic features, together with their taste evaluations, were monitored across three experimental stages. In line with the main objective of the study, significant and guiding outputs were obtained. The results showed that taste for a product can be understood through the eye using viewing metrics such as time to first fixation, first fixation duration, fixation count and duration, and visit count and duration, and that this viewing knowledge can inform design decisions. In addition, a road map and a procedure were created for accessing implicit taste information for design practice through eye tracking technology.
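Two of the viewing metrics named above (time to first fixation, fixation count and duration) reduce to simple bookkeeping over detected fixations within an area of interest (AOI). A sketch with hypothetical fixations and an assumed AOI; fixation detection itself (e.g., a dispersion-based algorithm) is taken as already done.

```python
# fixations: (start_ms, duration_ms, x, y), assumed output of a fixation
# detection algorithm; the AOI box is a hypothetical region of one armchair
fixations = [(0, 180, 50, 40), (210, 250, 410, 300),
             (480, 300, 430, 320), (800, 220, 90, 400)]
aoi = (350, 250, 500, 380)                        # x0, y0, x1, y1

def in_aoi(x, y, box):
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

hits = [f for f in fixations if in_aoi(f[2], f[3], aoi)]
time_to_first = hits[0][0] if hits else None      # ms until first AOI fixation
print("time to first fixation:", time_to_first)
print("fixation count:", len(hits))
print("total fixation duration:", sum(f[1] for f in hits))
```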

15.
Understanding how to induce Kansei (emotion or affect) in consumers through form is critical in product design and development. Conventional Kansei evaluations, which involve subjectively evaluating the overall form of a product, do not clarify the effects of a product's individual parts on people's Kansei evaluation. A microscale analysis of the eye movements of people looking at product forms may redeem this flaw in subjective evaluation. However, simultaneously recording eye movements while people make Kansei evaluations is challenging, so previous studies have typically investigated either the relationship between form and eye movement or the relationship between form and Kansei separately; the eye movements of people performing Kansei evaluations on product forms have not yet been clarified. To address this issue, the present study used an eye tracking system to analyze the changes in the fixation points of people performing various Kansei evaluations. Twenty participants were recruited for 8 Kansei evaluations on the forms of 16 chairs using semantic differential (SD) ratings, while their eye movements during these evaluations were tracked simultaneously. Through factor analysis of the Kansei evaluation data, two principal factors, valence (pleasure) and arousal, were extracted from the 8 Kansei scales to constitute a Kansei plane compatible with Russell's circumplex model of affect. By adopting the factor scores of the 16 chairs as coordinates, the 16 chairs were mapped into the Kansei plane. Further analysis of eye fixations on the chairs located in this plane yielded the following results: (a) Pleasure had a more significant effect on participants' visual attention than arousal; participants required more fixation points when evaluating chair forms that induced displeasure. (b) Participants typically fixated on two parts of the chairs during their Kansei evaluations, namely the seat and the backrest, indicating that these are the two primary features people consider when evaluating chairs. The results clarify the effect of various Kansei on eye movements and thereby enable predicting people's Kansei evaluations of product forms by analyzing their eye movements.
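A sketch of the dimensionality-reduction step: factor analysis compressing the 8 SD scales into two factors (interpreted as valence and arousal), whose scores then serve as 2-D coordinates for each chair. The ratings are synthetic, and in practice the valence/arousal interpretation must be read off the loadings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(16, 8))      # 16 chairs x 8 SD scales (participant means)

fa = FactorAnalysis(n_components=2, random_state=0)
coords = fa.fit_transform(ratings)      # factor scores used as plane coordinates
print("loadings:", fa.components_.shape)            # (2 factors, 8 scales)
for chair, (valence, arousal) in enumerate(coords):
    print(f"chair {chair:2d}: valence={valence:+.2f}, arousal={arousal:+.2f}")
```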

16.
Visual fixation on one's tool(s) takes much attention away from one's primary task. Following the belief that the best tools 'disappear' and become invisible to the user, we present a study comparing visual fixations (eye gaze within locations on a graphical display) and performance for mouse, pen, and physical slider user interfaces. Participants conducted a controlled, yet representative, color matching task that required user interaction representative of many data exploration tasks such as parameter exploration of medical or fuel cell data. We demonstrate that users may spend up to 95% fewer visual fixations on physical sliders versus standard mouse and pen tools without any loss in performance for a generalized visual performance task.

17.
《Ergonomics》, 2012, 55(5-6): 607-615
Abstract

A measurement system for quantitative analysis of eye movements and the distribution of eye fixation points was developed in this study, and experiments on the physiological fatigue characteristics of eye movements were conducted with it. The subjects were six young males. No significant change was quantitatively found in saccadic eye movements during and/or after five hours of rapid eye tracking tasks, although the binocular saccadic velocity of two subjects was found to decrease temporarily. The maximum eye movement velocity obtained in the experiment was ascertained in order to produce a scale for various visual work as an ergonomic index.
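The saccadic-velocity measure at the center of this study can be sketched as numerical differentiation of a sampled gaze-angle trace; the sampling rate and the synthetic saccade profile below are assumptions, not the study's apparatus.

```python
import numpy as np

fs = 500.0                                # eye tracker sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
angle = 10.0 / (1.0 + np.exp(-80.0 * (t - 0.1)))   # synthetic 10-degree saccade

velocity = np.gradient(angle, 1 / fs)     # angular velocity, deg/s
print(f"peak saccadic velocity: {velocity.max():.0f} deg/s")
```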

18.
The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This article summarizes our research into whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information, and avoiding distraction by restricting visual input. Several types of eye and head movements are necessary for realizing these functions. We designed and evaluated a gaze behaviour system for the iCat robot that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit movements, and gaze shifts. We discuss how these models are integrated into the software environment of the iCat and can be used to create complex interaction scenarios. We report on some user tests and draw conclusions for future evaluation scenarios.

19.
Despite decades of studies on the link between eye movements and human cognitive processes, the exact nature of the link between eye movements and computer-based assessment performance remains unknown. To bridge this gap, the present study investigates whether human eye movement dynamics can predict computer-based assessment performance (accuracy of response) across different presentation modalities (picture vs. text). An eye tracking system was employed to collect 63 college students' eye movement behaviors while they engaged in computer-based physics concept questions presented as either pictures or text. Students' responses were collected immediately after the picture or text presentations in order to determine the accuracy of responses. The results demonstrate that students' eye movement behavior can successfully predict their computer-based assessment performance. Remarkably, mean fixation duration has the greatest power to predict the likelihood of responding correctly to the physics concept questions, followed by the proportion of re-reading time. Additionally, mean saccade distance has the least predictive power, and is negatively related to the likelihood of responding correctly in the picture presentation. Interestingly, pictorial presentations appear to convey physics concepts more quickly and efficiently than textual presentations do. This study adds empirical evidence for a prediction model linking eye movement behaviors to successful cognitive performance, and it provides insight into modality effects on students' computer-based assessment performance through eye movement evidence.
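A sketch of the kind of prediction model the study reports: logistic regression of response correctness on mean fixation duration, proportional re-reading time, and mean saccade distance. The data are synthetic, with coefficient signs constructed to mimic the reported directions of effect.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 630                                    # ~63 students x 10 items
mean_fix = rng.normal(250, 40, n)          # mean fixation duration, ms
reread = rng.random(n)                     # proportion of re-reading time
saccade = rng.normal(3.0, 0.8, n)          # mean saccade distance, degrees
# synthetic ground truth mimicking the reported directions of effect
logit = 0.02 * (mean_fix - 250) + 1.5 * reread - 0.8 * saccade + 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([mean_fix, reread, saccade])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_.round(3))   # expect +, +, - signs
```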

