Similar Literature
A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized primarily by impairments in social communication, stereotyped behaviors, and restricted interests. It carries a high rate of disability and severely affects children's healthy development. Clinical diagnosis of ASD is time-consuming and highly subjective, so a fast, economical, and effective objective screening method is urgently needed. Research has found that children with ASD show atypical visual perception of emotion, which makes eye-tracking technology a promising aid in ASD diagnosis. This paper proposes a model that combines the atypical emotional visual perception patterns of ASD in natural scenes with machine learning to automatically screen for ASD. The model extracts eye-movement trajectory features of emotion perception in natural scenes and models them with machine learning, so that children with ASD can be identified automatically from their gaze trajectories. Experimental results show an accuracy of 79.71%, suggesting that the method could serve as an auxiliary tool for early screening of children with ASD.
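A minimal sketch of this kind of pipeline (not the authors' code): per-child gaze-trajectory summary features feed a standard classifier. The feature names and synthetic data below are illustrative assumptions, since the abstract does not specify the exact feature set or model.

```python
# Sketch: ASD vs. TD classification from gaze-trajectory summary features.
# Features and data are illustrative stand-ins, not the paper's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# columns: mean fixation duration (ms), fixation count, mean saccade amplitude (deg)
X_td = rng.normal([250, 40, 4.0], [40, 8, 0.8], size=(50, 3))
X_asd = rng.normal([300, 30, 5.0], [40, 8, 0.8], size=(50, 3))  # toy group shift
X = np.vstack([X_td, X_asd])
y = np.array([0] * 50 + [1] * 50)                # 1 = ASD, 0 = TD

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```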

2.
Eye movement modelling examples (EMME) are computer-based videos displaying the visualized eye gaze behaviour of a domain expert (the model) while carefully executing a learning or problem-solving task. The role of EMME in promoting cognitive performance (i.e., final scores of learning outcome or problem solving) has been questioned due to mixed findings from empirical studies. This study tested the effects of EMME on attention guidance and cognitive performance by means of meta-analytic procedures. Data for both experimental and control groups, and for both posttest and pretest, were extracted to calculate the effect sizes; the EMME group was treated as the experimental group and the non-EMME group as the control group. Twenty-five independent articles were included. The overall analysis showed a significant effect of EMME on time to first fixation (d = −0.83), fixation duration (d = 0.74), and cognitive performance (d = 0.43), but not on fixation count, indicating that using EMME not only helped learners attend faster and longer to task-relevant elements but also fostered their final cognitive performance. Interestingly, task type significantly moderated the effect of EMME on cognitive performance: moderation analyses showed that EMME was beneficial to learners' performance when non-procedural tasks (rather than procedural tasks) were used. These findings have implications for future research as well as for practical applications, in the field of computers and learning, of videos displaying a model's visualized eye gaze behaviour.
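For reference, the pooled effect sizes quoted above are standardized mean differences. A minimal sketch of Cohen's d with a pooled standard deviation follows; the input numbers are illustrative, not values from the meta-analysis.

```python
# Sketch: Cohen's d (standardized mean difference) with pooled SD,
# the effect-size form used to compare EMME vs. non-EMME groups.
import math

def cohens_d(mean_exp, sd_exp, n_exp, mean_ctl, sd_ctl, n_ctl):
    """d = (M_exp - M_ctl) / pooled SD."""
    pooled_sd = math.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_exp + n_ctl - 2))
    return (mean_exp - mean_ctl) / pooled_sd

# e.g., fixation duration on task-relevant areas (hypothetical numbers)
print(round(cohens_d(3.2, 0.8, 30, 2.6, 0.9, 30), 2))
```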

3.
Of late there has been growing interest in the potential of technology to support children with Autistic Spectrum Disorders (ASD) with social and life skills. There has also been burgeoning interest in the potential use of mobile technology in the classroom and in the use of such technology to support children with ASD. Building on these developments, the HANDS project has developed a mobile cognitive support application for smartphones, based on the principles of persuasive technology design, which supports children with ASD with social and life skills functioning, areas of ability that tend to be impaired in this population. The software application has been piloted in four special schools for children with ASD. This paper reports on a qualitative interpretivist evaluation, which explores which factors may mediate how the software application is incorporated into existing practice and what influence it has on practice. Kairos is identified as a key factor, associated with the teachers' view of the software application as extending their reach beyond the classroom. Design guidelines are proposed for future implementations of similarly purposed technology tools.

4.
Autism spectrum disorder (ASD), a kind of mental disorder, has become an internationally recognized and serious public health problem. Paintings by autistic children have not previously been compared systematically with those of typically developed (TD) children. In this work, we construct an ASD painting database containing 478 paintings drawn by ASD individuals and 490 drawn by a TD group. Through subjective and objective analysis, several significant hallmarks, such as structuring logic, faces, repetitive structure, composition location, and edge completeness, are found within the ASD paintings. We further train a classifier of ASD versus TD painters using the extracted features, which shows encouraging accuracy as a potential screening tool for ASD. This work sheds light on understanding the uniqueness of autistic children through their paintings. The database will be released to the public.

5.
Sakai H, Shin D, Kohama T, Uchiyama Y. Ergonomics, 2012, 55(7): 743-751.
Alerting drivers for self-regulation of attention might decrease crash risks attributable to absent-minded driving. However, no reliable method exists for monitoring driver attention. Therefore, we examined attentional effects on gaze preference for salient loci (GPS) in traffic scenes. In an active viewing (AV) condition requiring endogenous attention for traffic scene comprehension, participants identified appropriate driving speeds for presented traffic scene images. In a passive viewing (PV) condition requiring no endogenous attention, participants passively viewed traffic scene images. GPS was quantified as the mean saliency value averaged across fixation locations. Results show that GPS was lower during AV than during PV. Additionally, gaze dwell time on signboards was shorter for AV than for PV. These results suggest that, in the absence of endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. Therefore, increased GPS can indicate absent-minded driving. PRACTITIONER SUMMARY: The present study demonstrated that, without endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. This result suggests that increased gaze preference for salient loci indicates absent-minded driving, which is otherwise difficult to detect.
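A minimal sketch of the GPS measure as defined above (mean saliency value averaged across fixation locations). The saliency map and fixation list below are random stand-ins for real data.

```python
# Sketch: gaze preference for salient loci (GPS) = mean saliency at fixations.
import numpy as np

def gaze_preference_for_saliency(saliency: np.ndarray, fixations) -> float:
    """saliency: 2-D map; fixations: iterable of (x, y) pixel coordinates."""
    vals = [saliency[int(y), int(x)] for x, y in fixations]
    return float(np.mean(vals))

saliency = np.random.rand(480, 640)              # stand-in for a real saliency map
fixations = [(320, 240), (100, 50), (600, 400)]  # stand-in fixation locations
print(gaze_preference_for_saliency(saliency, fixations))
```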

6.
This paper introduces the use of a visual attention model to improve the accuracy of gaze tracking systems. Visual attention models simulate the selective attention part of the human visual system. For instance, in a bottom-up approach, a saliency map is defined for the image, giving every pixel an attention weight as a function of its colour, edges, or intensity. Our algorithm uses an uncertainty window, defined by the gaze tracker's accuracy and located around the gaze point given by the tracker. Using a visual attention model, it then searches for the most salient points, or objects, located inside this uncertainty window, and determines a new and, hopefully, better gaze point. This combination of a gaze tracker with a visual attention model is the main contribution of the paper. We demonstrate the promising results of our method in two experiments conducted in different contexts: (1) free exploration of a visually rich 3D virtual environment without a specific task, and (2) a gaze-tracking-based video game involving a selection task. Our approach can be used to improve real-time gaze tracking systems in many interactive 3D applications such as video games or virtual reality. The approach can be adapted to any gaze tracker, and the visual attention model can likewise be adapted to the application in which it is used.
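A minimal sketch of the core idea, assuming a precomputed saliency map: search the uncertainty window around the tracker's reported gaze point for the most salient pixel and return it as the refined gaze point. The window radius and inputs are illustrative.

```python
# Sketch: snap a noisy gaze estimate to the most salient pixel inside an
# uncertainty window whose radius reflects tracker accuracy.
import numpy as np

def refine_gaze(saliency: np.ndarray, gaze_xy, radius: int):
    h, w = saliency.shape
    x0, y0 = gaze_xy
    xs = slice(max(0, x0 - radius), min(w, x0 + radius + 1))
    ys = slice(max(0, y0 - radius), min(h, y0 + radius + 1))
    window = saliency[ys, xs]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return xs.start + dx, ys.start + dy          # refined gaze point (x, y)

saliency = np.random.rand(480, 640)              # stand-in saliency map
print(refine_gaze(saliency, gaze_xy=(320, 240), radius=40))
```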

7.
We discuss an attentional model for simultaneous object tracking and recognition that is driven by gaze data. Motivated by theories of perception, the model consists of two interacting pathways, identity and control, intended to mirror the what and where pathways in neuroscience models. The identity pathway models object appearance and performs classification using deep (factored) restricted Boltzmann machines. At each point in time, the observations consist of foveated images, with resolution decaying toward the periphery of the gaze. The control pathway models the location, orientation, scale, and speed of the attended object; the posterior distribution of these states is estimated with particle filtering. Deeper in the control pathway is an attentional mechanism that learns to select gazes so as to minimize tracking uncertainty. Unlike in our previous work, we introduce gaze selection strategies that operate in the presence of partial information and on a continuous action space. We show that a straightforward extension of the existing approach to the partial-information setting results in poor performance, and we propose an alternative method based on modeling the reward surface as a Gaussian process. This approach gives good performance in the presence of partial information and allows us to expand the action space from a small, discrete set of fixation points to a continuous domain.
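A minimal, illustrative sketch of the reward-surface idea, not the authors' model: fit a Gaussian process to rewards observed at past gaze points and pick the next gaze over a continuous domain with an upper-confidence-bound rule. The reward function here is a toy placeholder.

```python
# Sketch: Gaussian-process reward surface over continuous gaze positions,
# with a UCB-style pick of the next fixation. Data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
tried = rng.uniform(0, 1, size=(8, 2))           # past gaze points (x, y)
reward = -np.linalg.norm(tried - 0.5, axis=1)    # toy tracking-quality reward

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(tried, reward)
cand = rng.uniform(0, 1, size=(500, 2))          # candidate gazes in [0, 1]^2
mu, sd = gp.predict(cand, return_std=True)
print(cand[np.argmax(mu + sd)])                  # next gaze: exploit + explore
```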

8.
The rational management of attentional resources, and switching attention at the appropriate moments, is an important capability for an agent. Existing attention-control methods emphasize the cognitive aspects of intelligence and rarely consider its affective aspects. This paper proposes an attention-control model based on emotion and personality, which organically integrates an emotion mechanism with a cognitive system to manage the agent's attentional resources and attention switching. Simulation results verify the effectiveness of the model; moreover, under the effective regulation of the emotion mechanism, the agent's behavior exhibits distinct personalities.

9.
Mind wandering is a ubiquitous phenomenon in which attention involuntarily shifts from task-related thoughts to internal, task-unrelated thoughts. Mind wandering can have negative effects on performance; hence, intelligent interfaces that detect mind wandering can improve performance by intervening and restoring attention to the current task. We investigated the use of eye gaze and contextual cues to automatically detect mind wandering during reading with a computer interface. Participants were pseudorandomly probed to report mind wandering while an eye tracker recorded their gaze during the reading task. Supervised machine learning techniques detected positive responses to mind wandering probes from eye gaze and context features in a user-independent fashion. Mind wandering was detected with an accuracy of 72% (expected accuracy by chance was 60%) when probed at the end of a page, and with an accuracy of 67% (chance was 59%) when probed in the midst of reading a page. Global gaze features (gaze patterns independent of content, such as fixation durations) were more effective than content-specific local gaze features. An analysis of the features revealed diagnostic patterns of eye gaze behavior during mind wandering: (1) certain types of fixations were longer; (2) reading times were longer than expected; (3) more words were skipped; and (4) there was larger variability in pupil diameter. Finally, the automatically detected mind wandering rate correlated negatively with measures of learning and transfer even after controlling for prior knowledge, providing evidence of predictive validity. Possible improvements to the detector and applications that utilize it are discussed.
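A minimal sketch of the kind of global (content-independent) gaze features described above; the feature names are illustrative, not the paper's exact set.

```python
# Sketch: global gaze features for mind-wandering detection, computed from
# fixation durations (ms), pupil diameters (mm), and word-skip counts.
import numpy as np

def global_gaze_features(durations_ms, pupil_mm, n_words_skipped, n_words):
    return {
        "fix_dur_mean": float(np.mean(durations_ms)),
        "fix_dur_sd": float(np.std(durations_ms)),
        "pupil_sd": float(np.std(pupil_mm)),     # larger during mind wandering
        "skip_rate": n_words_skipped / n_words,  # more words skipped
    }

print(global_gaze_features([210, 260, 540, 190], [3.1, 3.4, 3.9, 3.0], 7, 120))
```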

10.
Traffic safety is directly related to the mental and physical condition of the driver. Performing secondary tasks while driving is an additional activity that dissipates attention and adds to the driver's workload. Identifying driver fatigue and workload from gaze behavior is one way to help ensure a safe driving experience. The purpose of this paper is to classify and predict perceived driving workload using a set of eye-tracking metrics (gaze fixation, duration, pointing, and pupil diameter). The ability of eye-tracking metrics to predict driving workload was investigated: frustration, performance, and temporal load showed correlations with gaze metrics, and gaze point, duration, fixation, and pupil diameter significantly influenced driving workload. Relevance to industry: the results supply specialists in eye-tracking/sensor technologies and traffic safety with new knowledge to improve the design of driving-performance and safety-monitoring systems and the efficiency of the driving process.
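A minimal sketch of the reported correlation analysis; the per-session table, its column names, and the injected dependence are synthetic placeholders, not the study's data.

```python
# Sketch: correlating eye-tracking metrics with perceived-workload subscales.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 40
pupil = rng.normal(3.5, 0.4, n)                            # pupil diameter (mm)
df = pd.DataFrame({
    "fixation_count": rng.poisson(80, n),
    "fixation_dur_ms": rng.normal(260, 40, n),
    "pupil_diam_mm": pupil,
    "temporal_load": 2 + 1.5 * pupil + rng.normal(0, 0.5, n),  # toy dependence
    "frustration": rng.normal(4, 1, n),
})
gaze = ["fixation_count", "fixation_dur_ms", "pupil_diam_mm"]
workload = ["temporal_load", "frustration"]
print(df[gaze + workload].corr().loc[gaze, workload].round(2))  # Pearson r
```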

11.
Understanding the attentional behavior of the human visual system when visualizing a rendered 3D shape is of great importance for many computer graphics applications. Eye tracking remains the only way to explore this complex cognitive mechanism. Unfortunately, despite the large number of studies dedicated to images and videos, only a few eye-tracking experiments have been conducted using 3D shapes, so the factors that may influence human gaze in the specific setting of 3D rendering are still to be understood. In this work, we conduct two eye-tracking experiments involving 3D shapes, with both static and time-varying camera positions. We propose a method for mapping eye fixations (i.e., where humans gaze) onto the 3D shapes, with the aim of producing a publicly available benchmark of 3D meshes with fixation density maps. First, the collected data is used to study the influence of shape, camera position, material, and illumination on visual attention. We find that material and lighting have a significant influence on attention, as does the camera path in the case of dynamic scenes. Then, we compare the performance of four representative state-of-the-art mesh saliency models in predicting ground-truth fixations using two different metrics. We show that, even combined with a center-bias model, 3D saliency algorithms remain poor at predicting human fixations. To explain their weaknesses, we provide a qualitative analysis of the main factors that attract human attention. Finally, we compare human-eye fixations with Schelling points and show that their correlation is weak.

12.

We investigate the use of commercial off-the-shelf (COTS) eye-trackers to automatically detect mind wandering (a shift in attention from task-related to task-unrelated thoughts) during computerized learning. Study 1 (N = 135 high-school students) tested the feasibility of COTS eye tracking while students learned biology with an intelligent tutoring system called GuruTutor in their classroom. We successfully tracked eye gaze in 75% (both eyes tracked) and 95% (one eye tracked) of the cases for the 85% of sessions where gaze was successfully recorded. In Study 2, we used these data to build automated, student-independent detectors of mind wandering, obtaining accuracies (mind wandering F1 = 0.59) substantially better than chance (F1 = 0.24). Study 3 investigated the context-generalizability of mind wandering detectors, finding that models trained on data collected in a controlled laboratory generalized to the classroom more successfully than the reverse. Study 4 investigated gaze- and video-based mind wandering detection, finding that gaze-based detection was superior and that multimodal detection yielded an improvement only in limited circumstances. We tested live mind wandering detection on a new sample of 39 students in Study 5 and found that detection accuracy (mind wandering F1 = 0.40) was considerably above chance (F1 = 0.24), albeit lower than the offline detection accuracy from Study 1 (F1 = 0.59), a finding attributable to the handling of missing data. We discuss our next steps towards developing gaze-based attention-aware learning technologies to increase engagement and learning by combating mind wandering in classroom contexts.


13.
Objective: Screening for high-risk autism spectrum disorder (HR-ASD) relies on clinicians' assessments and questionnaire scales; traditional screening is inefficient, and an efficient automatic screening tool is urgently needed. To meet this need, this paper proposes an automatic ASD screening method based on facial-expression analysis of infants and toddlers. Method: Thirty infants aged 8-18 months were enrolled, including 10 suspected ASD cases (HR-ASD) and 20 typically developing infants. The still-face paradigm was used to elicit the infants' emotion-regulation behaviors under social stress. A deep spatiotemporal feature learning network for infant video expression recognition was proposed: a spatial feature learning model was first pretrained on the large-scale public dataset AffectNet, and a spatiotemporal feature learning model was then trained on a self-built infant facial-expression video dataset, RCLS&NBH+ (Research Center of Learning Science & Nanjing Brain Hospital dataset+), yielding a fairly accurate infant expression recognition model. Based on first-order statistics of the model's deep feature sequences, an association was established between expressive behavioral symptoms under social stress and mental-health status, and machine learning was applied to realize automatic screening. Results: 1) Based on manual annotation of the infants' expressions, during the 1-minute still-face episode the high-risk group showed a longer duration of neutral expression than the typically developing controls (p < 0.01), while no statistically significant differences were found for the other expressions. 2) The proposed deep spatiotemporal feature learning network achieved an overall average recognition rate of 87.1% on this study's facial-expression video dataset of 30 infants; predictions for the three expression categories were highly consistent with the manual annotations, with a kappa coefficient of 0.63 and a Pearson correlation of 0.67. 3) Mental-health status prediction based on first-order statistics of the deep facial-expression feature sequences achieved 70% sensitivity, 90% specificity, and 83.3% classification accuracy (permutation test, p < 0.05). Conclusion: The proposed automatic mental-health status prediction model based on first-order statistics of infants' deep facial-expression feature sequences is effective and can help realize automatic screening for high-risk ASD.
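A minimal sketch of the final stage of the described pipeline, assuming a deep backbone (not implemented here) that yields per-frame features: summarize each video's feature sequence by first-order statistics and classify. All data below are toy stand-ins.

```python
# Sketch: first-order statistics over a per-frame deep feature sequence,
# fed to a classifier. Backbone features are simulated with random data.
import numpy as np
from sklearn.svm import SVC

def first_order_stats(feats: np.ndarray) -> np.ndarray:
    """feats: (frames, dims). Concatenate per-dimension mean and std."""
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

rng = np.random.default_rng(0)
X = np.stack([first_order_stats(rng.normal(size=(90, 64))) for _ in range(30)])
y = rng.integers(0, 2, size=30)          # 1 = HR-ASD, 0 = TD (toy labels)
clf = SVC(kernel="linear").fit(X, y)
print(clf.score(X, y))                   # training accuracy of the toy model
```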

14.
Active tracking of foveated feature clusters using affine structure
We describe a novel method of obtaining a fixation point on a moving object for a real-time gaze control system. The method uses a real-time implementation of a corner detector and tracker, and reconstructs the image position of the desired fixation point from a cluster of corners detected on the object, using the affine structure available from two or three views. The method is fast, reliable, viewpoint invariant, and insensitive to occlusion and to individual corner dropout or reappearance. We compare two- and three-dimensional forms of the algorithm, present results for the method in use with a high-performance head/eye platform, and compare the results with two naive fixation methods.
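A minimal sketch of the affine-transfer idea (not the authors' real-time implementation): estimate an affine map from tracked corner correspondences by least squares and use it to re-locate the fixation point in the new view. The corner data are a toy example.

```python
# Sketch: re-locating a fixation point after object motion via an affine
# transform fitted to tracked corner correspondences.
import numpy as np

def affine_from_corners(pts_a: np.ndarray, pts_b: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine A mapping pts_a -> pts_b; both (N, 2), N >= 3."""
    ones = np.ones((len(pts_a), 1))
    A, *_ = np.linalg.lstsq(np.hstack([pts_a, ones]), pts_b, rcond=None)
    return A.T                                    # shape (2, 3)

corners_t0 = np.array([[10., 10.], [50., 12.], [30., 40.], [15., 35.]])
corners_t1 = corners_t0 + [5., 2.]                # pure translation in this toy case
A = affine_from_corners(corners_t0, corners_t1)
fix_t0 = np.array([25., 25., 1.])                 # fixation point in view 0
print(A @ fix_t0)                                 # predicted fixation in view 1
```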

15.
The ability to follow the gaze of conspecifics is a critical component in the development of social behaviors, and many efforts have been directed to studying the earliest age at which it begins to develop in infants. Developmental and neurophysiological studies suggest that imitative learning takes place once gaze-following abilities are fully established and joint attention can support the shared behavior required by imitation. Accordingly, the acquisition of gaze following should be precursory to most machine learning tasks, and imitation learning can be seen as the earliest modality for acquiring meaningful gaze shifts and for understanding the structural substrate of fixations. Indeed, if some early attentional process, based on a suitable combination of gaze shifts and fixations, could be learned by the robot, then several demonstration-learning tasks would be dramatically simplified. In this paper, we describe a methodology for learning gaze shifts based on imitation of gaze following with a gaze machine, which we purposefully introduced to make the robot's gaze imitation conspicuous. The machine allows the robot to share and imitate the gaze shifts and fixations of a caregiver through a mutual vergence. This process is then generalized by learning both the salient scene features toward which gaze is directed and the way saccadic programming is attained. Salient features are modeled by a family of Gaussian mixtures. These, together with learned transitions, are generalized via hidden Markov models to account for humanlike gaze shifts, making it possible to discriminate salient locations.

16.
Spatial disorientation (SD) can lead to serious aviation accidents. To deal with this predicament, verbal reports (VR), a procedure that requires pilots to verbalize flight information during SD situations, are carried out in the Republic of Korea Air Force. However, the impact of VR execution on visual attention under SD situations remains unexplored. The purpose of this study is therefore to systematically and objectively analyze the effect of VR execution on pilots' visual attention across different SD illusion types using eye-tracking measures. The experiment was conducted with 25 male Air Force fighter pilots (14 in the VR group and 11 in the non-VR group) using a flight simulator and an eye-tracking device. VR execution and areas of interest (AOIs) served as the independent variables, while eye-tracking metrics and a 7-point perceived attentional load scale served as the dependent variables. The pilots performed a flight task comprising six types of illusion-provoking SD scenarios in a single flight profile (15 min). Findings showed that, in all SD scenarios, gaze distribution in the VR group tended to focus less on areas outside the AOIs than in the non-VR group. In addition, the fixation frequency in the attitude-related AOI on the head-up display for the Coriolis, false horizon, and graveyard spin illusions was significantly higher for the VR group than for the non-VR group. On the other hand, there were no significant differences in perceived attentional load between the VR and non-VR groups. These results suggest that VR execution can be recommended as a means to improve visual attention and to counteract SD effects.

17.
Investigation into robot-assisted intervention for children with autism spectrum disorder (ASD) has gained momentum in recent years. Therapists involved in interventions must overcome the communication impairments generally exhibited by children with ASD by adeptly inferring the affective cues of the children to adjust the intervention accordingly. Similarly, a robot must also be able to understand the affective needs of these children (an ability that the current robot-assisted ASD intervention systems lack) to achieve effective interaction that addresses the role of affective states in human-robot interaction and intervention practice. In this paper, we present a physiology-based affect-inference mechanism for robot-assisted intervention where the robot can detect the affective states of a child with ASD as discerned by a therapist and adapt its behaviors accordingly. This paper is the first step toward developing "understanding" robots for use in future ASD intervention. Experimental results with six children with ASD from a proof-of-concept experiment (i.e., a robot-based basketball game) are presented. The robot learned the individual liking level of each child with regard to the game configuration and selected appropriate behaviors to present the task at his/her preferred liking level. Results show that the robot automatically predicted individual liking level in real time with 81.1% accuracy. This is the first time, to our knowledge, that the affective states of children with ASD have been detected via a physiology-based affect recognition technique in real time. This is also the first time that the impact of affect-sensitive closed-loop interaction between a robot and a child with ASD has been demonstrated experimentally.

18.
Individuals with Autism Spectrum Disorders (ASD) frequently engage in stereotyped and repetitive motor movements. Automatically detecting these movements using comfortable, miniature wireless sensors could advance autism research and enable new intervention tools for the classroom that help children and their caregivers monitor, understand, and cope with this potentially problematic class of behavior. We present activity recognition results for stereotypical hand flapping and body rocking using accelerometer data collected wirelessly from six children with ASD repeatedly observed by experts in real classroom settings. An overall recognition accuracy of 88.6% (TP: 0.85; FP: 0.08) was achieved using three sensors. We also present pilot work in which non-experts use software on mobile phones to annotate stereotypical motor movements for classifier training. Preliminary results indicate that non-expert annotations for training can be as effective as expert annotations. Challenges encountered when applying machine learning to this domain, as well as implications for the development of real-time classroom interventions and research tools, are discussed.
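A minimal sketch of windowed feature extraction from a 3-axis accelerometer stream, the usual front end for this kind of activity recognition. The window length and feature choices are illustrative assumptions, not the study's exact configuration.

```python
# Sketch: sliding-window time-domain features from 3-axis accelerometer data
# for stereotypy recognition (e.g., hand flapping, body rocking).
import numpy as np

def window_features(acc: np.ndarray, fs: int = 50, win_s: float = 1.0):
    """acc: (samples, 3). Yields per-window axis means, stds, and magnitude energy."""
    step = int(fs * win_s)
    for start in range(0, len(acc) - step + 1, step):
        w = acc[start:start + step]
        mag = np.linalg.norm(w, axis=1)
        yield np.concatenate([w.mean(axis=0), w.std(axis=0), [np.mean(mag**2)]])

acc = np.random.default_rng(0).normal(size=(500, 3))   # 10 s of fake data at 50 Hz
X = np.stack(list(window_features(acc)))
print(X.shape)                                          # (10, 7) feature matrix
```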

19.
The authors discuss three algorithms related to the blending of a single scene from multiple frames acquired by a space-variant sensor. Given a series of space-variant, contour-based scenes with different fixation points, they show how to fuse these into a single multiscan view that incorporates the information present in the individual scans. They demonstrate an (attentional) algorithm that recursively examines the current knowledge of the scene in order to choose the next fixation point so as to focus attention on regions of maximum boundary curvature. They discuss a simple metric for evaluating convergence over a scan path, which may be used to compare the performance of various attentional algorithms. Finally, they discuss their work in light of both machine and biological vision.

20.
Human-Robot Interaction (HRI) is a growing field of research that targets the development of robots which are easy to operate, more engaging, and more entertaining. Natural, human-like behavior is considered by many researchers to be an important target of HRI. Research in human-human communication has revealed that gaze control is one of the major interactive behaviors used by humans in close encounters; human-like gaze control is therefore one of the important behaviors a robot should have in order to interact naturally with human partners. To develop human-like natural gaze control that integrates easily with the robot's other behaviors, a flexible robotic architecture is needed. Most available robotic architectures were developed with autonomous robots in mind. Although robots developed for HRI are usually autonomous, their autonomy is combined with interactivity, which adds further challenges to the design of the robotic architectures supporting them. This paper reports the development and evaluation of two gaze controllers using a new cross-platform robotic architecture for HRI applications called EICA (the Embodied Interactive Control Architecture), designed to meet those challenges, with emphasis on how low-level attention focusing and action integration are implemented. Evaluation of the gaze controllers revealed human-like behavior in terms of mutual attention, gaze toward the partner, and mutual gaze. The paper also reports a novel floating-point genetic algorithm (FPGA) for learning the parameters of the various processes of the gaze controller.
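A minimal sketch of a floating-point (real-coded) genetic algorithm of the kind the paper names, with blend crossover and Gaussian mutation; the fitness function here is a toy placeholder standing in for a gaze-quality measure, not the paper's objective.

```python
# Sketch: real-coded GA for tuning a parameter vector (e.g., gaze-controller
# gains). Selection: truncation; crossover: blend; mutation: Gaussian.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, GENS, SIGMA = 40, 6, 100, 0.1

def fitness(params: np.ndarray) -> float:
    return -np.sum((params - 0.3) ** 2)          # toy stand-in objective

pop = rng.uniform(0, 1, size=(POP, DIM))
for _ in range(GENS):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]          # keep the best half
    pairs = rng.integers(0, len(parents), size=(POP, 2))
    alpha = rng.uniform(size=(POP, 1))                     # blend crossover
    pop = alpha * parents[pairs[:, 0]] + (1 - alpha) * parents[pairs[:, 1]]
    pop += rng.normal(0, SIGMA, size=pop.shape)            # Gaussian mutation
    pop = np.clip(pop, 0, 1)

best = pop[np.argmax([fitness(p) for p in pop])]
print(best.round(3))
```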
