Similar Documents
18 similar documents found (search time: 375 ms)
1.
To simplify calibration in head-mounted gaze tracking systems, a one-time calibration method based on iris recognition is proposed that requires no additional hardware. The user calibrates only on first use of the system; on every later use, the system automatically performs iris recognition, retrieves the eye-image data recorded at the first calibration, and computes the relative rotation angle and offset between the current eye image and the calibration-time eye image to obtain the user's current calibration parameters. Experimental results show that, without affecting the system's original accuracy, the method removes the tedious calibration otherwise required at every use, greatly simplifying calibration and reducing the complexity of using the system.

2.
An embedded head-mounted gaze tracking control system is designed. Taking a real moving object as the controlled object and the human eye as the control source, it implements gaze tracking with image-processing algorithms and a coordinate-mapping model. By comparing and optimizing centroid-detection and ellipse-fitting algorithms, the system locates the pupil centre with an angular error within 1.4°; a least-squares transformation of gaze coordinates into space then drives the controlled object to the fixation point. Experimental results show that, with the eye within 2 m of the controlled object, the positioning error of the object is within 5 cm and the system response time is within 0.3 s, meeting everyday operating requirements.
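The least-squares mapping step in the abstract above can be sketched in outline. This is an illustrative pure-Python example, not the authors' code: the calibration points and the first-order (affine) form of the map are invented for demonstration, and a real system would fit a richer model to many more calibration samples.

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def fit_affine(pupil, screen):
    """Least-squares fit of s = a*px + b*py + c, one coefficient row per screen axis."""
    rows = [[px, py, 1.0] for px, py in pupil]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    coeffs = []
    for axis in range(2):
        Atb = [sum(r[i] * s[axis] for r, s in zip(rows, screen)) for i in range(3)]
        coeffs.append(solve3(AtA, Atb))
    return coeffs

def map_gaze(coeffs, px, py):
    """Map a pupil-centre coordinate to a screen coordinate."""
    return tuple(a * px + b * py + c for (a, b, c) in coeffs)

# Invented calibration data: four pupil positions, with screen targets
# generated here from a known affine relation purely for demonstration.
pupil = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
screen = [(2 * px + 3 * py + 10, -px + 4 * py + 5) for px, py in pupil]
coeffs = fit_affine(pupil, screen)
```

Once the coefficients are fitted from calibration fixations, `map_gaze` converts each tracked pupil centre into an on-screen fixation estimate.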

3.
To estimate the on-screen fixation point accurately under free head movement, a head-mounted gaze tracking system is designed around a scene camera and infrared LEDs at the four corners of the screen. The cross-ratio, an invariant of projective space, converts the fixation position on a reference screen into the actual position on the current screen, and the visual error caused by head rotation is compensated in the scene-camera coordinate frame. Experimental results show that the method keeps the error within a small, acceptable range while avoiding complex multi-camera rigs and stereo-matching computation.
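The cross-ratio invariance that this abstract relies on can be shown in a few lines. This is a hypothetical sketch, not the paper's implementation: the points and the projective map below are made up, and the invariance is what lets a gaze position measured against a reference screen be transferred to the current screen.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio ((c-a)/(c-b)) / ((d-a)/(d-b)) of four collinear points."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x, m):
    """1-D projective (Moebius) map x -> (p*x + q) / (r*x + s)."""
    p, q, r, s = m
    return (p * x + q) / (r * x + s)

# Four collinear points and an arbitrary invertible projective map (invented).
pts = [0.0, 1.0, 2.0, 4.0]
m = (2.0, 1.0, 0.5, 3.0)
before = cross_ratio(*pts)
after = cross_ratio(*(project(x, m) for x in pts))
# 'before' and 'after' agree: the cross-ratio survives the projection.
```

Because the value is unchanged by any projective transform, the cross-ratio measured on the reference screen pins down the corresponding position on the current screen without reconstructing the full camera geometry.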

4.
A new gaze tracking method based on the pupil-corneal reflection technique   (cited by 4: 0 self-citations, 4 by others)
To address several problems of existing single-camera, single-light-source gaze tracking systems, namely limited accuracy, restricted head movement, and complex calibration, a new gaze tracking method based on the pupil-corneal reflection (PCCR) technique is proposed. A pupil-edge filtering algorithm (RDPEF) and a three-channel pseudo-colour map (TCPCM) resolve the large pupil-localization error and poor pupil-tracking robustness under near-infrared illumination, improving the accuracy of gaze-feature extraction. A head-position compensation method and an individual-difference transformation model let the 2D mapping model tolerate head movement while requiring only single-point calibration. The method improves the accuracy and applicable range of single-camera gaze tracking and provides an effective low-cost solution for gaze tracking systems aimed at human-computer interaction.
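The PCCR idea underlying this abstract can be sketched generically. This is an illustration of the technique's basic shape, not the paper's RDPEF/TCPCM pipeline: the pupil-to-glint feature vector feeds a second-order polynomial mapping whose coefficients, here invented identity placeholders, would normally come from calibration.

```python
def pccr_vector(pupil, glint):
    """Gaze feature: pupil centre minus corneal-reflection (glint) centre.
    This difference is largely insensitive to small head translations."""
    return (pupil[0] - glint[0], pupil[1] - glint[1])

def gaze_from_vector(v, coeffs):
    """Second-order polynomial mapping; one coefficient row per screen axis."""
    vx, vy = v
    terms = (1.0, vx, vy, vx * vy, vx * vx, vy * vy)
    return tuple(sum(c * t for c, t in zip(row, terms)) for row in coeffs)

# Placeholder coefficients (identity on vx, vy) purely for demonstration;
# real coefficients come from a calibration procedure.
demo_coeffs = ((0.0, 1.0, 0.0, 0.0, 0.0, 0.0),
               (0.0, 0.0, 1.0, 0.0, 0.0, 0.0))
```

The higher-order terms give the mapping enough freedom to absorb corneal curvature and camera perspective; the paper's contribution is making such a 2D mapping survive head movement with only single-point calibration.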

5.
Traditional head-mounted gaze tracking systems need an extra head-position tracker or other auxiliary equipment to determine the gaze direction. To address this, a fixation-point estimation method based on marker detection is proposed. The method detects markers by computer vision, establishes the spatial relationship between the scene image and the computer screen in the real scene, and maps fixation coordinates from the scene image onto the screen. Experimental results show that the method is simple and practical and can fairly accurately estimate the user's ...

6.
To address the poor video-signal acquisition and low tracking accuracy of traditional high-speed mobile-viewpoint video surveillance tracking systems, a new viewpoint video surveillance tracking system is designed on the basis of blockchain technology. The hardware design comprises three modules: video monitoring, data positioning, and video surveillance tracking. In the video monitoring module, the system is governed according to the structure and properties of the hardware components, with a BDL9830QD monitor strengthening internal monitoring and maintaining central surveillance at all times. In the data positioning module, a miniature data locator is chosen according to the state of the data to track and localize signals, calibrate positioning targets, and adjust the data state. In the video surveillance tracking module, an HX-YT01 automatic video tracker intensifies tracking of the data signal to achieve precise surveillance tracking, completing the hardware design. The application software is then restructured around the characteristics of the hardware components and a blockchain space is constructed, completing the overall system design. Experimental results show that the blockchain-based high-speed mobile-viewpoint system improves tracking capability by 15.21% and tracking accuracy by 22.8%. The design substantially strengthens surveillance tracking performance while reducing operation time and raising overall running efficiency, better serving users.

7.
赵昕晨  杨楠 《计算机应用》2020,40(11):3295-3299
Real-time gaze tracking is a key technology for gaze-operated systems. Compared with eye-tracker-based approaches, webcam-based approaches are low-cost and highly general. To address the low accuracy of existing camera-based algorithms, which consider only eye-image features, an optimization that introduces head-pose analysis into gaze tracking is proposed. First, head-pose features are constructed from facial-landmark detection results, giving each calibration sample a head-pose context. Then a new similarity measure is derived to compare head-pose contexts. Finally, during tracking, the calibration data are filtered by head-pose similarity, and only samples whose head pose is close to that of the current input frame are used for prediction. Extensive experiments on data from groups with different characteristics show that, compared with WebGazer, the proposed algorithm reduces the mean error by 58-63 px. The algorithm effectively improves the accuracy and stability of tracking results and broadens the application scenarios of webcam-based gaze tracking.
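The calibration-filtering step described above can be sketched as follows. This is an assumption-laden illustration, not the authors' code: plain cosine similarity over an invented 3-component pose vector stands in for the paper's similarity measure, and the threshold is arbitrary.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two pose vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_calibration(samples, current_pose, threshold=0.95):
    """Keep only calibration samples whose head pose resembles the current one."""
    return [s for s in samples
            if cosine_sim(s["pose"], current_pose) >= threshold]

# Invented calibration samples: each pairs a head-pose vector with whatever
# gaze data the predictor needs (omitted here for brevity).
samples = [{"id": 1, "pose": (1.0, 0.0, 0.0)},
           {"id": 2, "pose": (0.0, 1.0, 0.0)}]
```

Filtering before prediction means the gaze model is only ever interpolating among calibration fixations recorded under a similar head pose, which is what reduces the pose-induced error the abstract targets.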

9.
A survey of feature-based gaze tracking methods   (cited by 2: 0 self-citations, 2 by others)
Feature-based gaze tracking methods are surveyed. The development, related work, and current state of gaze tracking research are first reviewed; feature-based methods are then divided into two broad classes, 2D and 3D gaze tracking, and analysed in depth in terms of hardware configuration, main sources of error, the influence of head movement, and respective strengths and weaknesses. Representative feature-based methods from the past five years are compared, and several key problems in 2D and 3D gaze tracking systems are discussed. Applications of gaze tracking in human-computer interaction, medicine, the military, intelligent transportation, and other fields are introduced. Finally, development trends and research hot spots of feature-based gaze tracking are summarized.

10.
Building on a review of the development of gaze tracking technology, this paper briefly describes its research directions and several principal gaze tracking methods. It focuses on the principle and hardware composition of gaze tracking based on the pupil-corneal reflection method, and in particular summarizes and analyses the relatively mature fixation-point estimation algorithms in existing gaze tracking systems. The accuracy and user freedom of 2D and 3D fixation-point estimation algorithms are further compared. Finally, the limitations of gaze tracking technology are pointed out, and its application prospects in human-computer interaction, intelligent machines, virtual reality, and other fields are discussed.

11.
程时伟  沈哓权  孙凌云  胡屹凛 《软件学报》2019,30(10):3037-3053
With advances in digital image processing and deepening research into computer-supported cooperative work, eye tracking is beginning to be applied to multi-user collaborative interaction. Existing eye tracking technology, however, mainly targets a single user: multi-user eye tracking architectures are immature, calibration is complex, and the recording, transmission, and shared visualization of eye tracking data all need further study. This work therefore establishes a gradient-optimization-based collaborative calibration model that simplifies multi-user calibration, and proposes a multi-user eye tracking architecture that optimizes the transmission and management of eye tracking data. It further explores how visualizations of eye tracking data affect users' visual attention in a collaborative setting, designing three visualization forms (dots, scatter points, and trails) and verifying that the dot form effectively improves the completion efficiency of multi-user collaborative search tasks. On this basis, an eye-tracking-based collaborative code review system is designed and implemented, supporting synchronized recording and distribution of multi-user eye tracking data during code review, with shared visualization based on real-time fixation points, code-line borders and background grey levels, and links between code lines. User experiments show that the mean time to find code defects falls by 20.1% compared with no shared visualization of eye tracking data, significantly improving collaborative efficiency and validating the approach.

12.
Performing typical network tasks such as node scanning and path tracing can be difficult in large and dense graphs. To alleviate this problem we use eye-tracking as an interactive input to detect tasks that users intend to perform and then produce unobtrusive visual changes that support these tasks. First, we introduce a novel fovea-based filter that dims edges with endpoints far removed from the user's view focus. Second, we highlight edges that are being traced at any given moment or have been the focus of recent attention. Third, we track recently viewed nodes and increase the saliency of their neighborhoods. All visual responses are unobtrusive and easily ignored, to avoid unintentional distraction and to account for the imprecise and low-resolution nature of eye-tracking. We also introduce a novel gaze-correction approach that relies on knowledge of the network layout to reduce eye-tracking error. Finally, we present results from a controlled user study showing that our methods led to a statistically significant accuracy improvement in one of two network tasks and that our gaze-correction algorithm enables more accurate eye-tracking interaction.
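The fovea-based edge dimming described above can be sketched as a per-edge opacity rule. This is a minimal illustration, not the authors' implementation: the fovea radius and dimmed alpha value are invented, and a real system would also fade smoothly over time and tolerate eye-tracker noise.

```python
import math

def edge_alpha(edge, gaze, fovea_radius=150.0, dim_alpha=0.15):
    """Full opacity if either endpoint lies near the gaze point, else dimmed.
    'edge' is a pair of (x, y) endpoints; 'gaze' is the current (x, y) fixation."""
    def dist(p):
        return math.hypot(p[0] - gaze[0], p[1] - gaze[1])
    near = min(dist(edge[0]), dist(edge[1])) <= fovea_radius
    return 1.0 if near else dim_alpha
```

Applying this rule on every frame keeps edges around the fixation fully visible while the rest of the graph recedes, which is the unobtrusive, easily ignored response the abstract calls for.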

13.
Eye gaze tracking is very useful for quantitatively measuring visual attention in virtual environments. However, most eye trackers have a limited tracking range, e.g., ±35° in the horizontal direction. This paper proposes a method that combines head pose tracking and eye gaze tracking to achieve a large tracking range in virtual driving simulation environments. Multiple parallel multilayer perceptrons reconstruct the relationship between head images and head poses, with head images represented by coefficients extracted through Principal Component Analysis. Eye gaze tracking provides precise results for the front view, while head pose tracking is better suited to tracking areas of interest than points of interest in the side view.

14.
As a new mode of information acquisition and human-computer interaction, gaze tracking has become a hot research direction in computer vision, and gaze estimation is its core technology. To address the complex calibration and restricted head movement of existing gaze estimation methods, an improved method based on the 2D pupil-corneal reflection technique is proposed. With a single camera and a single light source, single-point-calibration gaze estimation is achieved by building a pupil-corneal reflection model and compensating for individual-difference errors and head-movement errors. Experimental results show that, within a certain range, head movement causes no obvious drop in the accuracy of the estimated gaze.

15.
The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system is proposed to accommodate the user's head movements in such an interface. The proposed calibration method tracks the three-dimensional position and orientation of the user's head accurately; we evaluate its performance and the influence of the configuration of calibration points on that performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e. position and orientation) than the direct linear transformation method conventionally used for camera calibration. The results of this study can be applied to tracking head movements for eye-controlled human/computer interfaces and virtual reality technology.

16.
Eye contact and gaze awareness play a significant role in conveying emotions and intentions during face-to-face conversation. Humans can perceive each other's gaze quite naturally and accurately. However, gaze awareness and perception are ambiguous during video teleconferencing on computer-based devices (such as laptops, tablets, and smartphones). The reasons for this ambiguity are (i) the camera position relative to the screen and (ii) the 2D rendition of the 3D human face, i.e., a 2D screen cannot deliver an accurate gaze during video teleconferencing. To solve this problem, researchers have proposed various hardware setups with complex software algorithms. The most recent solutions for accurate gaze perception employ 3D interfaces, such as 3D screens and 3D face-masks. However, the video teleconferencing devices in common use today are smart devices with 2D screens, so there is a need to improve gaze awareness and perception on these devices. In this work, we revisit the question of how to improve a remote user's gaze awareness among his or her collaborators. Our hypothesis is that accurate gaze perception can be achieved by the 3D embodiment of a remote user's head gesture during video teleconferencing. We have prototyped an embodied telepresence system (ETS) for the 3D embodiment of a remote user's head. Our ETS is based on a 3-DOF neck robot with a mounted smart device (tablet PC). The electromechanical platform combined with a smart device is a novel setup for studying gaze awareness and perception on 2D-screen smart devices during video teleconferencing. Two important gaze-related issues are considered: (i) the 'Mona Lisa gaze effect', where the gaze appears directed at the observer regardless of the observer's position in the room, and (ii) 'gaze awareness/faithfulness', the ability to perceive an accurate spatial relationship between the observing person and the object of the actor's gaze.
Our results confirm that the 3D embodiment of a remote user's head not only mitigates the Mona Lisa gaze effect but also supports three levels of gaze faithfulness, hence accurately projecting the human gaze into distant space.

17.
This paper describes the development of an auto-stereoscopic three-dimensional (3D) display with an eye-tracking system that covers not only the X-axis (right-left) and Y-axis (up-down) plane directions but also the Z-axis (forward-backward) direction. The eye-tracking 3D system for the XY-plane directions that we had developed previously had a narrow 3D viewing space in the Z-axis direction because of 3D crosstalk variation on screen, which occurred whenever the viewer's eye position moved back and forth along the Z-axis. The reason was that the liquid crystal (LC) barrier pitch was fixed, so the LC barrier could control only the barrier aperture position. To solve this problem, we developed an LC barrier that controls the barrier pitch as well as the aperture position in real time, according to the viewer's eye position. As a result, the 3D viewing space has been expanded to 320-1016 mm from the display surface in the Z-axis direction and to within ±267 mm in the X-axis direction. The Y-axis direction requires no special consideration because of the stripe-shaped parallax barrier.
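The dependence of barrier pitch on viewing distance, which motivates the variable-pitch LC barrier above, can be illustrated with the textbook similar-triangles relation for a parallax barrier: P = 2*p*d / (d + g), for sub-pixel pitch p, barrier gap g, and viewing distance d. This is a generic sketch under invented parameters, not the display's actual design equation.

```python
def barrier_pitch(subpixel_pitch_mm, gap_mm, viewing_distance_mm):
    """Similar-triangles relation for a parallax barrier: rays from the two
    eyes through one barrier slit must land one sub-pixel apart on the panel,
    giving pitch P = 2*p*d / (d + g)."""
    return (2.0 * subpixel_pitch_mm * viewing_distance_mm
            / (viewing_distance_mm + gap_mm))

# Invented parameters: 0.1 mm sub-pixel pitch, 1 mm barrier gap. Over the
# paper's 320-1016 mm viewing range the required pitch changes slightly,
# which is why a fixed-pitch barrier narrows the Z-axis viewing space.
near_pitch = barrier_pitch(0.1, 1.0, 320.0)
far_pitch = barrier_pitch(0.1, 1.0, 1016.0)
```

The pitch difference across the range is tiny but nonzero; a fixed pitch is only exactly right at one distance, so crosstalk grows as the viewer moves in Z unless the barrier pitch is adjusted in real time.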

18.
Instructor behaviour is known to affect learning performance, but it is unclear which specific instructor behaviours optimize learning. We used eye-tracking technology and questionnaires to test whether an instructor's gaze guidance affected learners' visual attention, social presence, and learning performance, using four video lectures: declarative knowledge with and without the instructor's gaze guidance, and procedural knowledge with and without it. The results showed that the instructor's gaze guidance not only guided learners to allocate more visual attention to the corresponding learning content but also increased learners' sense of social presence and improved learning. Furthermore, the link between gaze guidance and better learning was especially strong for participants with a high sense of social connection to the instructor when learning procedural knowledge. The findings lead to a strong recommendation for educational practitioners: instructors should provide gaze guidance in video lectures for better learning performance.
