A total of 20 similar documents were retrieved (search time: 156 ms)
1.
To enable a robot to accurately locate its human partner during collaborative work, an embedded vision ranging system was designed with an ARM Cortex-A9 i.MX6Q microprocessor as the processing module and a USB camera as the acquisition module, realizing monocular vision ranging and localization. The design comprises two parts: eye-pupil localization and monocular ranging. For pupil localization, the USB camera captures face images, which the system processes to extract the precise pupil positions using the AdaBoost face detection algorithm and a gray-level projection eye localization algorithm. Ranging applies the similar-triangles principle of pinhole imaging: the distance from the target person to the camera lens is computed from the actual interpupillary distance and the interpupillary distance measured in the image, with the camera's intrinsic parameters calibrated using the Matlab calibration toolbox. Experimental results show that the system achieves high ranging accuracy and good robustness, and has promising applications.
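The similar-triangles range computation described above can be sketched in a few lines. This is a minimal illustration, assuming a focal length expressed in pixels (as produced by a Matlab-toolbox-style intrinsic calibration) and a nominal interpupillary distance; all numeric values below are hypothetical, not taken from the paper.

```python
# Pinhole similar-triangles range estimate:
#   real_ipd / Z = pixel_ipd / f   =>   Z = f * real_ipd / pixel_ipd
# Assumptions (not from the abstract): focal length in pixels from an
# intrinsic calibration, and a nominal adult interpupillary distance.

def estimate_distance_mm(focal_px: float, real_ipd_mm: float, pixel_ipd: float) -> float:
    """Distance from camera to face, from real and imaged interpupillary distance."""
    if pixel_ipd <= 0:
        raise ValueError("pixel IPD must be positive")
    return focal_px * real_ipd_mm / pixel_ipd

# Hypothetical example: f = 800 px, true IPD = 63 mm, 50.4 px between pupils
print(estimate_distance_mm(800.0, 63.0, 50.4))  # 1000.0 (mm)
```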
2.
3.
To overcome the susceptibility of traditional integral projection methods to interference from eyebrows, eyelashes, shadows, occlusion, and noise, a precise eye localization method is proposed that combines gradient integral projection with the expectation-maximization (EM) algorithm; it can segment the eye region from a face image and precisely locate the eye positions. First, a new gradient operator computes the row gradient integral projection of the face image to coarsely locate the eye region; next, the column gradient integral projection function of the eye region is computed, and the EM algorithm fits the resulting projection curve with two Gaussian curves, from which the eye windows are precisely segmented; finally, a proposed weighted-centroid method precisely locates both eyes within the obtained windows. Experiments on the YaleB face database and a self-collected database show that the method is insensitive to eyebrow and noise interference, effectively handles eyelid and eyelash occlusion, and is robust to varying illumination conditions and head poses.
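The row gradient integral projection step can be illustrated on a toy image: summing absolute vertical gray-level gradients per row produces peaks at the edges of dark horizontal structures such as the eye band. This sketch shows only the coarse projection idea; the paper's specific gradient operator, the column projection, and the EM two-Gaussian fit are not reproduced, and the synthetic "eye band" below is a stand-in for a real face image.

```python
import numpy as np

def row_gradient_projection(img: np.ndarray) -> np.ndarray:
    """Sum of absolute vertical gray-level differences per row."""
    grad = np.abs(np.diff(img.astype(float), axis=0))
    return grad.sum(axis=1)

# Toy 'face': uniform background with a darker horizontal eye band
img = np.full((8, 10), 200.0)
img[3:5, :] = 50.0            # eye band occupies rows 3-4
proj = row_gradient_projection(img)
# The projection peaks at the band's upper and lower edges
print(int(proj.argmax()))     # 2 (transition into the dark band)
```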
4.
5.
To address the weak interference resistance of traditional gray-level projection methods, a precise eye localization method based on regional projection is proposed. Considering the two-dimensional nature of projection, the eye image is divided into non-overlapping regions in the horizontal and vertical directions; the gray values within each region are projected to obtain candidate pupil regions, which are then expanded into pupil windows, and the pupil center is precisely located via boundary tracking based on gray-level characteristics. A criterion for judging eye localization accuracy is given, using C...
6.
7.
To counter interference from facial expression, illumination, and eyeglass occlusion in eye detection, an eye detection and pupil localization method based on Gabor filtering and K-medoid cluster analysis is proposed. First, a Gabor filter is designed around the robust transverse features of the eye region, so as to emphasize those transverse features; then, a clustering algorithm combining the Gabor-filtered eye features with the K-medoid algorithm is designed to detect the eyes; on the basis of eye detection, a pupil localization method is designed using gray-level distribution characteristics and an entropy function. Experiments on the BioID face database and the FERET color face database show that the method achieves a 97.8% detection rate on the 3470 face images of the two databases, and still reaches 95.5% pupil localization accuracy under a small error threshold (0.15).
8.
To speed up iris localization, a fast iris localization algorithm combining coarse and fine localization is proposed. First, the eye image is segmented by thresholding to extract the pupil, and a morphological opening operation removes noise points such as eyelashes outside the binarized pupil region; then, line scanning of the pupil region extracts pupil boundary points, and a least-squares fit of those boundary points coarsely locates the inner edge; finally, a circular gradient operator precisely locates the inner and outer iris edges. In localization experiments on more than 100 iris images from the CASIA (version 1.0) iris database, the proposed algorithm took 1.38 s on average, versus 9.8 s for the circular gradient operator and 14.3 s for the Hough transform method. The results show that the algorithm localizes iris images of varying quality quickly, accurately, and robustly.
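The coarse inner-edge step, a least-squares fit of the scanned boundary points, can be sketched with the standard algebraic (Kåsa) circle fit; whether the paper uses exactly this formulation is not stated, so treat the formulation and the sample data as assumptions.

```python
import numpy as np

def fit_circle_lsq(x, y):
    """Algebraic (Kåsa) least-squares circle fit.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense;
    center is (-D/2, -E/2), radius is sqrt(cx^2 + cy^2 - F).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Hypothetical boundary points sampled from a circle at (30, 40), radius 10
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle_lsq(30 + 10 * np.cos(t), 40 + 10 * np.sin(t))
print(cx, cy, r)  # approximately 30.0 40.0 10.0
```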
9.
To meet the requirements of speed, accuracy, and low computational load for a fatigue detection system that monitors cockpit crew behavior in real time during civil aircraft flight, an algorithm combining the Hough transform with gradient histograms is proposed: after face detection with the AdaBoost algorithm, the Hough transform performs an initial eye detection, and a proposed simplified gradient histogram feature is then used to precisely locate the eyes. The algorithm feeds the simplified gradient histogram features of the target regions into an SVM classifier to screen the multiple "possible circles" remaining after Hough detection and reject the non-eye circles. Comparative experiments show that the combined eye localization algorithm detects eyes quickly; coarse localization narrows the search range of the Hough circle detection, reducing the computational load and shortening the running time, while the SVM and the improved gradient information raise the localization accuracy, and the detection results are stable and robust.
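The screening of Hough "possible circles" with an SVM can be sketched as a linear decision function applied to each candidate's feature vector. The weights, bias, and the three-dimensional "gradient histogram" features below are purely illustrative placeholders for a trained classifier, not values from the paper.

```python
import numpy as np

def filter_candidates(circles, features, w, b):
    """Keep the Hough candidate circles whose feature vector the linear
    SVM decision function w.x + b scores as eye-like (score > 0)."""
    scores = features @ w + b
    return [c for c, s in zip(circles, scores) if s > 0]

# Two candidate circles (x, y, r) with toy gradient-histogram features;
# w and b stand in for a trained SVM (illustrative values only).
circles = [(40, 30, 9), (70, 55, 9)]
features = np.array([[0.8, 0.1, 0.1],   # eye-like distribution
                     [0.1, 0.2, 0.7]])  # non-eye distribution
w, b = np.array([2.0, 0.0, -2.0]), -0.5
print(filter_candidates(circles, features, w, b))  # [(40, 30, 9)]
```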
10.
11.
Eye-gaze human-computer interaction technology based on the infrared television method  (Total citations: 6; self-citations: 0; citations by others: 6)
This paper introduces the working principle, system composition, gaze direction detection method, and application areas of human-computer interaction technology based on the infrared television method. Image analysis is used to detect and judge the gaze direction, a method is proposed to correct the gaze direction under small head movements during use, and control of computer peripherals through eye gaze is described.
12.
Very few attempts, if any, have been made to use visible light in corneal reflection approaches to the problem of gaze tracking. The reasons usually given to justify the limited application of this type of illumination are that the required image features are less accurately depicted and that visible light may disturb the user. The aim of this paper is to show that it is possible to overcome these difficulties and build an accurate and robust gaze tracker under these circumstances. For this purpose, visible light is used to obtain the corneal reflection, or glint, in a way analogous to the well-known pupil center corneal reflection technique. Due to the lack of contrast, the center of the iris is tracked instead of the center of the pupil. The experiments performed in our laboratory have shown very satisfactory results, allowing free head movement with no need for recalibration.
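The glint-to-iris-center vector at the heart of this approach can be sketched as below. It is the visible-light analogue of the pupil-center corneal-reflection (PCCR) vector, with the iris center substituted for the pupil center as the abstract describes; the pixel coordinates are hypothetical, and the calibrated mapping to screen coordinates is not reproduced.

```python
def gaze_vector(iris_center, glint):
    """Glint-to-iris offset in image coordinates.

    For small head translations this offset is approximately invariant,
    which is why PCCR-style trackers map it (after calibration) to a
    point of regard on the screen.
    """
    return (iris_center[0] - glint[0], iris_center[1] - glint[1])

# Illustrative pixel coordinates only
print(gaze_vector((312, 240), (305, 236)))  # (7, 4)
```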
13.
To meet the need of adaptive human-machine interfaces to predict users' behavioral intent, a method for classifying human-computer interaction behavior and predicting intent based on eye-movement features is proposed. A simplified interface model is built and user behavioral intent is divided into five classes; a visual interaction experiment is designed to collect eye-movement feature data under the corresponding intent states; a classification and prediction model is built with the SVM (Support Vector Machine) algorithm, with the eye-movement feature components selected through difference analysis. Ultimately, 15 components, namely the X coordinate, Y coordinate, fixation duration, saccade amplitude, and pupil diameter of three consecutive fixation points, are chosen as feature parameters, yielding good prediction performance with an accuracy above 90%.
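The 15-dimensional feature vector (five eye-movement measures over three consecutive fixations) can be assembled as follows. The field names, units, and values are hypothetical, chosen only to illustrate the layout before the vector is passed to an SVM.

```python
def make_feature_vector(fixations):
    """Flatten 3 consecutive fixations into a 15-D feature vector:
    (x, y, duration, saccade amplitude, pupil diameter) per fixation."""
    assert len(fixations) == 3, "exactly three consecutive fixations expected"
    vec = []
    for f in fixations:
        vec.extend([f["x"], f["y"], f["dur"], f["saccade"], f["pupil"]])
    return vec

# Hypothetical normalized gaze data for three consecutive fixations
fx = [{"x": 0.2, "y": 0.3, "dur": 180, "saccade": 2.1, "pupil": 3.4},
      {"x": 0.5, "y": 0.3, "dur": 220, "saccade": 1.5, "pupil": 3.5},
      {"x": 0.6, "y": 0.7, "dur": 140, "saccade": 3.0, "pupil": 3.3}]
print(len(make_feature_vector(fx)))  # 15
```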
14.
Novel eye gaze tracking techniques under natural head movement  (Total citations: 1; self-citations: 0; citations by others: 1)
Most available remote eye gaze trackers have two characteristics that hinder their wide adoption as important computer input devices for human-computer interaction. First, they have to be calibrated for each user individually; second, they have low tolerance for head movement and require users to hold their heads unnaturally still. In this paper, by exploiting the eye anatomy, we propose two novel solutions that allow natural head movement and reduce the calibration procedure to a single session for each new individual. The first technique estimates the 3-D eye gaze directly. In this technique, the cornea of the eyeball is modeled as a convex mirror, and via the properties of a convex mirror, a simple method is proposed to estimate the 3-D optic axis of the eye. The visual axis, which is the true 3-D gaze direction of the user, can then be determined once the angular deviation between the visual axis and the optic axis is known from a simple calibration procedure. Therefore, the gaze point on an object in the scene can be obtained by simply intersecting the estimated 3-D gaze direction with the object. Unlike the first technique, the second technique does not estimate the 3-D eye gaze directly; instead, the gaze point on an object is estimated implicitly from a gaze mapping function. In addition, a dynamic computational head compensation model is developed to automatically update the gaze mapping function whenever the head moves. Hence, the eye gaze can be estimated under natural head movement, while the calibration procedure is still reduced to a single session for each new individual. The advantage of the proposed techniques over current state-of-the-art eye gaze trackers is that they can estimate the user's gaze accurately under natural head movement, without requiring gaze calibration every time before use.
Our proposed methods will improve the usability of eye gaze tracking technology, and we believe they represent an important step toward eye trackers being accepted as natural computer input devices.
15.
16.
17.
In this paper we investigate the accuracy of estimating a person's direction of gaze from remote imaging. The problem is addressed by a person-independent, multistage fusion approach for eye landmark localization, followed by eye region analysis for actual gaze recognition. We test the proposed landmark localization system on three databases, showing superior accuracy to state-of-the-art solutions. Finally, we show that, inspired by human perception, superior performance is achievable when estimating gaze direction by incorporating the location of the eyebrows. Given these results, we argue that computer vision systems for gaze recognition should mimic human perception and incorporate the eyebrows.
18.
19.
20.
Chul Woo Cho, Ji Woo Lee, Kwang Yong Shin, Eui Chul Lee, Kang Ryoung Park, Heekyung Lee, Jihun Cha. ETRI Journal, 2012, 34(4): 542-552
In this paper, a gaze estimation method is proposed for use with a large-sized display at a distance. Our research has the following four novelties: this is the first study on gaze tracking for large-sized displays and large Z (viewing) distances; our gaze-tracking accuracy is not affected by head movements, since the proposed method tracks the head by using a near-infrared camera and an infrared light-emitting diode; the threshold for local binarization of the pupil area is adaptively determined by using a p-tile method based on circular edge detection, irrespective of eyelid or eyelash shadows; and the accurate gaze position is calculated by using two support vector regressions without complicated calibrations for the camera, display, and user's eyes, in which the gaze positions and head movements are used as feature values. The root mean square error of gaze detection is calculated as 0.79° for a 30-inch screen.
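The adaptive p-tile binarization mentioned above can be sketched as follows: the threshold is chosen so that roughly p percent of the window's pixels, the assumed dark-pupil fraction, fall below it. How the paper couples this with circular edge detection is not reproduced here, and the window data are synthetic.

```python
import numpy as np

def p_tile_threshold(gray: np.ndarray, p: float) -> int:
    """Gray level below which roughly p percent of the pixels fall,
    i.e. the classic p-tile rule for binarizing a dark pupil that is
    assumed to occupy about p% of the local window."""
    return int(np.percentile(gray, p))

# Toy window: 30% dark pupil pixels (level 20), the rest bright (level 200)
win = np.array([20] * 30 + [200] * 70)
t = p_tile_threshold(win, 30)
pupil = win <= t
print(t, int(pupil.sum()))  # threshold separates exactly the 30 dark pixels
```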