Similar Documents
20 similar documents found (search time: 562 ms)
1.
Human-Computer Interaction Based on Gaze Tracking and Gesture Recognition   Cited by: 9 (self-citations: 5, others: 4)
肖志勇  秦华标 《计算机工程》2009,35(15):198-200
A new interaction method based on gaze tracking and gesture recognition is proposed for operating a computer at a distance. The system captures images of the user with a camera and applies image-recognition algorithms to detect the positions of the eyes and fingers; the line from the eye through the fingertip determines the position on the screen the user is pointing at, and changes in the user's gestures are interpreted as different operations, achieving human-computer interaction. Experimental results show that this interaction method locates screen positions and recognizes user operations well, enabling natural, friendly human-computer interaction at a distance.
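The pointing geometry described above — locating the screen position from the eye-to-fingertip line — amounts to a ray-plane intersection. A minimal NumPy sketch (coordinates and function names are illustrative, not from the paper):

```python
import numpy as np

def pointing_target(eye, fingertip, plane_point, plane_normal):
    """Intersect the eye->fingertip ray with the screen plane.

    Returns the 3D intersection point, or None if the ray is
    parallel to the screen.
    """
    eye = np.asarray(eye, dtype=float)
    d = np.asarray(fingertip, dtype=float) - eye   # ray direction
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None                                # ray parallel to screen
    t = ((np.asarray(plane_point, dtype=float) - eye) @ n) / denom
    return eye + t * d

# Screen lies in the plane z = 0; the eye is 60 cm away, with the
# fingertip between the eye and the screen.
hit = pointing_target([0, 0, 60], [5, 2, 40], [0, 0, 0], [0, 0, 1])
```

The returned point can then be mapped to pixel coordinates via the screen's known physical size.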

2.
A Vision-Based 3D Fingertip Tracking Algorithm   Cited by: 1 (self-citations: 0, others: 1)
Real-time gesture-based human-computer interaction (HCI) has important theoretical and practical value in virtual reality. With a binocular camera pair, stereo vision can track and locate a fingertip in 3D space, enabling real-time interaction between the fingertip and 3D objects in a virtual scene; the technique can serve as a 3D mouse or be used in 3D games with virtual-real interaction. A background-subtraction (BGS) algorithm combining thresholding with a mixture of Gaussians is proposed to extract the hand region; the fingertip is then located from the K-vector of the hand contour and the distance from the hand center to the fingertip. The cameras are calibrated with markers, and from the calibration parameters and the fingertip positions in the two images the fingertip's 3D coordinates are reconstructed. Finally, Kalman filtering in 3D smooths the fingertip trajectory and predicts the foreground-segmentation region. Experimental results show that the algorithm is effective.
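The reconstruction step described above — recovering the fingertip's 3D coordinates from two calibrated views — is commonly done with linear (DLT) triangulation. A minimal NumPy sketch under idealized cameras (the projection matrices and point are illustrative, not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A
    return X[:3] / X[3]

# Two simple normalized cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

The recovered point can then feed a per-frame Kalman filter for trajectory smoothing, as the abstract describes.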

3.
A Virtual Touch-Screen System Based on One-Dimensional Image Recognition   Cited by: 2 (self-citations: 1, others: 1)
To avoid the problem that a finger on a large touch screen cannot be recognized when occluded by the body, a virtual touch-screen system is designed, consisting of at least two one-dimensional image-acquisition devices, a display, and a data-processing unit. A one-dimensional acquisition device can be a 1D linear sensor together with a lens, signal-conversion, and interface circuitry, or an off-the-shelf 2D camera of which only one row of image data is used. The devices capture 1D images of the finger on the virtual touch screen; using the one-to-one correspondence between sets of 1D coordinates and points on the 2D virtual touch screen, the data-processing unit converts them into the 2D Cartesian coordinates of the touch point, completing the corresponding operation and achieving human-computer interaction.

4.
Image-Based Spatial Localization of Fire   Cited by: 1 (self-citations: 0, others: 1)
Locating the spatial position of a fire source in a large space is a key step in automatic fire detection and suppression. Building on fire-image analysis and prior image-based fire-localization techniques, and based on computer-vision principles, the method exploits the angle and displacement changes of a single CCD camera fixed at the end of an automatic fire-fighting water cannon as the cannon rotates and scans: the image coordinates of the fire source in pictures taken at different positions are related to its spatial coordinates, and automatic localization of the fire source is achieved using the epipolar-geometry principle of scanning single-camera spatial localization.

5.
Vision is the primary channel through which a robot acquires information about its environment, and accurately locating a target object through the vision system is a key technique in robot control. To let the robot obtain a target's position accurately, a planar-template method is used to calibrate a binocular camera pair, a robot coordinate system is constructed, and the target object is localized. After calibration, the binocular cameras obtain the target's position in the spatial coordinate system; a coordinate transformation then yields the target's coordinates in the robot's world coordinate system, which are essential data for the robot's visual servo tracking and grasping of the target. The method was validated on a home robot, which localized target objects accurately.

6.
《电子技术应用》2016,(6):84-86
To address the high price and poor versatility of very large touch screens, an ARM-based four-camera optical touch-screen system is built using image-recognition techniques. CMOS cameras installed at the four corners synchronously capture images of the touch area; an ARM microprocessor detects touch points in the captured images, derives the bearing line of each touch point from its imaged position and the camera calibration, and finally determines the touch position as the intersection of any two of these lines. Experiments show that the system achieves a 99% recognition rate for single- and two-point touches, with a touch-coordinate error below 2%.
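The final step described above — intersecting the bearing lines from two corner cameras to find the touch point — can be sketched in a few lines of NumPy (the camera placement and angles are illustrative, not the system's actual calibration):

```python
import numpy as np

def touch_point(c1, theta1, c2, theta2):
    """Intersect two bearing rays in the screen plane.

    c1, c2: 2D camera positions (screen corners); theta1, theta2:
    ray directions in radians, recovered from where the touch
    images on each camera's sensor.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve c1 + t1*d1 = c2 + t2*d2 for (t1, t2).
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(c2, float) - np.asarray(c1, float))
    return np.asarray(c1, float) + t[0] * d1

# Cameras at the bottom-left and bottom-right corners of the screen,
# each seeing the touch at 45 degrees from its corner.
p = touch_point([0, 0], np.deg2rad(45), [100, 0], np.deg2rad(135))
```

With four cameras, any pair that sees the touch yields such an intersection, which makes the system robust to occlusion of one camera.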

7.
《机器人》2016,(2)
To address the lack of calibration targets in space, a method for on-orbit visual calibration is proposed that exploits the precise end-effector positioning of a space robot manipulator: calibration targets are generated by planning the manipulator's motion path. The overall framework of the calibration method is first introduced. An error analysis follows: using the prior knowledge that the checkerboard-target calibration error is linear in the control-point image-coordinate error, the error propagation from manipulator end-effector positioning to control-point image coordinates is derived, and the two error sources are combined into an equivalent control-point image-coordinate error. Finally, simulation experiments verify the linear relationship between calibration error and the equivalent error, and the influence of target size, relative distance, control-point density, and target structure on calibration accuracy is analyzed.

8.
With the rapid spread and wide use of smartphones, positioning with mobile phones is attracting growing attention. Since smartphones generally have cameras and independent computing power, a phone visual-positioning method based on photogrammetry is proposed. Reference images of the area offering the positioning service are captured in advance to build a reference image library. When a user enters the service area, the phone camera takes a real-time image; image matching against the library retrieves two reference images that share an overlapping region with it, yielding triple-overlap feature points. Forward intersection on the two reference images computes the spatial coordinates of the overlapping feature points, and resection of the real-time image against these spatial points then solves for the user's position and attitude. Experimental results show the method achieves meter-level positioning accuracy.

9.
An aerial camera acquires ground-image information and is an important precondition for successfully completing reconnaissance missions. A system for locating an aerial camera's best imaging position is presented, along with its working principle and hardware composition. The system generates positioning information with an optical collimation system, receives the image with a CCD camera serving as the photoelectric converter, processes the data by least squares, and precisely translates the camera along its axis under the control of a measurement-and-control system; the best imaging position is located from the size of the acquired image, and the distance between the actual image plane and the best imaging position indicates whether the camera is defocused. In tests on several aerial cameras, the defocus judgments agreed completely with flight-test results.

10.
To localize a target point from photographs taken by any camera, and to obtain the target's spatial coordinates from as little information as possible, a P4P-like localization method for uncalibrated photographs is derived from the camera imaging model, geometric constraints among fixed points in space, and the principles of coordinate transformation. From the image positions, in two views, of any four points whose world coordinates are known, the 3D coordinates of a target point in the images can be computed. The method needs very little prior information and places no requirements on the photographs or the camera used; experiments verify its feasibility, with no significant loss of accuracy compared with traditional calibration methods, higher accuracy than other self-calibration localization methods, and good practicality.

11.
Color 3D reconstruction of targets with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of the joint CCD-TOF camera system. Building on existing calibration algorithms for color images and TOF depth images, a calibration method based on a planar checkerboard template is proposed. Color and amplitude images of a checkerboard pattern fixed on a planar calibration template are captured at different angles; Harris corner extraction is improved, and from the conjugate relationship between checkerboard corners and virtual image points a camera-calibration system model is built and solved with the Levenberg-Marquardt algorithm in calibration experiments. The intrinsic parameters of the TOF and CCD cameras are obtained; the relative pose of the two camera coordinate systems is estimated from the pose relationship between the image planes, and a final joint optimization yields the rotation matrix and translation vector between the cameras. Experimental results show that the algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.

12.
Intelligent transportation applications in single-camera scenes are well developed, but cross-region research is still in its infancy; a calibration-based method for stitching scenes across cameras is therefore proposed. First, vanishing-point calibration establishes, for each of two camera scenes, the mapping from physical information in a local sub-world coordinate system to the 2D image. Next, the projective transformation between the cameras is computed from information shared by the two sub-world coordinate systems. Finally, road scenes are stitched using the proposed inverse-projection idea and translation-vector relationship. Experimental results show that the method achieves good road-scene stitching and cross-region physical road measurement, laying a foundation for practical application research.

13.
Yang  Zhao  Zhao  Yang  Hu  Xiao  Yin  Yi  Zhou  Lihua  Tao  Dapeng 《Multimedia Tools and Applications》2019,78(9):11983-12006

The surround view camera system is an emerging driving-assistance technology that helps drivers park by providing a top-down view of the surrounding situation. Such a system usually consists of four wide-angle or fish-eye cameras mounted around the vehicle, and a bird's-eye view is synthesized from their images. There are two fundamental problems in surround view synthesis: geometric alignment and image synthesis. Geometric alignment performs fish-eye calibration and computes the perspective transformation between the bird's-eye view and the images from the surrounding cameras. Image synthesis addresses seamless stitching between adjacent views and color balancing. In this paper, we propose a flexible central-around coordinate mapping (CACM) model for vehicle surround view synthesis. The CACM model calculates the perspective transformations between a top-view central camera coordinate frame and the around-camera coordinate frames by a marker-point-based method. With the transformation matrices, we can generate the pixel-level mapping between the bird's-eye view and the images of the surrounding cameras. After geometric alignment, an image fusion method based on distance weighting is adopted for seamless stitching, and an effective overlapping-region brightness optimization method is proposed for color balancing. Both seamless stitching and color balancing can be performed easily using two types of weight coefficients within the CACM framework. Experimental results show that the proposed approach provides a high-performance surround view camera system.
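The distance-weighted fusion used for seamless stitching can be sketched as a per-pixel weighted average over the overlap region. A minimal NumPy illustration (the array shapes and weight maps are illustrative, not the paper's implementation):

```python
import numpy as np

def blend_overlap(img_a, img_b, dist_a, dist_b):
    """Distance-weighted fusion of two overlapping views.

    dist_a / dist_b hold, per pixel, each pixel's distance to the seam
    (or image border) in the corresponding source view; pixels deeper
    inside a view get more weight from that view.
    """
    w = dist_a / (dist_a + dist_b + 1e-9)   # per-pixel weight for view A
    w = w[..., None]                        # broadcast over color channels
    return w * img_a + (1.0 - w) * img_b

# Toy 2x2 RGB overlap region where both views are equally distant
# from the seam, so the result is the midpoint of the two views.
a = np.full((2, 2, 3), 100.0)
b = np.full((2, 2, 3), 200.0)
equal = np.ones((2, 2))
out = blend_overlap(a, b, equal, equal)
```

In a real pipeline the distance maps are precomputed once per seam, so the blend reduces to two multiply-adds per pixel.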


14.
To meet the urgent need for automated measurement and automated assembly of large precision parts, an automated method for measuring precision-part contours is proposed. First, a laser tracker is used to establish a global coordinate system, and the coordinate systems of multiple cameras are globally calibrated on the basis of a unified world frame. Pixel clustering, flood filling, and other image-processing methods precisely extract the centers of laser stripes affected by ghosting and ambient light; statistical filtering removes stray points from the 3D point cloud; coarse and fine registration fuse the laser stripes seen by the multiple cameras into a single 3D point cloud; and moving the product allows a full scan of the precision part. Finally, a ray-casting method provides out-of-tolerance warnings for the part contour. Experiments show a measurement-error standard deviation of 0.25 mm for large parts, and measurement and analysis of a 20 m part can be completed in 15 min, sufficient for the assembly of existing large, complex precision products; the method breaks through existing technical bottlenecks, enables fast and accurate 3D reconstruction of precision-part contours, and can compute the structural parameters of precision parts.
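The statistical filtering step used above to remove stray points from the 3D point cloud is typically based on each point's mean distance to its k nearest neighbors. A brute-force NumPy sketch (the parameters and synthetic data are illustrative, not the paper's settings):

```python
import numpy as np

def statistical_outlier_filter(points, k=8, std_ratio=2.0):
    """Flag stray points whose mean k-NN distance is anomalously large.

    points: (N, 3) array. Returns a boolean inlier mask: points whose
    mean k-NN distance exceeds (global mean + std_ratio * std) are dropped.
    """
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))        # (N, N) pairwise distances
    np.fill_diagonal(dists, np.inf)             # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]         # k nearest per point
    mean_d = knn.mean(axis=1)
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return mean_d <= thresh

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))                   # dense cluster
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])    # one stray point
mask = statistical_outlier_filter(cloud)
```

The O(N²) distance matrix is fine for stripe-sized clouds; production code would use a k-d tree instead.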

15.
This paper proposes a new gaze-detection method based on the 3-D eye position and the gaze vector of the human eyeball. Seven new developments compared to previous works are presented. First, a method using three cameras, i.e., one wide-view camera and two narrow-view cameras, is proposed. The narrow-view cameras use autozooming, focusing, panning, and tilting procedures (based on the detected 3-D eye feature position) for gaze detection. This allows for natural head and eye movement by users. Second, previous gaze-detection research used one or multiple illuminators but did not consider the specular reflection (SR) problems the illuminators cause for users who wear glasses. To solve this problem, a method based on dual illuminators is proposed in this paper. Third, the proposed method does not require user-dependent calibration, so all procedures for detecting gaze position operate automatically without human intervention. Fourth, intrinsic characteristics of the human eye, such as the disparity between the pupillary and visual axes, are taken into account in order to obtain accurate gaze positions. Fifth, all the coordinates obtained by the left and right narrow-view cameras, the wide-view camera, and the monitor are unified. This simplifies the complex 3-D conversion calculations and allows the 3-D feature position and the gaze position on the monitor to be calculated. Sixth, to improve eye-detection performance with the wide-view camera, an adaptive-selection method is used, involving an IR-LED on/off scheme, an AdaBoost classifier, and a principal component analysis method based on the number of SR elements. Finally, the proposed method uses an eigenvector matrix (instead of simply averaging six gaze vectors) in order to obtain a more accurate final gaze vector that compensates for noise.
Experimental results show that the root-mean-square error of gaze detection was about 0.627 cm on a 19-in monitor. The processing speed of the proposed method (used to obtain the gaze position on the monitor) was 32 ms using a Pentium IV 1.8-GHz PC, so the user's gaze position could be detected at real-time speed.

16.
A combined calibration-and-rectification method for dual-camera modules is proposed that merges the traditionally separate calibration and rectification steps into one; without any external measurement equipment, a single image of a target template captured simultaneously by both cameras suffices to calibrate and rectify the module. First, the radial distortion coefficients are computed from the invariance of the cross-ratio, converting the distorted imaging model into a linear one, with which the two cameras are calibrated separately. The pose offset between the two cameras is then computed and the right camera's pose adjusted to rectify the pair; finally the pose parameters between the two cameras are calibrated. Practical application shows that the method achieves high rectification and calibration accuracy, shortens process time, and improves process efficiency, meeting the requirements of dual-camera-module packaging production.
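The cross-ratio invariance the method exploits states that the cross-ratio of four collinear points is preserved by any 1D projective map; deviations of the measured cross-ratio from its ideal value therefore expose radial distortion. A small NumPy check of the invariance itself (the map coefficients are arbitrary illustrative values):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by scalars."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def projective(x, m=2.0, n=1.0, p=0.5, q=3.0):
    """A 1D projective (Moebius) map x -> (m*x + n) / (p*x + q)."""
    return (m * x + n) / (p * x + q)

pts = np.array([0.0, 1.0, 2.0, 4.0])        # four collinear points
cr_before = cross_ratio(*pts)               # cross-ratio in the scene
cr_after = cross_ratio(*projective(pts))    # cross-ratio after projection
```

Undistorted pinhole imaging is projective along each line, so `cr_after` must equal `cr_before`; lens distortion breaks this equality, which is what the calibration exploits.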

17.
A method for detecting the position of a single fingertip against a complex background is presented. Images are captured with a Digiclops stereo vision system, and a sub-image of the finger region is extracted. When the finger points straight at the camera, the fingertip position can be computed quickly; when the finger points sideways, a robust fingertip-detection algorithm locates the fingertip from the finger sub-image. Experiments show that the method detects the fingertip position accurately; processing every frame with it enables real-time fingertip tracking and thus a perceptual user interface based on fingertip tracking.

18.
Stereovision is an effective technique that uses CCD video cameras to determine the 3D position of a target object from two or more simultaneous views of the scene. Camera calibration is a central issue in finding the position of objects in a stereovision system. This is usually carried out by calibrating each camera independently, and then applying a geometric transformation of the external parameters to find the geometry of the stereo setting. After calibration, the distance of various target objects in the scene can be calculated with the CCD video cameras, and recovering the 3D structure from 2D images becomes simpler. However, the process of camera calibration is complicated. Based on the ideal pinhole model of a camera, we describe formulas to calculate the intrinsic parameters that specify the camera's characteristics, and the extrinsic parameters that describe the spatial relationship between the camera and the world coordinate system. A simple camera calibration method for our CCD video cameras and the corresponding experimental results are also given. This work was presented in part at the 7th International Symposium on Artificial Life and Robotics, Oita, Japan, January 16-18, 2002.
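The ideal pinhole model described above maps a world point through the extrinsic transform [R|t] into the camera frame, then through the intrinsic matrix K, followed by a perspective divide. A minimal NumPy sketch (the intrinsic values are illustrative, not the authors'):

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point through the ideal pinhole model.

    K: 3x3 intrinsic matrix; R, t: extrinsic rotation and translation
    taking world coordinates into camera coordinates.
    """
    Xc = R @ np.asarray(X, dtype=float) + t   # world -> camera frame
    u = K @ Xc                                # camera frame -> homogeneous pixels
    return u[:2] / u[2]                       # perspective divide

# Illustrative intrinsics: 800-pixel focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])                 # camera 2 units from world origin
uv = project(K, R, t, [0.1, -0.05, 2.0])
```

Calibration is the inverse task: given many known 3D-2D correspondences, recover K, R, and t that make this projection fit the observations.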

19.

In this paper, we propose a new video conferencing system that presents correct gaze directions of a remote user by switching among images obtained from multiple cameras embedded in a screen according to a local user’s position. Our proposed method reproduces a situation like that in which the remote user is in the same space as the local user. The position of the remote user to be displayed on the screen is determined so that the positional relationship between the users is reproduced. The system selects one of the embedded cameras whose viewing direction towards the remote user is the closest to the local user’s viewing direction to the remote user’s image on the screen. As a result of quantitative evaluation, we confirmed that, in comparison with the case using a single camera, the accuracy of gaze estimation was improved by switching among the cameras according to the position of the local user.


20.
A measurement technique for kinematic calibration of robot manipulators, which uses a stereo hand-eye system with moving camera coordinates, is presented in this article. The calibration system consists of a pair of cameras rigidly mounted on the robot end-effector, a camera calibration board, and a robot calibration fixture. The stereo cameras are precalibrated using the camera calibration board so that the 3D coordinates of any object point seen by the stereo cameras can be computed with respect to the camera coordinate frame [C] defined by the calibration board. Because [C] is fixed with respect to the tool frame [T] of the robot, it moves with the robot hand from one calibration measurement configuration to another. On each face of the robot calibration fixture that defines the world coordinate frame [W], there are evenly spaced dot patterns of uniform shape. Each pattern defines a coordinate frame [Ei], whose pose is known in [W]. The dot pattern is designed in such a way that from a pair of images of the pattern, the pose of [Ei] can be estimated with respect to [C] in each robot calibration measurement. By that means the pose of [C] becomes known in [W] at each robot measurement configuration. For a sufficient number of measurement configurations, the homogeneous transformation from [W] to [C] (or equivalently to [T]), and thus the link parameters of the robot, can be identified using least-squares techniques. Because the cameras perform local measurements only, the field-of-view of the camera system can be as small as 50 × 50 mm², resulting in an overall accuracy of the measurement system as high as 0.05 mm. This is at least 20 times better than the accuracy provided by vision-based measurement systems with a fixed camera coordinate frame using common off-the-shelf cameras. © 1994 John Wiley & Sons, Inc.
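The least-squares identification step above — recovering the transformation from [W] to [C] from corresponding point measurements — can be illustrated with the standard SVD (Kabsch) solution for a rigid transform. A NumPy sketch on synthetic data (this is the pose-recovery building block only, not the article's full kinematic identification, which also solves for the robot's link parameters):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= R @ P + t.

    P, Q: (N, 3) corresponding point sets, e.g. fixture dot centers
    expressed in [W] and measured in [C].  SVD (Kabsch) solution.
    """
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate a point set 30 degrees about z and shift it.
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.3])
P = np.random.default_rng(1).normal(size=(10, 3))
Q = P @ R_true.T + t_true
R_est, t_est = rigid_transform(P, Q)
```

Stacking such pose estimates over many measurement configurations gives the overdetermined system from which the link parameters are identified.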


Copyright©北京勤云科技发展有限公司  京ICP备09084417号