Similar Documents
19 similar documents found
1.
A Multi-Touch System Based on Four Collaborating Cameras   Cited by: 2 (self: 0, others: 2)
To solve the problem of multiple touch points occluding one another in multi-touch systems, a multi-touch system based on four collaboratively working cameras is proposed; it comprises a hardware layer, an image-processing layer, an image-understanding layer, and an application layer. The hardware layer consists of the four cameras and a rectangular frame lined with infrared LEDs; the image-processing layer performs image acquisition and touch-point detection; the image-understanding layer handles touch-point localization and tracking; and the application layer implements gesture-based interaction with specific applications. Experimental results on a photo-management application show that the system supports complex multi-finger operations and has promising applications in public information kiosks, command and decision-making, and similar fields.

2.
谢斐  蔡山  王德鑫  陈超 《计算机科学》2011,38(9):253-256
The four-camera collaborative multi-touch system resolves multi-point occlusion by fitting, from the four camera views, a direction line for each touch point. To construct these direction lines, three system-calibration methods are proposed for accurate multi-point localization: a lookup-table method, a vanishing-point method, and a stereo-calibration method. The lookup-table method places calibration points along the border of the interaction area, finds the corresponding calibration point by interpolation, and builds the direction line through the optical center and that point; the vanishing-point method builds the direction line from the optical center and the vanishing point along the touch direction; the stereo-calibration method builds the direction line from the back-projected ray given by epipolar geometry. Error-analysis experiments show that the lookup-table method suits smaller platforms, the vanishing-point method suits large platforms, and the accuracy of the stereo-calibration method still needs improvement.
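A minimal sketch of the lookup-table idea described above: calibration points on the far border of the interaction area are paired with their image columns, a detected touch column is interpolated to a border point, and the direction line runs through the optical center and that point. All coordinates, column values, and the function name are illustrative assumptions, not values from the paper.

```python
import numpy as np

def direction_line_from_lut(pixel_u, lut_pixels, lut_border_xy, optical_center):
    """Interpolate the border point a touch column maps to, then return the
    direction line through the camera's optical center and that point.
    A simplified lookup-table sketch with hypothetical calibration data."""
    lut_pixels = np.asarray(lut_pixels, dtype=float)
    lut_border_xy = np.asarray(lut_border_xy, dtype=float)
    bx = np.interp(pixel_u, lut_pixels, lut_border_xy[:, 0])
    by = np.interp(pixel_u, lut_pixels, lut_border_xy[:, 1])
    c = np.asarray(optical_center, dtype=float)
    d = np.array([bx, by]) - c
    return c, d / np.linalg.norm(d)          # (point on line, unit direction)

# Hypothetical calibration points spaced along the far border of the frame.
print(direction_line_from_lut(330,
                              lut_pixels=[100, 300, 500],
                              lut_border_xy=[(0, 60), (50, 60), (100, 60)],
                              optical_center=(0, 0)))
```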

3.
《电子技术应用》2016,(6):84-86
To address the high cost and poor versatility of current very large touch screens, an ARM-based four-camera optical touch-screen system is built using image-recognition techniques. CMOS cameras mounted at the four corners synchronously capture images of the touch area; an ARM microprocessor detects touch points in the captured images; the direction line of each touch point is obtained from its image position and the camera calibration; and finally the touch location is determined by intersecting any two of these lines. Experiments show that the system achieves a 99% recognition rate for single- and two-point touches, with a touch-coordinate position error below 2%.
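Since the touch location is determined by intersecting two direction lines, a short sketch of that final step may help. Each line is given as a camera optical center plus a direction vector, and the example coordinates are hypothetical.

```python
import numpy as np

def intersect_rays(c1, d1, c2, d2):
    """Intersect two 2D direction lines c_i + t_i * d_i (least squares if noisy)."""
    # Solve c1 + t1*d1 = c2 + t2*d2  =>  [d1, -d2] [t1, t2]^T = c2 - c1
    A = np.column_stack((d1, -d2))
    t, *_ = np.linalg.lstsq(A, np.asarray(c2, float) - np.asarray(c1, float), rcond=None)
    return np.asarray(c1, float) + t[0] * np.asarray(d1, float)

# Hypothetical example: cameras at two corners of a 100-unit-wide touch area.
p = intersect_rays(c1=(0.0, 0.0),   d1=(1.0, 0.5),    # ray from camera 1
                   c2=(100.0, 0.0), d2=(-1.0, 0.5))   # ray from camera 2
print(p)  # estimated touch point, here (50, 25)
```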

4.
To provide accurate localization of target objects or teammates during multi-robot cooperation and service-robot target tracking, an embedded vision system is designed with an ARM9 microprocessor as the processing module and a CMOS camera as the acquisition module, realizing monocular distance measurement and localization. The design comprises two parts, target recognition and localization: the target is marked with a solid-color marker block, and the object is located by detecting the marker region in the captured image. The marker is segmented by thresholding the three RGB color components, and monocular ranging uses the P4P method, with the camera's intrinsic parameters calibrated using the Matlab toolbox. In addition, RANSAC is used to determine the four boundary lines of the marker block, and the intersections of these lines serve as its four corner points, which improves corner-detection accuracy. Experimental results show that the ranging and localization system achieves high accuracy and good robustness, and has promising applications.
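The corner-refinement step above fits the marker's four boundary lines with RANSAC and intersects them. Below is a generic minimal RANSAC line fit of the kind that could be applied to each boundary's edge points; the iteration count and inlier tolerance are assumed values, not the paper's.

```python
import numpy as np

def ransac_line(points, n_iter=200, inlier_tol=1.5, seed=0):
    """Fit a 2D line to noisy boundary points with a minimal RANSAC loop.
    Returns (point_on_line, unit_direction) of the model with most inliers."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d /= norm
        normal = np.array([-d[1], d[0]])
        dist = np.abs((pts - p) @ normal)        # point-to-line distances
        inliers = int((dist < inlier_tol).sum())
        if inliers > best_inliers:
            best_inliers, best = inliers, (p, d)
    return best
```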

5.
A target-tracking method that controls a PTZ (pan, tilt, zoom) camera based on candidate-region matching is proposed. The method takes either the central region of the image or a detected motion region as the candidate region for locating the target, computes the camera motion parameters from the target's predicted position, and thereby controls the camera to track the target automatically. Symmetric frame differencing is used to detect motion regions in the image, the target is represented by color and texture features, and the target is located within the candidate region via similarity measurement. Taking into account the transmission and execution delay of camera commands, the target's position is predicted from its motion trajectory, and the PTZ camera is driven according to this predicted position to actively follow the target. Experiments show that the method can continuously and actively track the object of interest in real time over a large area and under transient occlusion.
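A hedged sketch of how a predicted target position might be turned into pan/tilt increments, assuming a simple pinhole model in which the angular offset is proportional to the normalized pixel offset from the image center; the paper's actual control law and delay compensation are not reproduced here, and the field-of-view values are assumptions.

```python
def ptz_command(pred_xy, img_size, hfov_deg=60.0, vfov_deg=40.0):
    """Map a predicted target position (pixels) to pan/tilt increments (degrees)."""
    w, h = img_size
    dx = pred_xy[0] - w / 2.0    # horizontal pixel offset from image center
    dy = pred_xy[1] - h / 2.0    # vertical pixel offset from image center
    pan = (dx / w) * hfov_deg    # positive: pan right
    tilt = -(dy / h) * vfov_deg  # positive: tilt up (image y grows downward)
    return pan, tilt

print(ptz_command((960, 300), (1280, 720)))  # e.g. target above and right of center
```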

6.
To address tracking failures caused by poor handling of merging and occluding targets in single fixed-camera video surveillance, a sparse multi-target tracking framework is proposed. The framework improves two parts of the system: the handling of target merging/occlusion and the tracking filter. The system consists of four components: moving-target detection, association-matrix construction, target-interaction handling, and filtering. First, foreground regions are extracted and an association matrix is built; the matrix is then used to determine each target's motion state and handle it accordingly. When targets interact, the TLD algorithm is used for tracking, and, to improve TLD's efficiency and reduce initialization anomalies, the target and the tracking window are scaled proportionally using bicubic interpolation. Finally, a fractional-order Kalman filter smooths the tracking results. Experimental results show that the framework effectively improves a single fixed camera's ability to handle target interaction and occlusion.
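The proportional-scaling step mentioned above (resizing the target patch and the tracking window by the same factor with bicubic interpolation) might look like the following OpenCV sketch; TLD itself and the fractional-order Kalman filter are outside its scope, and the function name is an assumption.

```python
import cv2

def rescale_patch_and_window(patch, window_wh, scale):
    """Scale a target patch and its tracking window by the same factor
    using bicubic interpolation (cv2.INTER_CUBIC)."""
    h, w = patch.shape[:2]
    resized = cv2.resize(patch,
                         (max(1, int(round(w * scale))), max(1, int(round(h * scale)))),
                         interpolation=cv2.INTER_CUBIC)
    win_w, win_h = window_wh
    return resized, (int(round(win_w * scale)), int(round(win_h * scale)))
```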

7.
Existing multi-touch tabletop systems usually provide only touch-point position and shape, without information about which hand (left or right) each touch belongs to. Knowing the owning hand and its handedness matters for multi-finger gesture recognition and for enriching bimanual interaction techniques, especially asymmetric bimanual interaction. Based on the anatomical structure of the hand, a method is proposed for determining the left/right-hand attribution of touch points without auxiliary hardware. First, guided by basic principles of gesture design and the hand's anatomical structure, a triangle model of the hand-arm system on the interactive tabletop is proposed. Second, based on this triangle model, methods are given for clustering touch points belonging to the same hand and for identifying left and right hands. Then, MTDriver is extended into a multi-touch tracking toolkit that provides left/right-hand information, and tabletop interaction techniques based on this information are proposed. Finally, the evaluation shows that both the same-hand clustering method and the handedness-identification method achieve high accuracy, and their runtime meets the real-time requirements of tabletop interaction.
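As a rough stand-in for the triangle-model clustering, the sketch below greedily groups touch points whose pairwise distances fit within an assumed hand span; the threshold and coordinates are hypothetical, and the actual handedness decision is not reproduced.

```python
import numpy as np

def cluster_touches_by_hand(points, max_span_mm=120.0):
    """Greedily group touch points whose mutual distances fit within one hand.
    max_span_mm is an assumed anatomical hand span, not a value from the paper."""
    pts = [np.asarray(p, dtype=float) for p in points]
    clusters = []
    for p in pts:
        placed = False
        for c in clusters:
            if all(np.linalg.norm(p - q) <= max_span_mm for q in c):
                c.append(p)
                placed = True
                break
        if not placed:
            clusters.append([p])
    return clusters

# Hypothetical millimetre coordinates: two fingers of one hand, one of another.
print(len(cluster_touches_by_hand([(0, 0), (60, 20), (400, 10)])))  # -> 2
```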

8.
To overcome the limits that the imaging characteristics of the camera place on touch-point correction accuracy in current optical multi-touch systems, a high-performance form of touch-coordinate correction is studied and a general touch-calibration algorithm is proposed, extending adaptability to different cameras and installation environments. An image-registration approach with feature-point extraction is introduced to unify the coordinate systems, and a dynamic lookup table is used to improve registration efficiency. Theoretical analysis and experimental results verify the effectiveness of the method.
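One common way to realize the registration-based coordinate unification described above is to estimate a homography from matched feature points and apply it to raw touch detections; whether the paper uses exactly this model is not stated, and the correspondences below are hypothetical calibration values.

```python
import cv2
import numpy as np

# Matched feature points: camera-image coordinates vs. screen coordinates
# (hypothetical calibration values; with more matches, cv2.RANSAC can be
# passed to cv2.findHomography to reject outliers).
cam_pts = np.array([[102, 88], [1180, 95], [1175, 630], [110, 640]], dtype=np.float32)
scr_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32)

H, _ = cv2.findHomography(cam_pts, scr_pts)

# Map raw touch detections from camera coordinates to screen coordinates.
touches = np.array([[[600, 360]], [[300, 500]]], dtype=np.float32)  # shape (N, 1, 2)
print(cv2.perspectiveTransform(touches, H))
```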

9.
To overcome the low success rate of traditional methods in locating and extracting QR codes from images captured by low-resolution cameras without autofocus, a new QR-code localization and extraction method is proposed. The method locates the barcode using features on the contours of the QR finder patterns, then searches for the boundary contours with a best-fit line approximation to find the four boundaries, and finally applies a planar coordinate transform to rotate the barcode image to the horizontal, producing a standard QR-code image and completing the extraction.
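Once the four boundary lines are found, their intersections give four corners and a planar transform maps the code to an axis-aligned square. The sketch below shows that warp with OpenCV; the corner order, output size, and function name are assumptions rather than details from the paper.

```python
import cv2
import numpy as np

def rectify_qr(image, corners, out_size=290):
    """Warp a located QR region to an axis-aligned square image.
    `corners` are the four boundary-line intersections, ordered
    top-left, top-right, bottom-right, bottom-left."""
    src = np.asarray(corners, dtype=np.float32)
    dst = np.array([[0, 0], [out_size - 1, 0],
                    [out_size - 1, out_size - 1], [0, out_size - 1]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (out_size, out_size))
```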

10.
The generation of, and interaction with, haptic force feedback helps improve the immersion and realism of virtual environments. A method for real-time haptic rendering of two-contact-point interaction is proposed. Two-contact interaction is divided into four states, a real-time haptic-generation algorithm is given for each state, a virtual experimental environment for two-contact interaction is built, and an evaluation of the generated haptics is carried out. Experimental results show that the method can generate haptic feedback for two-contact interaction in real time, enhancing the immersion and realism of the virtual environment and the naturalness of haptic interaction with the human hand.

11.

Object tracking still remains challenging in computer vision because of severe object variation, e.g., deformation, occlusion, and rotation. To handle this variation and achieve robust tracking performance, we propose a novel relationship-based tracking algorithm using neural networks. Compared with existing approaches in the literature, our method assumes the target object to consist of several parts and considers the evolution of the topological structure among these parts. After training a candidate neural network to predict the probable areas where each part may be located in the successive frame, we design a novel collaboration neural network to determine the precise area each part will occupy, taking into account the topological structure among these individual parts, which is learned from their historical physical locations during the online tracking process. Experimental results show that the proposed method outperforms state-of-the-art trackers on a benchmark dataset, yielding significant improvements in accuracy on highly distorted sequences.

12.
Multi-Camera Coordination in Surveillance Systems   Cited by: 8 (self: 0, others: 8)
A distributed surveillance system for tracking multiple targets indoors is described. The system is built from several inexpensive fixed-lens cameras and has multiple camera processing modules plus a central module that coordinates tracking tasks among the cameras. Since each moving target may be tracked by several cameras simultaneously, choosing the most suitable camera to track a given target, particularly when system resources are scarce, becomes a problem. The proposed algorithm assigns each target to a camera according to the target-camera distance while taking occlusion into account, so that when occlusion occurs the system assigns the occluded target to the nearest camera that can still see it. Experiments show that the system coordinates multiple cameras well for target tracking and handles occlusion properly.
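A minimal sketch of the assignment rule described above: among the cameras that can currently see the target, pick the nearest one. The data layout (a 'sees_target' flag and camera positions) is a hypothetical simplification of the paper's occlusion reasoning.

```python
import math

def assign_camera(target_xy, cameras):
    """Assign a target to the nearest camera that can currently see it.
    `cameras` is a list of dicts {'id', 'pos': (x, y), 'sees_target': bool}."""
    visible = [c for c in cameras if c['sees_target']]
    if not visible:
        return None                                   # target occluded in every view
    return min(visible, key=lambda c: math.dist(target_xy, c['pos']))['id']

cams = [{'id': 'cam0', 'pos': (0, 0), 'sees_target': False},   # occluded view
        {'id': 'cam1', 'pos': (8, 2), 'sees_target': True},
        {'id': 'cam2', 'pos': (3, 9), 'sees_target': True}]
print(assign_camera((4, 4), cams))  # -> 'cam1' (distance ~4.5 vs ~5.1)
```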

13.
When occlusion is minimal, a single camera is generally sufficient to detect and track objects. However, when the density of objects is high, the resulting occlusion and lack of visibility suggest the use of multiple cameras and collaboration between them, so that an object is detected using information available from all the cameras in the scene. In this paper, we present a system that is capable of segmenting, detecting and tracking multiple people in a cluttered scene using multiple synchronized surveillance cameras located far from each other. The system is fully automatic and takes decisions about object detection and tracking using evidence collected from many pairs of cameras. Innovations that help us tackle the problem include a region-based stereo algorithm capable of finding 3D points inside an object knowing only the projections of the object (as a whole) in two views, a segmentation algorithm using Bayesian classification, and the use of occlusion analysis to combine evidence from different camera pairs. The system has been tested using different densities of people in the scene. This helps us determine the number of cameras required for a particular density of people. Experiments have also been conducted to verify and quantify the efficacy of the occlusion analysis scheme.

14.
In this work we propose algorithms to learn the locations of static occlusions and reason about both static and dynamic occlusion scenarios in multi-camera scenes for 3D surveillance (e.g., reconstruction, tracking). We show that this leads to a system that can more effectively track (follow) objects in video when they are obstructed in some of the views. Because of the nature of the application area, our algorithm operates under the constraint of using few cameras (no more than 3) in a wide-baseline configuration. Our algorithm consists of a learning phase, in which a 3D probabilistic model of occlusions is estimated per voxel, per view over time via an iterative framework. In this framework, at each frame the visual hull of each foreground object (person) is computed via a Markov Random Field that integrates the occlusion model. The model is then updated at each frame using this solution, providing an iterative process that can accurately estimate the occlusion model over time and overcome the few-camera constraint. We demonstrate the application of such a model to a number of areas, including visual hull reconstruction, the reconstruction of the occluding structures themselves, and 3D tracking.

15.
Addressing occlusion, deformation, and long-term tracking in target tracking, a multi-feature-fusion, occlusion-resistant long-term tracking algorithm is proposed. Using the Discriminative Scale Space Tracker (DSST) as the framework, color-space features are fused and the APCE (average peak-to-correlation energy) criterion is introduced to strengthen target-position prediction and occlusion resistance and improve the algorithm's robustness; a random-fern classifier detection mechanism is added to re-detect and re-localize the target after tracking failure; and in the model-update stage, the update rate is adjusted using frame differencing. Experimental results show that the improved algorithm outperforms other classical algorithms under complex conditions such as occlusion, deformation, and long-term tracking.
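The APCE criterion mentioned above is commonly defined as (F_max - F_min)^2 divided by the mean squared deviation of the correlation response map from F_min; a value that drops well below its running average typically signals occlusion and is used to suppress model updates. A small sketch, with the occlusion threshold left unspecified since the paper's setting is not given:

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a correlation-filter response map:
    APCE = (F_max - F_min)^2 / mean((F - F_min)^2)."""
    r = np.asarray(response, dtype=float)
    f_min, f_max = r.min(), r.max()
    return (f_max - f_min) ** 2 / np.mean((r - f_min) ** 2)

# A sharp single peak gives a high APCE; a flat or noisy map gives a low one.
sharp = np.zeros((17, 17)); sharp[8, 8] = 1.0
print(apce(sharp), apce(np.random.default_rng(0).random((17, 17))))
```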

16.
Tracking and recognizing moving targets across multiple cameras requires target regions that are as accurate as possible. To deal with merged, touching targets in crowds, a pose-model-based crowd segmentation method is proposed. Based on how human posture changes during motion, seven frequently occurring pose models are constructed. Model matching is performed first on individual targets and then on merged targets, yielding each target's position, size, and motion pose. Experimental results show that the method effectively solves the target-segmentation problem under mutual occlusion.

17.
In this paper, we introduce a method to estimate the object’s pose from multiple cameras. We focus on direct estimation of the 3D object pose from 2D image sequences. Scale-Invariant Feature Transform (SIFT) is used to extract corresponding feature points from adjacent images in the video sequence. We first demonstrate that centralized pose estimation from the collection of corresponding feature points in the 2D images from all cameras can be obtained as a solution to a generalized Sylvester’s equation. We subsequently derive a distributed solution to pose estimation from multiple cameras and show that it is equivalent to the solution of the centralized pose estimation based on Sylvester’s equation. Specifically, we rely on collaboration among the multiple cameras to provide an iterative refinement of the independent solution to pose estimation obtained for each camera based on Sylvester’s equation. The proposed approach to pose estimation from multiple cameras relies on all of the information available from all cameras to obtain an estimate at each camera even when the image features are not visible to some of the cameras. The resulting pose estimation technique is therefore robust to occlusion and sensor errors from specific camera views. Moreover, the proposed approach does not require matching feature points among images from different camera views nor does it demand reconstruction of 3D points. Furthermore, the computational complexity of the proposed solution grows linearly with the number of cameras. Finally, computer simulation experiments demonstrate the accuracy and speed of our approach to pose estimation from multiple cameras.
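For readers unfamiliar with Sylvester's equation AX + XB = Q, the snippet below shows how such an equation is solved numerically with SciPy; the matrices are random placeholders, and the paper's actual construction of A, B, and Q from SIFT correspondences is not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative only: build a Sylvester equation AX + XB = Q whose solution
# X is known, then recover it with the standard solver.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
X_true = rng.standard_normal((3, 3))
Q = A @ X_true + X_true @ B

X = solve_sylvester(A, B, Q)
print(np.allclose(X, X_true))  # True: the unknown (pose-related) matrix is recovered
```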

18.
This paper proposes an inference method to construct the topology of a camera network with overlapping and non-overlapping fields of view (FOVs) for a commercial surveillance system equipped with multiple cameras. It provides autonomous object detection, tracking and recognition in indoor or outdoor urban environments. The camera network topology is estimated from object-tracking results among and within FOVs. The merge-split method is used to handle object occlusion in a single camera, and an EM-based approach extracts accurate object features to track moving people and establish object correspondence across multiple cameras. The appearance of moving people and the transition time between entry and exit zones are measured to track people across the blind regions of cameras with non-overlapping FOVs. The proposed method represents the camera network topology graphically as an undirected weighted graph, using the transition probabilities and an 8-directional chain code. A training phase and a test were run with eight cameras to evaluate the performance of the method; the temporal probability distribution and the undirected weighted graph are shown in the experiments.
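A minimal sketch of representing the camera-network topology as an undirected weighted graph keyed by transition probabilities; the camera names and probability values are hypothetical, and the 8-directional chain-code component is not covered.

```python
from collections import defaultdict

def build_topology(transitions):
    """Build an undirected weighted graph of the camera network from
    observed zone-to-zone transition probabilities.
    `transitions` maps (camera_a, camera_b) -> transition probability."""
    graph = defaultdict(dict)
    for (a, b), prob in transitions.items():
        graph[a][b] = prob        # undirected: store the edge in both directions
        graph[b][a] = prob
    return dict(graph)

topology = build_topology({('cam1', 'cam2'): 0.62,
                           ('cam2', 'cam3'): 0.31,
                           ('cam1', 'cam4'): 0.07})
print(topology['cam2'])  # neighbours of cam2 with edge weights
```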

19.
A Localization and Tracking Algorithm for Multi-Person Occlusion   Cited by: 2 (self: 1, others: 1)
毛爽  方颖  陈曙  王汇源 《计算机工程》2009,35(8):220-221
To address shortcomings of traditional video-tracking algorithms, a localization and tracking algorithm for multi-person occlusion is proposed. The algorithm locates moving targets with an improved projection-analysis method and combines it with Kalman filtering for matching and tracking. Simulation results show that the algorithm can effectively distinguish, locate, and track each person's position under two different occlusion states, and has practical application value.
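A generic constant-velocity Kalman filter of the kind used for the matching and tracking step; the projection-analysis localization and the paper's noise parameters are not reproduced, so the Q and R values below are assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for a 2D target position."""
    def __init__(self, x0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0[0], x0[1], 0.0, 0.0], dtype=float)  # [px, py, vx, vy]
        self.P = np.eye(4) * 10.0
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt      # state transition
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = np.eye(4) * q                                    # assumed process noise
        self.R = np.eye(2) * r                                    # assumed measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

kf = ConstantVelocityKF((10.0, 20.0))
kf.predict(); print(kf.update((11.0, 21.5)))   # filtered position estimate
```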
