Similar Documents
20 similar documents found; search time: 109 ms
1.
This paper presents a method for localizing targets during a soccer match for RoboCup Middle Size League robots equipped with omni-directional vision. A transformation relation for the mirror projection of the omni-directional camera is derived; the method is simple and effective and provides the model needed for image processing under omni-directional vision. Experimental results demonstrate the effectiveness of the method.
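The transformation in entry 1 depends on the specific mirror geometry, which the abstract does not give. As a rough, hypothetical Python sketch of the same idea, a calibrated radial function can map pixel distance from the image center to ground distance, with the pixel azimuth giving the bearing; the polynomial coefficients and image center below are placeholders, not values from the paper.

import numpy as np

# Placeholder calibration: ground distance (m) as a polynomial in pixel radius.
# In practice these coefficients come from measuring markers at known distances.
RADIAL_POLY = np.array([2.5e-6, 1.2e-3, 0.0])   # highest power first, for np.polyval
IMAGE_CENTER = (320.0, 240.0)

def pixel_to_field(u, v, robot_pose):
    """Map an image point (u, v) to field coordinates, given the robot's
    pose (x, y, heading in radians) and a calibrated radial mapping."""
    dx, dy = u - IMAGE_CENTER[0], v - IMAGE_CENTER[1]
    r_pix = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx)                  # direction in the robot frame
    dist = np.polyval(RADIAL_POLY, r_pix)         # ground distance from the robot
    x, y, heading = robot_pose
    return (x + dist * np.cos(heading + bearing),
            y + dist * np.sin(heading + bearing))

print(pixel_to_field(400, 300, (0.0, 0.0, 0.0)))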

2.
王宇  王勇  徐心和 《微计算机信息》2006,22(34):259-264
A calibration method for omni-directional vision systems is studied by exploiting the properties of the central catadioptric projection of spatial straight lines. Ray tracing is used to extract points on the catadioptric image of a spatial line, and a least-squares conic fitting method is proposed and applied to fitting the line's catadioptric image; from the fit, the imaging parameters of the system are solved, completing the calibration of the omni-directional vision system. Calibration experiments on a real omni-directional vision system confirm the practicality of the method.
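Entry 2 fits a conic to points sampled from a line's catadioptric image by least squares. A minimal Python sketch of such a fit (not the authors' implementation): the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 is estimated as the smallest right singular vector of the design matrix.

import numpy as np

def fit_conic(points):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to 2D
    points by homogeneous least squares (smallest singular vector)."""
    x, y = points[:, 0], points[:, 1]
    # Design matrix: each row is [x^2, x*y, y^2, x, y, 1].
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(D)
    return vt[-1]  # (a, b, c, d, e, f), defined up to scale

# Example: noisy points sampled from the circle x^2 + y^2 = 25.
theta = np.linspace(0, 2 * np.pi, 50)
pts = np.column_stack([5 * np.cos(theta), 5 * np.sin(theta)]) + 0.01 * np.random.randn(50, 2)
print(fit_conic(pts))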

3.
A visual tracking method for soccer robots based on action-vision coordination (cited: 2; self-citations: 2; citations by others: 2)
The article proposes a visual tracking method for soccer robots based on action-vision coordination; it tracks accurately and adapts well to varying lighting conditions. Experiments show that the method is simple and effective. It has been applied in a university's Mirosot robot soccer system and achieved good results in competition.

4.
In robot soccer matches, the number of target colors that must be recognized keeps increasing, which easily leads to recognition errors. To ease color-marker recognition in robot soccer, a new vision system design is introduced, together with a collision-avoidance method based on an assumed robot footprint: the robots only need to recognize the team marker, which reduces the number of color samples to identify while improving the accuracy of target tracking. The paper describes each stage of the design. Experiments show that the design is reasonable, effective, and robust.

5.
In FIRA MiroSot robot soccer, the vision system is the only way to obtain the positions of the robots and the ball on the field, and its recognition speed and accuracy directly affect the outcome of a match. To address the insufficient positional accuracy of conventional vision systems in robot soccer, a design is proposed that applies the morphological erosion/dilation operations to the vision system's real-time images to increase recognition accuracy. Experimental results show that the scheme greatly improves recognition accuracy without slowing down recognition during matches.
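Entry 5 cleans the color-segmented image with morphological erosion and dilation. A minimal sketch with OpenCV, assuming a binary mask produced by color thresholding; the threshold values are illustrative placeholders, not the paper's.

import cv2
import numpy as np

def clean_mask(mask, kernel_size=3):
    """Remove speckle noise in a binary color mask using morphological
    erosion followed by dilation (an opening)."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    eroded = cv2.erode(mask, kernel, iterations=1)      # drop isolated noise pixels
    return cv2.dilate(eroded, kernel, iterations=1)     # restore blob size

# Hypothetical usage: threshold an orange ball in HSV, then clean the mask.
frame = np.zeros((480, 640, 3), np.uint8)               # stands in for a camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
raw_mask = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))   # placeholder thresholds
mask = clean_mask(raw_mask)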

6.
In robot soccer, the information in the robot's surroundings is encoded with special colors. Building on an analysis of color-threshold segmentation, an improved color-threshold segmentation method is proposed for varying lighting conditions, enabling both the omni-directional vision and the forward monocular vision to recognize targets of different colors on the field. On this basis, target localization is performed with both the forward monocular vision and the omni-directional vision, and a Kalman filter is used to fuse the two, yielding more accurate target position information. Experimental results...
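Entry 6 fuses the position estimates from the omni-directional and the forward monocular vision with a Kalman filter. A hypothetical sketch under a constant-position model; the noise covariances are assumed values, not the paper's.

import numpy as np

class PositionFuser:
    """Constant-position Kalman filter that fuses 2D position measurements
    from two sensors (e.g. omni-directional and forward monocular vision)."""

    def __init__(self, q=0.01, r_omni=0.05, r_mono=0.02):
        self.x = np.zeros(2)          # fused position estimate
        self.P = np.eye(2) * 1e3      # large initial uncertainty
        self.Q = np.eye(2) * q        # process noise (target may move)
        self.R = {"omni": np.eye(2) * r_omni, "mono": np.eye(2) * r_mono}

    def update(self, z, source):
        # Predict: position assumed roughly constant, uncertainty grows by Q.
        self.P = self.P + self.Q
        # Correct with measurement z from the given source (H = I).
        R = self.R[source]
        K = self.P @ np.linalg.inv(self.P + R)   # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P
        return self.x

fuser = PositionFuser()
print(fuser.update([1.02, 0.48], "omni"))
print(fuser.update([0.99, 0.51], "mono"))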

7.
Research on perception modeling of virtual humans in soccer match simulation (cited: 1; self-citations: 0; citations by others: 1)
王静  姜昱明 《计算机仿真》2008,25(1):184-187,277
Perception modeling is an important part of behavior modeling for virtual humans in soccer match simulation and directly affects their behavioral decisions. Based on the attention-focusing patterns and visual physiology of virtual players, an intelligent perception model built on an attention-focusing mechanism is proposed, comprising virtual vision, virtual hearing, and a perception focuser. The virtual vision model is built on a geometric vision model: the field-of-view perceptor is designed with bounding-box techniques, and the obstacle perceptor uses the artificial potential field method to compute the viewing direction under collision avoidance; the perception focuser is designed according to the behavior of players in a soccer match. Experimental results show that the model is feasible.
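Entry 7 computes a collision-avoiding direction with the artificial potential field method. A minimal sketch of the classic attractive/repulsive formulation; the gains and influence distance are made-up parameters, not the paper's.

import numpy as np

def potential_field_direction(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=2.0):
    """Return a unit heading vector combining an attractive force toward the goal
    with repulsive forces from obstacles closer than the influence distance d0."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attractive component
    for obs in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # Repulsion grows as the obstacle gets closer than d0.
            force += k_rep * (1.0 / d - 1.0 / d0) / (d ** 2) * (diff / d)
    n = np.linalg.norm(force)
    return force / n if n > 1e-9 else np.zeros(2)

print(potential_field_direction([0, 0], [5, 0], [[2, 0.3]]))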

8.
In FIRA MiroSot robot soccer, the vision system is the only way for the match system to obtain environment information, and its recognition speed and accuracy directly affect the outcome of a match. To overcome the drawbacks of conventional vision systems in robot soccer, a new soccer-robot vision system design based on multi-resolution analysis and the FCM algorithm is proposed. Experimental results show that the design improves both recognition speed and accuracy during matches and adapts well to different conditions.

9.
Under the rules of the RoboCup NAO robot soccer league, the robot recognizes objects and the environment on the field through color. The article analyzes the strengths and weaknesses of the RGB color space and adopts HSV as the color space of the vision system, reducing the influence of on-site lighting changes. A method for detecting a red ball in the HSV color space is proposed, followed by a simple red-ball tracking strategy; compared with the measured performance of range-based tracking, this strategy proves better suited to real application scenarios.
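Entry 9 thresholds in HSV to detect a red ball. A minimal OpenCV sketch: since red hue wraps around 0 in OpenCV's 0-179 hue scale, two ranges are combined; the threshold values are illustrative assumptions, not the paper's.

import cv2
import numpy as np

def find_red_ball(frame_bgr):
    """Return the (x, y) center and radius of the largest red blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red hue wraps around 0 in OpenCV's 0-179 range, so combine two bands.
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (179, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    (x, y), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    return (x, y), r

print(find_red_ball(np.zeros((480, 640, 3), np.uint8)))  # None on an empty frame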

10.
The vision system is a key part of how an autonomous robot perceives its external environment, and color segmentation is the first step of image processing in the vision system. The method currently regarded as most effective is the color lookup table method. Taking four-legged robot soccer as the background and research platform, this paper proposes a fast way to build the color table, applies image-enhancement techniques in the real-time environment, improves the color segmentation algorithm, and increases the robustness of the robot vision system. The approach was used in the 2006 RoboCup China Open and achieved good results.
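Entry 10 segments colors through a lookup table. A minimal sketch that builds a table indexed by quantized RGB values and classifies a frame with one lookup per pixel; the quantization depth and class labels are assumptions, not the paper's.

import numpy as np

# Color lookup table indexed by 5-bit-quantized RGB; 0 means "unknown".
LUT_BITS = 5
lut = np.zeros((1 << LUT_BITS,) * 3, dtype=np.uint8)

def train_lut(pixels, label):
    """Mark the table cells of the given training pixels with a class label
    (e.g. 1 = ball orange, 2 = field green)."""
    q = pixels >> (8 - LUT_BITS)                      # quantize 8-bit channels
    lut[q[:, 0], q[:, 1], q[:, 2]] = label

def segment(image):
    """Classify every pixel of an HxWx3 uint8 image with one table lookup each."""
    q = image >> (8 - LUT_BITS)
    return lut[q[..., 0], q[..., 1], q[..., 2]]

# Hypothetical usage: a few hand-labeled orange pixels, then segment a frame.
train_lut(np.array([[250, 120, 30], [240, 110, 25]], dtype=np.uint8), label=1)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
labels = segment(frame)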

11.
The algorithms for unwrapping panoramic images and the implementation of image stabilization are analyzed in depth, and an image stabilization method for panoramic camera systems is proposed: an electronic stabilization algorithm based on gray-level projection. The hardware of the system consists of the panoramic optics, a CCD, an image capture card, and a computer; the software is built on the DirectShow platform and implemented in VC++. Analysis of the results shows that the algorithm is fast and accurate, and is a practical way to achieve electronic image stabilization for a panoramic camera system.
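Entry 11 estimates inter-frame motion from gray-level projections: row and column intensity profiles of consecutive frames are compared to find the translation. A minimal sketch of that idea; the search range and error measure are my choices, not the paper's.

import numpy as np

def projection_shift(prev_gray, curr_gray, max_shift=15):
    """Estimate the (dy, dx) translation between two grayscale frames by
    comparing their row and column intensity projections."""
    def best_shift(p_prev, p_curr):
        best, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            a = p_prev[max(0, s):len(p_prev) + min(0, s)]
            b = p_curr[max(0, -s):len(p_curr) + min(0, -s)]
            err = np.mean((a - b) ** 2)          # mean squared difference
            if err < best_err:
                best, best_err = s, err
        return best

    rows_prev, rows_curr = prev_gray.mean(axis=1), curr_gray.mean(axis=1)
    cols_prev, cols_curr = prev_gray.mean(axis=0), curr_gray.mean(axis=0)
    return best_shift(rows_prev, rows_curr), best_shift(cols_prev, cols_curr)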

12.
《Advanced Robotics》2013,27(3):205-220
In this paper, we describe a visual servoing system developed as a human-robot interface to drive a mobile robot toward any chosen target. An omni-directional camera is used to obtain a 360° field of view, and an efficient tracking technique is developed to track the target. The use of the omni-directional geometry eliminates many of the problems common in visual tracking and makes visual servoing a practical alternative for robot-human interaction. The experiments demonstrate that it is an effective and robust way to guide a robot. In particular, the experiments show robustness of the tracker to loss of template, vehicle motion, and change in scale and orientation.

13.
Cylindrical panoramic depth estimation based on stereo catadioptric omni-directional imaging (cited: 1; self-citations: 0; citations by others: 1)
Targeting a new stereo catadioptric omni-directional imaging system designed on stereo vision principles and a fast local gray-level correlation matching algorithm for corresponding points in stereo cylindrical panorama pairs, effective depth information is extracted from the captured omni-directional stereo imagery to assist object detection and tracking in omni-directional video analysis. A coaxial catadioptric omni-directional stereo imaging device is built from a single camera and two paraboloidal mirrors with different parameters; the captured raw omni-directional stereo pair, which exhibits a certain disparity, is unwrapped into a stereo cylindrical panorama pair, after which dense depth information is extracted by a dedicated correspondence matching algorithm. The matching algorithm uses a local-region gray-level correlation operator and makes full use of bidirectional matching and the epipolar constraint of the cylindrical panorama to improve matching speed and accuracy. Simulation experiments effectively recovered scene depth, demonstrating the validity of the overall device design and the depth estimation method.
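Entry 13 matches corresponding points with local gray-level correlation under the panorama's epipolar constraint. A minimal sketch that searches along a single column of the unwrapped pair (the epipolar line of a coaxial configuration, which is my assumption here) using normalized cross-correlation; window size and search range are placeholders.

import numpy as np

def match_along_column(top, bottom, col, row, half=5, search=40):
    """Find the row in `bottom` best matching the (row, col) patch of `top`
    by normalized cross-correlation, searching along the same column."""
    patch = top[row - half:row + half + 1, col - half:col + half + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_row, best_score = row, -np.inf
    for r in range(row, min(row + search, bottom.shape[0] - half - 1)):
        cand = bottom[r - half:r + half + 1, col - half:col + half + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        score = float((patch * cand).mean())       # normalized cross-correlation
        if score > best_score:
            best_row, best_score = r, score
    return best_row, best_score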

14.
Constructing Virtual Cities by Using Panoramic Images (cited: 1; self-citations: 0; citations by others: 1)
Simultaneously acquired omni-directional images contain rays of 360 degree viewing directions. To take advantage of this unique characteristic, we have been developing several methods for constructing virtual cities. In this paper, we first describe a system to generate the appearance of a virtual city; the system, which is based on image-based rendering (IBR) techniques, utilizes the characteristics of omni-directional images to reduce the number of samplings required to construct such IBR images. We then describe a method to add geometric information to the IBR images; this method is based on the analysis of a sequence of omni-directional images. Then, we describe a method to seamlessly superimpose a new building model onto a previously created virtual city image; the method enables us to estimate illumination distributions by using an omni-directional camera. Finally, to demonstrate the methods' effectiveness, we describe how we implemented and applied them to urban scenes.

15.
王媛媛  陈旺  张茂军  王炜  徐玮 《计算机应用》2011,31(9):2477-2480
A building-height extraction algorithm based on registering catadioptric omni-directional images with remote-sensing images is proposed, applicable to large-scale 3D city reconstruction. First, an omni-directional Hough transform is used to extract the top boundary lines of buildings in the catadioptric omni-directional image; then, based on the extracted boundary lines and the angular invariance of the omni-directional imaging of horizontal spatial lines, the catadioptric omni-directional image is registered with the remote-sensing image; finally, using the registration result, the building height is computed from the catadioptric omni-directional imaging model. Experimental results show that the method is simple, practical, and accurate, with small errors.

16.
《Advanced Robotics》2013,27(12):1369-1391
This paper presents an omni-directional mobile microrobot for micro-assembly in a micro-factory. A novel structure is designed for omni-directional movement with three normal wheels. The millimeter-sized microrobot is actuated by four electromagnetic micromotors whose size is 3.1 mm × 3.1 mm × 1.4 mm. Three of the micromotors are for translation and the other one is for steering. The micromotor rotors are designed as the wheels to reduce the microrobot volume. A piezoelectric micro-gripper is fabricated for grasping micro-parts. The corresponding kinematics matrix is analyzed to prove the omni-directional mobility. A control system composed of two CCD cameras, a host computer and circuit board is designed. The macro camera is for a global view and the micro camera is for local supervision. Unique location methods are proposed for different scenarios. A microstep control approach for the micromotors is presented to satisfy the requirement of high positioning accuracy. The experiment demonstrates the mobility of the microrobot and the validity of the control system.

17.
We introduce a generic structure-from-motion approach based on a previously introduced, highly general imaging model in which cameras are modeled as possibly unconstrained sets of projection rays. This makes it possible to describe most existing camera types, including pinhole cameras, sensors with radial or more general distortions, catadioptric cameras (central or non-central), etc. We introduce a structure-from-motion approach for this general imaging model that allows scenes to be reconstructed from calibrated images, possibly taken by cameras of different types (cross-camera scenarios). Structure-from-motion is naturally handled via camera-independent ray intersection problems, solved via linear or simple polynomial equations. We also propose two approaches for obtaining optimal solutions using bundle adjustment, where camera motion, calibration and 3D point coordinates are refined simultaneously. The proposed methods are evaluated via experiments on two cross-camera scenarios: a pinhole camera used together with an omni-directional camera, and a stereo system used with an omni-directional camera.
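Entry 17 reconstructs 3D points as intersections of calibrated projection rays, regardless of camera type. A minimal sketch of least-squares ray intersection (my illustration of the general idea, not the paper's solver):

import numpy as np

def intersect_rays(origins, directions):
    """Least-squares 3D point closest to a set of rays, each defined by an
    origin o_i and unit direction d_i:
    minimize sum_i || (I - d_i d_i^T)(x - o_i) ||^2."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two rays that should meet near (1, 1, 0).
print(intersect_rays([[0, 0, 0], [2, 0, 0]], [[1, 1, 0], [-1, 1, 0]]))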

18.
A novel approach to location estimation by omni-directional vision for autonomous vehicle navigation in indoor environments using circular landmark information is proposed. A circular landmark is attached to a ceiling, and an omni-directional camera mounted on the vehicle takes upward-looking omni-directional images of the landmark. This way of taking images reduces landmark occlusion and image noise caused by nearby objects or people surrounding the vehicle. It is shown that the perspective shape of the circular landmark in the omni-directional image may be approximated by an ellipse using analytic formulas, with good shape-fitting accuracy and fast computation. The parameters of the ellipse are then used to estimate the location of the vehicle with good precision for navigation guidance. Both simulated and real images were tested, and good experimental results confirm the feasibility of the proposed approach.

19.
A coaxial catadioptric omni-directional stereo imaging device based on a single camera and two paraboloidal mirrors is designed, and a corresponding-point matching method for depth estimation on the unwrapped cylindrical panoramic stereo pair is given; finally, simulation experiments with a virtual device and virtual scene built in 3D Max provide preliminary evidence of the effectiveness of the structural design and the corresponding depth estimation method.

20.
An autonomous mobile robot must be able to navigate in an unknown environment, and the simultaneous localization and map building (SLAM) problem is central to this ability. Vision sensors are attractive for an autonomous mobile robot because they are information-rich and place few restrictions on applications. However, many vision-based SLAM methods using a general pin-hole camera suffer from variation in illumination and from occlusion, because they mostly extract corner points for the feature map. Moreover, due to the narrow field of view of the pin-hole camera, they are not adequate for high-speed camera motion. To solve these problems, this paper presents a new SLAM method which uses vertical lines extracted from an omni-directional camera image and horizontal lines from range sensor data. Thanks to the large field of view of the omni-directional camera, features remain in the image long enough to estimate the pose of the robot and the features more accurately. Furthermore, since the proposed SLAM uses lines rather than corner points as features, it reduces the effect of illumination and partial occlusion. Moreover, we use not only the lines at the corners of walls but also many other vertical lines at doors, columns and information panels on the wall which cannot be extracted by a range sensor. Finally, since we use the horizontal lines to estimate the positions of the vertical line features, no camera calibration is required. Experimental work based on MORIS, our mobile robot test bed, moving at a human's pace in a real indoor environment verifies the efficacy of this approach.
