Similar Literature
 20 similar documents retrieved; search time 62 ms
1.
This paper describes an on-board vision sensor system developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages in size, weight, and power consumption over other sensors such as radar, sonar, and laser range finders. A vision sensor is also uniquely suited to tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field-programmable gate array (FPGA)-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system uses hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user, and accepting user input such as the specification of targets to track. Because this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications.
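As a rough illustration of the Harris corner response this sensor computes in hardware, here is a minimal software-only NumPy sketch; the 3x3 summation window and k = 0.04 are common defaults assumed here, not details taken from the paper:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, with the
    structure tensor M box-filtered over a 3x3 window."""
    Iy, Ix = np.gradient(img.astype(float))   # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):                              # 3x3 box filter via padding
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic 16x16 image: bright square with its top-left corner at (4, 4).
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
R = harris_response(img)
print(R[4, 4] > 0, R[4, 8] < 0)   # corner: positive response; edge: negative
```

On the synthetic square the response is strongly positive at the corner and negative along the edge, which is the property a feature selector exploits when choosing points to track.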

2.
Conventional approaches that perform all subsequent signal processing directly on CCD-captured images fall short in real-time performance, and the shortfall is especially pronounced in vision systems built around high-resolution CCDs. This paper proposes a fast CCD vision-sensing method comprising dynamic grayscale processing, Sobel post-optimization, an edge-tracking Hough transform, and fast sensor calibration; the method effectively reduces algorithmic complexity and improves the real-time performance of the whole vision-sensing system. Experiments on a self-designed mobile-robot platform built around an ARM9 (S3C2410) processor with a CCD sensor confirm good real-time performance.
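The Sobel stage of such a pipeline can be sketched in plain NumPy as below; this computes only the basic gradient magnitude on the valid region, and the abstract's "post-optimization" step is not reproduced:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel kernels (valid region only,
    so the output is 2 pixels smaller in each dimension)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            win = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

# Vertical step edge between columns 4 and 5 of a 10x10 image.
img = np.zeros((10, 10))
img[:, 5:] = 1.0
mag = sobel_magnitude(img)
print(mag[4, 3], mag[4, 7])   # on the edge vs. in the flat region
```

The edge pixels respond strongly while flat regions give zero, which is what makes the output suitable input for an edge-tracking Hough transform.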

3.
A Vision-Based Global Localization System for Mobile Robots   Cited by: 6 (self-citations: 1, others: 5)
魏芳, 董再励, 孙茂相, 王晓蕾. Robot, 2001, 23(5): 400-403
This paper describes a vision-based global localization system for autonomous mobile-robot navigation. The system consists of active LED landmarks, an omnidirectional vision sensor, and a data-processing system. The paper focuses on methods for raising the processing speed of the omnidirectional images and the reliability and accuracy of landmark recognition, and presents experimental results. The experiments show that vision-based localization is a global navigation and localization technique with clear research value and application prospects.

4.
梁潇, 李原, 梁自泽, 侯增广, 徐德, 谭民. Robot, 2007, 29(5): 0-450
In recent years, vision sensors have found growing use in industrial automation and robot navigation. This paper presents the design and implementation of a vision sensor based on a DSP microprocessor. The sensor captures images of the environment and runs the image-processing algorithms on the DSP core; the decision result is output directly to the control system for execution, avoiding the high-bandwidth communication channel that transmitting large volumes of image data would require. The sensor is compact, performs well in real time, and is highly extensible, and it comes with a support package of commonly used image-processing routines. The paper describes the hardware and software development in detail, and an application on an automatic weld-seam tracking platform verifies that the sensor's overall performance meets practical needs. Future work on the vision sensor is discussed at the end.

5.
A Non-Contact Measurement System for Robot Linear Trajectories   Cited by: 2 (self-citations: 0, others: 2)
郝颖明, 董再励, 刘百川, 周静. Robot, 2002, 24(5): 394-398
This paper presents a vision-based, low-cost, and practical non-contact measurement system for robot linear trajectories. The system consists of a structured-light vision sensor, a measurement rail, a host computer, and supporting software. The vision sensor can be fixed to the robot end-effector; as the robot moves the sensor along the rail in a straight line, the sensor continuously measures its pose relative to the rail, which indirectly describes the end-effector's motion trajectory. Repeating the same linear motion allows the robot's trajectory repeatability to be evaluated. The paper surveys the state of the art in robot linear-trajectory measurement equipment, introduces the measurement principle of the system, focuses on its two key techniques, namely image extraction of spatial feature points and 3D coordinate computation, and describes the system's structure, performance specifications, and measurement results.

6.
汤一平, 姜荣剑, 林璐璐. Computer Science, 2015, 42(3): 284-288, 315
To address the heavy computational load, limited real-time performance, and restricted detection range of existing mobile-robot vision systems, this paper proposes an obstacle-detection method based on an active omnidirectional vision sensor (AODVS). First, a single-viewpoint omnidirectional vision sensor (ODVS) is integrated with a planar laser generator built from four red line lasers arranged in a single plane, so that obstacles around the robot are detected by active panoramic vision. Second, from the laser light projected onto surrounding obstacles, the robot's panoramic perception module recovers the distance and bearing of each obstacle through vision processing. Finally, an omnidirectional obstacle-avoidance strategy based on this information enables the robot to avoid obstacles quickly. Experiments show that the AODVS-based method achieves fast and effective obstacle avoidance while reducing the computational demands placed on the mobile robot.

7.
Design and Implementation of a Robot Vision System Based on Image Segmentation   Cited by: 3 (self-citations: 0, others: 3)
The vision system is a key component of an autonomous robot, and processing visual information accurately and efficiently is its central problem. This paper presents a robot vision system with two main parts: an offline color analyzer and a real-time visual-information processor. The offline analyzer extracts threshold values for each color of interest; the real-time processor then uses those thresholds to segment images, enabling the robot to perceive its current environment accurately.

8.
An Adaptive Threshold Segmentation Method for Target Recognition in Color Images   Cited by: 1 (self-citations: 0, others: 1)
Robot vision systems recognize targets in the environment from cues such as color and shape; the difficulty lies in guaranteeing both robustness and real-time performance. Using a mobile robot as the platform, this paper proposes a real-time target-recognition system based on color learning, together with an algorithm for learning and segmenting target colors. The algorithm segments images with adaptive thresholds that are adjusted for changes in ambient illumination, improving the system's real-time performance and robustness.
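One simple form such an adaptive threshold could take is a per-frame statistic that moves with global illumination. This is an illustrative sketch only, not the authors' algorithm (their method also involves learned color models):

```python
import numpy as np

def adaptive_segment(gray, k=1.0):
    """Per-frame adaptive threshold: T = mean + k*std, so T tracks
    global illumination changes automatically."""
    t = gray.mean() + k * gray.std()
    return gray > t

rng = np.random.default_rng(0)
# Dim background (around gray level 60) with a bright 5x5 target, then
# the same scene under illumination 50 gray levels brighter.
frame = rng.normal(60, 5, (20, 20))
frame[8:13, 8:13] += 120
brighter = frame + 50
m1, m2 = adaptive_segment(frame), adaptive_segment(brighter)
print(m1.sum(), bool(np.array_equal(m1, m2)))   # 25 target pixels, same mask
```

Because the threshold is recomputed from each frame's statistics, a uniform brightness shift leaves the segmentation mask unchanged, which is the robustness property the abstract highlights.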

9.
Addressing the trend toward intelligent, networked robots, this paper designs and optimizes the overall architecture of a robot operating system: it adapts and optimizes the real-time operating-system kernel on which the robot middleware runs, encapsulates the robot's functional components, implements and optimizes the runtime environment for robot scripts, and delivers a robot operating system, together with a visual integrated development and debugging platform, that combines real-time performance, intelligence, and good interactivity.

10.
Design of a Soccer-Robot Vision System under Unstable Lighting Conditions   Cited by: 1 (self-citations: 0, others: 1)
郭成果, 熊蓉, 褚健, 盛宇. Robot, 2006, 28(2): 200-205
The vision system is the main sensor through which a RoboCup F180 small-size soccer-robot system perceives its environment. To improve its real-time performance, accuracy, and robustness, a new color-marker design is proposed, and the confidence of current observations is assessed from historical information. The camera is calibrated and its lens distortion corrected with Tsai's method. These measures raise the recognition rate under unstable lighting as well as the localization accuracy for the robots and the ball. The vision subsystem has been validated in actual competition.
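Tsai's camera model includes a first-order radial distortion term; a minimal sketch of the corresponding undistortion step is below. Normalized image coordinates are assumed, and full Tsai calibration (extrinsics, focal length, and the other intrinsics) is not shown:

```python
def undistort(xd, yd, k1):
    """First-order radial correction from Tsai's model:
    x_u = x_d * (1 + k1 * r^2), with r^2 = x_d^2 + y_d^2
    (xd, yd are normalized image coordinates)."""
    s = 1.0 + k1 * (xd * xd + yd * yd)
    return xd * s, yd * s

# With k1 = 0 the mapping is the identity; with k1 > 0 a point near the
# image corner is pushed outward, compensating barrel distortion.
print(undistort(0.3, 0.4, 0.0))
print(undistort(0.3, 0.4, 0.2))
```

The coefficient k1 is one of the parameters the calibration procedure estimates; once known, every detected marker position can be corrected before localization.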

11.
In this paper, a nonlinear controller design for an omni-directional mobile robot is presented. The robot controller consists of an outer-loop (kinematics) controller and an inner-loop (dynamics) controller, which are both designed using the Trajectory Linearization Control (TLC) method based on a nonlinear robot dynamic model. The TLC controller design combines a nonlinear dynamic inversion and a linear time-varying regulator in a novel way, thereby achieving robust stability and performance along the trajectory without interpolating controller gains. A sensor fusion method, which combines the onboard sensor and the vision system data, is employed to provide accurate and reliable robot position and orientation measurements, thereby reducing the wheel slippage induced tracking error. A time-varying command filter is employed to reshape an abrupt command trajectory for control saturation avoidance. The real-time hardware-in-the-loop (HIL) test results show that with a set of fixed controller design parameters, the TLC robot controller is able to follow a large class of 3-degrees-of-freedom (3DOF) trajectory commands accurately.
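The command filter described above can be as simple as a first-order lag that bounds the commanded rate of an abrupt step. The sketch below illustrates that idea only; the dt and tau values are arbitrary, and the paper's time-varying filter is not reproduced:

```python
def filter_command(raw, dt=0.01, tau=0.2):
    """First-order lag filter: y' = (r - y)/tau, discretized with step dt.
    A step input becomes an exponential whose slope never exceeds step/tau."""
    y = raw[0]
    out = [y]
    for r in raw[1:]:
        y += (dt / tau) * (r - y)
        out.append(y)
    return out

# Abrupt unit-step command, sampled at 100 Hz for one second.
step = [0.0] + [1.0] * 100
smooth = filter_command(step)
max_rate = max(b - a for a, b in zip(smooth, smooth[1:])) / 0.01
print(round(smooth[-1], 3), round(max_rate, 2))   # → 0.994 5.0
```

A raw step would demand an unbounded rate; the filtered command's rate peaks at roughly step/tau, which can be chosen below the actuator saturation limit.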

12.
Abstract: A real-time visual servo tracking system for an industrial robot has been developed. Instead of a charge coupled device (CCD), a position sensitive detector (PSD) is used as the real-time vision sensor due to its fast response (the light position is transduced to analogue current). A neural network learns the complex association between the 3D object position and its sensor reading, and uses it to track that object, either moving or stationary. It also turns out that this scheme lends itself to a user-friendly way to teach workpaths for industrial robots. Furthermore, for real-time use of the neural net, an efficient neural network architecture has been developed based on the concept of input space partitioning and local learning. Real experiments indicate the system's characteristics of fast processing and learning as well as optimal usage of network resources.

13.
Automatic sensor placement for model-based robot vision   Cited by: 2 (self-citations: 0, others: 2)
This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage viewpoints. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, and parameter adjustment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
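For the shortest-path step the paper uses the Christofides algorithm; as a much simpler stand-in, a nearest-neighbor tour through the planned viewpoints can be sketched in a few lines. This greedy version is illustrative only and carries no approximation guarantee:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy tour through the viewpoints: always move to the closest
    unvisited one (a simple stand-in for the Christofides step)."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Four viewpoints at the corners of a unit square: the greedy tour
# walks around the perimeter instead of cutting across a diagonal.
viewpoints = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(viewpoints)
print(tour)
```

Christofides improves on this greedy scheme by guaranteeing a tour no longer than 1.5 times optimal for metric distances, which matters when sensor repositioning dominates inspection time.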

14.
When a vision sensor is used in conjunction with a robot, hand-eye calibration is necessary to determine the accurate position of the sensor relative to the robot. This is necessary to allow data from the vision sensor to be defined in the robot's global coordinate system. For 2D laser line sensors hand-eye calibration is a challenging process because they only collect data in two dimensions. This leads to the use of complex calibration artefacts and requires multiple measurements be collected, using a range of robot positions. This paper presents a simple and robust hand-eye calibration strategy that requires minimal user interaction and makes use of a single planar calibration artefact. A significant benefit of the strategy is that it uses a low-cost, simple and easily manufactured artefact; however, the lower complexity can lead to lower variation in calibration data. In order to achieve a robust hand-eye calibration using this artefact, the impact of robot positioning strategies is considered to maintain variation. A theoretical basis for the necessary sources of input variation is defined by a mathematical analysis of the system of equations for the calibration process. From this, a novel strategy is specified to maximize data variation by using a circular array of target scan lines to define a full set of required robot positions. A simulation approach is used to further investigate and optimise the impact of robot position on the calibration process, and the resulting optimal robot positions are then experimentally validated for a real robot mounted laser line sensor. Using the proposed optimum method, a semi-automatic calibration process, which requires only four manually scanned lines, is defined and experimentally demonstrated.

15.
16.
This paper presents a way of implementing a model-based predictive controller (MBPC) for mobile robot navigation when unexpected static obstacles are present in the robot environment. The method uses a nonlinear model of mobile robot dynamics, and thus allows an accurate prediction of the future trajectories. An ultrasonic ranging system has been used for obstacle detection. A multilayer perceptron is used to implement the MBPC, allowing real-time implementation and also eliminating the need for high-level data sensor processing. The perceptron has been trained in a supervised manner to reproduce the MBPC behaviour. Experimental results obtained when applying the neural-network controller to a TRC Labmate mobile robot are given in the paper.

17.
胡庆茂, 陈锦江. Robot, 1989, 3(2): 47-50
This paper examines the inherent contradictions of robot vision and analyzes the current state of robot-vision methods, concluding that existing methods struggle both to supply the information robot manipulation requires and to reconcile real-time performance with generality. Based on this analysis, a new 3D vision system is conceived with the aim of resolving these inherent problems.

18.
A real-time visual servo tracking system for an industrial robot has been implemented using PSD (Position Sensitive Detector) cameras, neural networks, and an extended trapezoidal motion planning method. A PSD directly transduces the light's projected position on its sensor plane into an analog current and therefore lends itself to fast real-time tracking. A neural network, after proper training, transforms the PSD sensor reading into a 3D position of the target, which is then input to an extended trapezoidal motion planning algorithm. This algorithm implements a continuous motion update strategy in response to ever-changing sensor information from the moving target, while greatly reducing the tracking delay. This planning method is found to be very useful for sensor-based control such as moving target tracking or weld-seam tracking, in which the robot needs to change its motion in real time in response to incoming sensor information. Further, for real-time usage of the neural net, a new architecture called LANN (Locally Activated Neural Network) has been developed based on the concept of CMAC input partitioning and local learning. Experimental evidence shows that an industrial robot can smoothly track a moving target of unknown motion with speeds of up to 1 m/s and with oscillation frequency up to 5 Hz.
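The trapezoidal part of the motion planning can be illustrated by computing the three phase durations of a point-to-point move. This is a textbook sketch under constant-limit assumptions; the paper's "extended" method additionally updates the plan continuously from sensor data:

```python
def trapezoid_times(dist, v_max, a_max):
    """Phase durations (accel, cruise, decel) of a trapezoidal velocity
    profile covering `dist`; short moves degenerate to a triangle."""
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2          # distance spent accelerating
    if 2 * d_acc >= dist:                      # v_max is never reached
        t_acc = (dist / a_max) ** 0.5
        return t_acc, 0.0, t_acc
    t_cruise = (dist - 2 * d_acc) / v_max
    return t_acc, t_cruise, t_acc

# 1 m move at v_max = 0.5 m/s, a_max = 1 m/s^2:
# 0.5 s accelerating (0.125 m), 1.5 s cruising (0.75 m), 0.5 s braking.
print(trapezoid_times(1.0, 0.5, 1.0))   # → (0.5, 1.5, 0.5)
```

Replanning against a moving target amounts to recomputing these durations each time a new PSD-derived target position arrives, rather than once per move.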

19.
This paper presents a robust bin-picking system utilizing tactile sensors and a vision sensor. The object position and orientation are estimated using a fast template-matching method through the vision sensor. When a robot picks up an object, the tactile sensors detect the success or failure of the grasp, and a force sensor detects contact with the environment. A weight sensor is also used to judge whether the object has been lifted successfully. The robust and efficient bin-picking system presented herein is implemented through the integration of these different sensors. In particular, the tactile sensors enable the picking of rope-shaped objects, which conventional picking systems have not achieved. The effectiveness of the proposed method was confirmed through grasping experiments and in a competitive event at the World Robot Challenge 2018.
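A brute-force version of template matching by sum of squared differences (SSD) shows the idea behind the pose-estimation step; the paper's "fast" matcher is certainly more elaborate than this sketch, which is illustrative only:

```python
import numpy as np

def match_template(image, tmpl):
    """Exhaustive SSD template matching: return the (row, col) offset
    where the template best matches the image."""
    H, W = image.shape
    h, w = tmpl.shape
    best, pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((image[r:r + h, c:c + w] - tmpl) ** 2)
            if ssd < best:
                best, pos = ssd, (r, c)
    return pos

rng = np.random.default_rng(1)
image = rng.random((30, 30))
tmpl = image[12:17, 7:12].copy()     # cut the template out of the image
print(match_template(image, tmpl))   # → (12, 7)
```

Practical matchers speed this up with image pyramids, integral images, or frequency-domain correlation, and typically add a rotation search to recover orientation as well as position.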

20.
This paper describes a laser-based computer vision system used for automatic fruit recognition. It is based on an infrared laser range-finder sensor that provides range and reflectance images and is designed to detect spherical objects in non-structured environments. Image-analysis algorithms integrate both range and reflectance information to generate four characteristic primitives that give evidence of the existence of spherical objects. The output of this vision system includes the 3D position, radius, and surface reflectivity of each spherical object. It has been applied to the AGRIBOT orange-harvesting robot, where it has achieved good fruit-detection rates with few false detections.
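Once candidate surface points for a fruit are available from the range image, its 3D position and radius can be recovered with a linear least-squares sphere fit. The sketch below shows that final estimation step only; the paper's four primitives and detection logic are not reproduced:

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit: |p|^2 = 2 c·p + (r^2 - |c|^2)
    is linear in the unknowns (2c, r^2 - |c|^2)."""
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = float(np.sqrt(sol[3] + centre @ centre))
    return centre, radius

# 200 noiseless points on a 4 cm sphere centred at (0.5, 0.2, 0.3) m.
rng = np.random.default_rng(2)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([0.5, 0.2, 0.3]) + 0.04 * d
centre, radius = fit_sphere(pts)
print(np.round(centre, 3), round(radius, 3))
```

Because the fit is linear, it needs no initial guess and remains cheap enough for per-fruit use on an embedded harvesting platform.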


Copyright©北京勤云科技发展有限公司  京ICP备09084417号