20 similar documents found (search time: 62 ms)
1.
This paper describes an on-board vision sensor system developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages over sensors such as radar, sonar, and laser range finders, including size, weight, and power consumption. A vision sensor is also uniquely suited for tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field programmable gate array (FPGA)-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system utilizes hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user, and allowing user input such as specifying targets to track. Since this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications.
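As a software illustration of the Harris corner response that this abstract says is implemented in FPGA hardware, here is a minimal sketch. The central-difference gradients, 3x3 box window, and k = 0.04 are common textbook choices assumed for illustration, not details taken from the paper:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel.

    Software sketch of the detector the FPGA implements in hardware;
    gradient operator and window choices are illustrative assumptions.
    """
    img = img.astype(np.float64)
    # Central-difference image gradients.
    Ix = np.zeros_like(img)
    Iy = np.zeros_like(img)
    Ix[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    Iy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    # Structure-tensor entries, smoothed with a simple 3x3 box window.
    def box(a):
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A single bright pixel gives a corner-like positive response peak.
img = np.zeros((9, 9))
img[4, 4] = 1.0
R = harris_response(img)
```

Hardware versions typically replace the floating-point window sum with fixed-point shift-and-add logic, which is why this detector maps well onto an FPGA.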
2.
3.
4.
In recent years, vision sensors have found increasing use in industrial automation and robot navigation. This paper presents the design and implementation of a vision sensor based on a DSP microprocessor. The sensor captures images of the environment and runs the image-processing algorithms on the DSP core; only the resulting decisions are output directly to the control system, which avoids the high-bandwidth communication channel that transmitting large amounts of raw image data would require. The developed sensor is compact, performs well in real time, and is highly extensible, and a support package of commonly used image-processing software is provided. The paper describes the hardware and software development in detail, and an application on an automatic weld-seam tracking platform verifies that the sensor's overall performance meets practical requirements. Future work on the vision sensor is discussed at the end.
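The bandwidth argument in this abstract can be made concrete with a back-of-envelope comparison between streaming raw frames and sending only on-board decision results. The frame size, frame rate, and decision payload below are illustrative assumptions, not figures from the paper:

```python
# Link bandwidth needed to stream raw frames vs. on-board decisions.
# All parameters are illustrative assumptions.
width, height, bytes_per_pixel = 640, 480, 1   # assumed grayscale VGA
fps = 30                                       # assumed frame rate
decision_bytes = 16                            # assumed result packet size

raw_bps = width * height * bytes_per_pixel * fps * 8        # raw video link
decision_bps = decision_bytes * fps * 8                     # decisions only
reduction = raw_bps / decision_bps                          # bandwidth saving
```

Under these assumptions the raw stream needs roughly 74 Mbit/s while the decision stream needs a few kbit/s, which is the motivation for processing on the DSP itself.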
5.
A non-contact measuring system for robot linear trajectories
A vision-based, low-cost, and practical non-contact measuring system for robot linear trajectories is presented. The system consists of a structured-light vision sensor, a measurement rail, a host computer, and the associated software. The vision sensor can be fixed to the robot end-effector. As the end-effector carries the vision sensor along the measurement rail in a straight line, the sensor continuously measures its pose relative to the rail, which indirectly describes the motion trajectory of the robot end-effector. By repeating the same linear motion, the trajectory repeatability of the robot can be measured. The paper surveys the state of the art in robot linear-trajectory measurement equipment, introduces the measuring principle of the system, and focuses on its two key techniques: image extraction of spatial feature points and computation of their 3D coordinates. The system's structure, performance specifications, and measurement results are also described.
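Reducing the sampled end-effector poses to a straightness figure can be sketched as a 3D line fit followed by a maximum-deviation check. The PCA-based fitting method here is our assumption for illustration, not the paper's stated procedure:

```python
import numpy as np

def straightness_error(points):
    """Fit a 3D line to measured end-effector positions and return the
    maximum perpendicular deviation from it (a straightness figure).
    The PCA line fit is an illustrative choice.
    """
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    # Principal direction of the point cloud = best-fit line direction.
    _, _, vt = np.linalg.svd(P - centroid)
    direction = vt[0]
    rel = P - centroid
    # Perpendicular component of each point w.r.t. the fitted line.
    perp = rel - np.outer(rel @ direction, direction)
    return float(np.linalg.norm(perp, axis=1).max())
```

Running the same motion several times and comparing the fitted lines would give the repeatability figure the abstract mentions.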
6.
To address the high computational cost, limited real-time performance, and restricted detection range of existing mobile-robot vision systems, an obstacle detection method based on an active omnidirectional vision sensor (AODVS) is proposed. First, a single-viewpoint omnidirectional vision sensor (ODVS) is integrated with a planar laser generator composed of four red line lasers arranged in one plane, so that obstacles around the mobile robot can be detected by active panoramic vision. Second, the robot's panoramic perception module analyzes the laser light projected onto surrounding obstacles and recovers, through vision processing, the distance and bearing of each obstacle. Finally, based on this information, an omnidirectional avoidance strategy achieves fast obstacle avoidance. Experimental results show that the AODVS-based obstacle detection method achieves fast and effective avoidance while reducing the computational load on the mobile robot.
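The distance recovery from the projected laser can be sketched as plane-ray triangulation. The exact AODVS geometry is not given in the abstract; this sketch assumes a pinhole model with the laser plane parallel to the ground, and all parameters are illustrative:

```python
import math

def obstacle_range(laser_height_m, pixel_row, principal_row, focal_px):
    """Triangulate horizontal range to a laser spot seen from a single
    viewpoint. Geometry (laser plane parallel to the ground, pinhole
    camera) and all parameter values are illustrative assumptions.
    """
    # Depression angle of the viewing ray for this image row.
    theta = math.atan2(pixel_row - principal_row, focal_px)
    if theta <= 0:
        raise ValueError("laser spot must lie below the principal row")
    # Ray meets the laser plane at this horizontal range.
    return laser_height_m / math.tan(theta)
```

Closer obstacles push the laser spot further from the principal row, so the row coordinate alone encodes range, which is what makes the method cheap to compute.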
7.
Design and implementation of a robot vision system based on image segmentation
The vision system is a key component of an autonomous robot, and processing visual information accurately and efficiently is its central problem. This paper presents a robot vision system with two main parts: an offline color analyzer and a real-time visual information processor. The offline color analyzer extracts threshold values for each color of interest; the real-time processor then uses these thresholds for image segmentation, allowing the robot to perceive its current environment accurately.
8.
An adaptive threshold segmentation method for object recognition in color images
A robot vision system recognizes objects in its environment using color, shape, and similar cues, but guaranteeing both robustness and real-time performance is difficult. Using a mobile robot as the platform, this paper proposes a real-time object recognition system based on color learning, together with an algorithm for learning and segmenting target colors. The algorithm segments images with adaptive thresholds that are adjusted for changes in ambient illumination, improving both the real-time performance and the robustness of the system.
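The color-learning-plus-thresholding idea in this abstract can be sketched as follows. The mean-plus-margin learning rule and the margin value are illustrative assumptions standing in for the paper's actual learning step:

```python
import numpy as np

def learn_thresholds(samples, margin=1.5):
    """Learn per-channel [low, high] thresholds from labeled target-color
    pixels as mean +/- margin * std. Re-running this on freshly labeled
    pixels under new lighting is one way to adapt the thresholds;
    the rule and margin are illustrative assumptions.
    """
    s = np.asarray(samples, dtype=float)
    mu, sd = s.mean(axis=0), s.std(axis=0)
    return mu - margin * sd, mu + margin * sd

def segment(image, low, high):
    """Binary mask of pixels whose channels all fall inside the thresholds."""
    return np.all((image >= low) & (image <= high), axis=-1)

# Learn a reddish target color from a few sample pixels, then segment.
samples = [[200, 10, 10], [210, 20, 15], [190, 15, 12]]
low, high = learn_thresholds(samples)
img = np.array([[[205, 12, 11], [10, 10, 200]]])   # one red, one blue pixel
mask = segment(img, low, high)
```

Because both steps are a handful of vectorized comparisons, this kind of segmentation runs comfortably at frame rate, which is the real-time property the abstract emphasizes.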
9.
Addressing the trend toward intelligent, networked robots, this paper designs and optimizes the overall architecture of a robot operating system. The real-time operating system kernel underlying the robot middleware is adapted and optimized, the robot's functional components are encapsulated, and the runtime environment for robot scripts is implemented and optimized. The result is a robot operating system combining real-time performance, intelligence, and good interactivity, together with a visual integrated development and debugging platform.
10.
11.
In this paper, a nonlinear controller design for an omni-directional mobile robot is presented. The robot controller consists of an outer-loop (kinematics) controller and an inner-loop (dynamics) controller, which are both designed using the Trajectory Linearization Control (TLC) method based on a nonlinear robot dynamic model. The TLC controller design combines a nonlinear dynamic inversion and a linear time-varying regulator in a novel way, thereby achieving robust stability and performance along the trajectory without interpolating controller gains. A sensor fusion method, which combines the onboard sensor and the vision system data, is employed to provide accurate and reliable robot position and orientation measurements, thereby reducing the wheel slippage induced tracking error. A time-varying command filter is employed to reshape an abrupt command trajectory for control saturation avoidance. The real-time hardware-in-the-loop (HIL) test results show that with a set of fixed controller design parameters, the TLC robot controller is able to follow a large class of 3-degrees-of-freedom (3DOF) trajectory commands accurately.
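The outer-loop (kinematics) stage of such a controller ultimately maps a commanded body velocity to wheel speeds. As a minimal sketch of that mapping only (not of TLC itself), here is the inverse kinematics of a generic three-wheeled omni-directional robot; the 120-degree wheel layout and the dimensions are illustrative assumptions, not the paper's platform:

```python
import math

def wheel_speeds(vx, vy, omega, heading,
                 wheel_radius=0.05, robot_radius=0.15):
    """Map a world-frame velocity command (vx, vy, omega) to wheel angular
    velocities for a three-wheeled omni robot. Kinematic mapping only;
    the wheel layout (120 deg apart) and dimensions are assumptions.
    """
    # Rotate the world-frame command into the robot body frame.
    c, s = math.cos(heading), math.sin(heading)
    bx = c * vx + s * vy
    by = -s * vx + c * vy
    speeds = []
    for k in range(3):
        a = 2.0 * math.pi * k / 3.0          # wheel drive-direction angle
        v = -math.sin(a) * bx + math.cos(a) * by + robot_radius * omega
        speeds.append(v / wheel_radius)      # linear rim speed -> rad/s
    return speeds
```

In the paper's scheme, the inner dynamic loop and the time-varying regulator then track the wheel-level commands this kind of mapping produces.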
12.
Abstract: A real-time visual servo tracking system for an industrial robot has been developed. Instead of a charge coupled device (CCD), a position sensitive detector (PSD) is used as the real-time vision sensor due to its fast response (the light position is transduced to analogue current). A neural network learns the complex association between the 3D object position and its sensor reading, and uses it to track that object, either moving or stationary. It also turns out that this scheme lends itself to a user-friendly way to teach workpaths for industrial robots. Furthermore, for real-time use of the neural net, an efficient neural network architecture has been developed based on the concept of input space partitioning and local learning. Real experiments indicate the system's characteristics of fast processing and learning as well as optimal usage of network resources.
13.
Automatic sensor placement for model-based robot vision
S. Y. Chen, Y. F. Li, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2004, 34(1): 393-408
This paper presents a method for automatic sensor placement for model-based robot vision. In such a vision system, the sensor often needs to be moved from one pose to another around the object to observe all features of interest. This allows multiple three-dimensional (3-D) images to be taken from different vantage viewpoints. The task involves determination of the optimal sensor placements and a shortest path through these viewpoints. During the sensor planning, object features are resampled as individual points attached with surface normals. The optimal sensor placement graph is achieved by a genetic algorithm in which a min-max criterion is used for the evaluation. A shortest path is determined by the Christofides algorithm. A Viewpoint Planner is developed to generate the sensor placement plan. It includes many functions, such as 3-D animation of the object geometry, sensor specification, initialization of the viewpoint number and their distribution, viewpoint evolution, shortest path computation, scene simulation of a specific viewpoint, and parameter adjustment. Experiments are also carried out on a real robot vision system to demonstrate the effectiveness of the proposed method.
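The tour-through-viewpoints step can be illustrated with a much simpler heuristic than the Christofides algorithm the paper uses. The nearest-neighbor tour below is only a stand-in to show what "ordering the viewpoints into a short path" means:

```python
import math

def greedy_tour(viewpoints):
    """Order viewpoints with a nearest-neighbor heuristic, starting from
    index 0. The paper uses the Christofides algorithm (which carries an
    approximation guarantee); this greedy tour is a simpler illustration.
    """
    remaining = list(range(1, len(viewpoints)))
    tour = [0]
    while remaining:
        last = viewpoints[tour[-1]]
        nxt = min(remaining, key=lambda i: math.dist(viewpoints[i], last))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour
```

For the small viewpoint counts a sensor-placement plan typically produces, even an exhaustive or 2-opt-refined tour is feasible; Christofides matters when a bounded worst case is required.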
14.
When a vision sensor is used in conjunction with a robot, hand-eye calibration is necessary to determine the accurate position of the sensor relative to the robot. This is necessary to allow data from the vision sensor to be defined in the robot's global coordinate system. For 2D laser line sensors, hand-eye calibration is a challenging process because they only collect data in two dimensions. This leads to the use of complex calibration artefacts and requires that multiple measurements be collected using a range of robot positions. This paper presents a simple and robust hand-eye calibration strategy that requires minimal user interaction and makes use of a single planar calibration artefact. A significant benefit of the strategy is that it uses a low-cost, simple, and easily manufactured artefact; however, the lower complexity can lead to lower variation in calibration data. In order to achieve a robust hand-eye calibration using this artefact, the impact of robot positioning strategies is considered to maintain variation. A theoretical basis for the necessary sources of input variation is defined by a mathematical analysis of the system of equations for the calibration process. From this, a novel strategy is specified to maximize data variation by using a circular array of target scan lines to define a full set of required robot positions. A simulation approach is used to further investigate and optimize the impact of robot position on the calibration process, and the resulting optimal robot positions are then experimentally validated for a real robot-mounted laser line sensor. Using the proposed optimum method, a semi-automatic calibration process, which requires only four manually scanned lines, is defined and experimentally demonstrated.
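One concrete way to realize the "circular array of target scan lines" idea is to lay the scan lines out as evenly spaced diameters of a circle on the planar artefact. The parametrization below is an illustrative assumption, not the paper's exact pose specification:

```python
import math

def diametral_scan_lines(n_lines, radius):
    """Start/end points of n scan lines arranged as diameters of a circle,
    evenly spaced in angle, as one way to spread scan orientations for
    calibration-data variation. The exact parametrization is an
    illustrative assumption.
    """
    lines = []
    for k in range(n_lines):
        a = math.pi * k / n_lines        # diameters repeat after 180 degrees
        p = (radius * math.cos(a), radius * math.sin(a))
        q = (-p[0], -p[1])
        lines.append((p, q))
    return lines
```

Spacing the angles over 180 degrees rather than 360 avoids duplicating a diameter with its own reverse, so every line contributes a genuinely new orientation to the calibration equations.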
15.
16.
Mobile robot navigation in a partially structured static environment, using neural predictive control
This paper presents a way of implementing a model-based predictive controller (MBPC) for mobile robot navigation when unexpected static obstacles are present in the robot environment. The method uses a nonlinear model of mobile robot dynamics, and thus allows an accurate prediction of the future trajectories. An ultrasonic ranging system has been used for obstacle detection. A multilayer perceptron is used to implement the MBPC, allowing real-time implementation and also eliminating the need for high-level data sensor processing. The perceptron has been trained in a supervised manner to reproduce the MBPC behaviour. Experimental results obtained when applying the neural-network controller to a TRC Labmate mobile robot are given in the paper.
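The controller the perceptron is trained to reproduce can be sketched as a candidate-evaluation loop: simulate each candidate command through a motion model and score it against goal progress and obstacle clearance. Everything here (the unicycle model, one-step horizon, candidate set, and cost weights) is an illustrative assumption, not the paper's formulation:

```python
import math

def mbpc_step(pose, goal, obstacles, dt=0.2):
    """One step of a model-based predictive controller: predict each
    candidate (v, w) command with a unicycle model, penalize low obstacle
    clearance, and return the best command. Candidate set, horizon, and
    weights are illustrative assumptions; the paper trains an MLP to
    reproduce such a controller's behaviour.
    """
    x, y, th = pose
    best, best_cost = None, float("inf")
    for v in (0.1, 0.3, 0.5):                     # candidate linear speeds
        for w in (-1.0, -0.5, 0.0, 0.5, 1.0):     # candidate turn rates
            # Nonlinear model prediction of the next position.
            nth = th + w * dt
            nx = x + v * math.cos(nth) * dt
            ny = y + v * math.sin(nth) * dt
            clearance = min((math.dist((nx, ny), o) for o in obstacles),
                            default=float("inf"))
            cost = math.dist((nx, ny), goal)      # progress-to-goal term
            if clearance < 0.5:
                cost += 5.0 / clearance           # obstacle-proximity term
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best
```

Replacing this online search with a trained perceptron is what gives the paper's controller its real-time property: the network answers in one forward pass instead of re-optimizing each cycle.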
17.
This paper examines the inherent contradictions of robot vision and analyzes the current state of robot vision methods, concluding that existing methods struggle to provide the information robot manipulation requires and to balance real-time performance with generality. Based on this analysis, a new 3D vision system is conceived with the aim of solving these inherent problems of robot vision.
18.
Real-Time Dynamic Visual Tracking Using PSD Sensors and Extended Trapezoidal Motion Planning
A real-time visual servo tracking system for an industrial robot has been implemented using PSD (Position Sensitive Detector) cameras, neural networks, and an extended trapezoidal motion planning method. A PSD directly transduces the light's projected position on its sensor plane into an analog current, which lends itself to fast real-time tracking. A neural network, after proper training, transforms the PSD sensor reading into a 3D position of the target, which is then input to an extended trapezoidal motion planning algorithm. This algorithm implements a continuous motion update strategy in response to ever-changing sensor information from the moving target, while greatly reducing the tracking delay. This planning method is found to be very useful for sensor-based control, such as moving-target tracking or weld-seam tracking, in which the robot needs to change its motion in real time in response to incoming sensor information. Further, for real-time use of the neural net, a new architecture called LANN (Locally Activated Neural Network) has been developed based on the concept of CMAC input partitioning and local learning. Experimental evidence shows that an industrial robot can smoothly track a moving target of unknown motion at speeds of up to 1 m/s and oscillation frequencies of up to 5 Hz.
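The basic profile behind trapezoidal motion planning is easy to state: accelerate at a constant rate, cruise at the speed limit, then decelerate, degenerating to a triangular profile for short moves. This generic sketch shows the timing computation only; the "extended" real-time update strategy of the paper is not reproduced here:

```python
def trapezoidal_profile(distance, v_max, a_max):
    """Phase durations (accel, cruise, decel) of a trapezoidal velocity
    profile with symmetric acceleration limits. Falls back to a triangular
    profile when the distance is too short to reach v_max. Generic sketch
    of the profile the paper's extended planner updates in real time.
    """
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2          # distance covered accelerating
    if 2 * d_acc >= distance:                 # can't reach v_max: triangular
        t_acc = (distance / a_max) ** 0.5
        return t_acc, 0.0, t_acc
    t_cruise = (distance - 2 * d_acc) / v_max
    return t_acc, t_cruise, t_acc
```

The extension described in the abstract amounts to recomputing such a profile from the current state every time a new target position arrives, rather than only once per move.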
19.
Sho Tajima, Seiji Wakamatsu, Taiki Abe, Masanari Tennomi, Koki Morita, Hirotoshi Ubata, Advanced Robotics, 2020, 34(7-8): 439-453
Abstract: This paper presents a robust bin-picking system utilizing tactile sensors and a vision sensor. The object position and orientation are estimated using a fast template-matching method through the vision sensor. When a robot picks up an object, the tactile sensors detect the success or failure of the grasping, and a force sensor detects the contact with the environment. A weight sensor is also used to judge whether the lifting of the object has been successful. The robust and efficient bin-picking system presented herein is implemented through the integration of different sensors. In particular, the tactile sensors realize rope-shaped object picking that has yet to be made possible with conventional picking systems. The effectiveness of the proposed method was confirmed through grasping experiments and in a competitive event at the World Robot Challenge 2018.
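The pose-estimation-by-matching idea can be illustrated with the simplest possible matcher: an exhaustive sum-of-absolute-differences search. The paper's fast template-matching method is more sophisticated; this sketch only shows what "locating the template" means:

```python
import numpy as np

def match_template(image, template):
    """Locate a template in a grayscale image by minimizing the sum of
    absolute differences (SAD) over all placements. Exhaustive search,
    shown only as an illustration of template matching; the paper uses
    a faster method.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best, best_score = (0, 0), float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if score < best_score:
                best, best_score = (r, c), score
    return best, best_score

# A 2x2 bright patch hidden in a dark image is recovered exactly.
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0
tmpl = np.ones((2, 2))
loc, score = match_template(img, tmpl)
```

Fast variants prune this search with image pyramids or correlation in the frequency domain, which is how frame-rate pose estimation becomes feasible.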
20.
This paper describes a laser-based computer vision system used for automatic fruit recognition. It is based on an infrared laser range-finder sensor that provides range and reflectance images and is designed to detect spherical objects in unstructured environments. Image analysis algorithms integrate both range and reflectance information to generate four characteristic primitives which give evidence of the existence of spherical objects. The output of this vision system includes the 3D position, radius, and surface reflectivity of each spherical object. It has been applied to the AGRIBOT orange harvesting robot, where it has achieved good fruit detection rates with few false detections.
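As a generic illustration of turning range pixels into the 3D position and radius the system reports, here is a least-squares sphere fit. The paper's detector works from four characteristic primitives, which is a different approach; this is only a sketch of the sphere-recovery idea:

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit. Each surface point p satisfies
    2*c.p + (r^2 - |c|^2) = |p|^2, which is linear in the center c and
    one auxiliary scalar, so the fit reduces to one lstsq solve.
    Illustration only; not the paper's primitive-based method.
    """
    P = np.asarray(points, dtype=float)
    A = np.hstack([2 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```

Because the formulation is linear, it is robust enough to run per candidate region at frame rate, with outlier rejection (e.g. RANSAC) added when range data are noisy.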