Similar Documents
20 similar documents were found (search time: 46 ms).
1.
Automatic motion detection features can enhance surveillance efficiency and quality. The aim of this research is to detect and recognize motion automatically in a robot's surroundings in order to equip a mobile robot for surveillance tasks. The required information is based on input obtained from a charge-coupled device (CCD) camera mounted on the mobile robot. As a first step toward this goal, the mobile robot is kept stationary while objects move around it. Experiments under varying conditions, such as different movements, sizes of moving objects, and lighting, have also been conducted. "Adjacent pixels comparison" is the method proposed to detect motion in these experiments. The results verify that the motion detection operates as expected. This work was presented in part at the 11th International Symposium on Artificial Life and Robotics, Oita, Japan, January 23–25, 2006.
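The abstract does not detail the "adjacent pixels comparison" method itself; a minimal frame-differencing sketch in the same spirit (the threshold value and the 4x4 test frames are illustrative assumptions, not taken from the paper) could look like:

```python
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=25):
    """Flag pixels whose gray level changed by more than `threshold`
    between two consecutive frames (hypothetical threshold, 0-255 data)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > threshold           # per-pixel motion mask
    return mask, bool(mask.any())     # mask and a global "motion seen" flag

# Synthetic 4x4 frames: a bright 2x2 block shifts one pixel to the right.
prev = np.zeros((4, 4), dtype=np.uint8)
prev[1:3, 0:2] = 200
curr = np.zeros((4, 4), dtype=np.uint8)
curr[1:3, 1:3] = 200

mask, moved = detect_motion(prev, curr)
print(moved)  # True: change detected where the block entered and left
```

With a stationary camera, as in the experiments above, any pixel exceeding the threshold can be attributed to object motion rather than ego-motion.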

2.
Uncalibrated obstacle detection using normal flow
This paper addresses the problem of obstacle detection for mobile robots. The visual information provided by a single on-board camera is used as input. We assume that the robot is moving on a planar pavement; any point lying outside this plane is treated as an obstacle. We address the problem of obstacle detection by exploiting the geometric arrangement between the robot, the camera, and the scene. During an initialization stage, we estimate an inverse perspective transformation that maps the image plane onto the horizontal plane. During normal operation, the normal flow is computed and inversely projected onto the horizontal plane. This simplifies the resultant flow pattern, and fast tests can be used to detect obstacles. A salient feature of our method is that only the normal flow information, i.e., first-order time-and-space image derivatives, is used, and thus we cope with the aperture problem. Another important point is that, in contrast with other methods, the vehicle motion and the intrinsic and extrinsic parameters of the camera need not be known or calibrated. Both translational and rotational motion can be dealt with. We present motion estimation results on synthetic and real-image data. A real-time version, implemented on a mobile robot, is described.
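The inverse perspective transformation estimated during initialization is a plane-to-plane homography; a small sketch of applying such a mapping to image points (the 3x3 matrix `H` here is a hypothetical example, not the paper's calibration result) might be:

```python
import numpy as np

def inverse_perspective(points_img, H):
    """Project image-plane points onto the ground plane through a 3x3
    homography H (the mapping estimated once at initialization)."""
    pts = np.hstack([points_img, np.ones((len(points_img), 1))])  # homogeneous
    ground = (H @ pts.T).T
    return ground[:, :2] / ground[:, 2:3]  # back to Cartesian coordinates

# Hypothetical homography: a pure scaling of the image plane by 2.
H = np.diag([2.0, 2.0, 1.0])
pts = np.array([[1.0, 2.0], [3.0, 4.0]])
ground_pts = inverse_perspective(pts, H)  # maps (1,2)->(2,4), (3,4)->(6,8)
```

In the method described above, the same projection is applied to normal-flow vectors, after which ground-plane flow takes a simple, testable form.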

3.
We have designed a mobile robot with a distributed structure for an intelligent living space. The mobile robot was constructed using an aluminum frame. It has the shape of a cylinder; its diameter, height, and weight are 40 cm, 80 cm, and 40 kg, respectively. The mobile robot comprises six systems, including the structure, an obstacle-avoidance and driving system, a software development system, a detection module system, a remote supervision system, and others. In the obstacle-avoidance and driving system, we use an NI motion control card to drive the robot's two DC servomotors, and detect obstacles using a laser range finder and a laser positioning system. Finally, we control the mobile robot using the NI motion control card and a MAXON driver according to the programmed trajectory. The mobile robot can avoid obstacles using the laser range finder and follow the programmed trajectory. We developed a user interface with four functions for the mobile robot. In the security system, we designed module-based security devices that detect dangerous events and transmit the detection results to the mobile robot over a wireless RF interface. The mobile robot can then move to the event position using the laser positioning system.

4.
A novel indoor mobile robot localization method based on two feature points is proposed. Unlike existing geometric pose-estimation or landmark-matching methods, it requires neither artificial landmarks nor an accurate environment map, only a single image captured by a conventional CCD camera. Two points at equal height above the ground are selected on the target the robot is approaching to serve as the two feature points, and a target coordinate system is established from them. Provided the camera looks horizontally and the two feature points are not at exactly the same height above the ground as the camera's projection center, the robot's position and heading relative to the target coordinate system can be determined from the image coordinates of the two feature points. The method is highly flexible and widely applicable, and can greatly simplify the robot localization problem. Experimental results show that this new method is not only simple and flexible but also achieves high localization accuracy.

5.
CCD Camera Calibration
In a monocular-vision autonomous navigation system for an agricultural wheeled mobile robot, CCD camera calibration is the prerequisite and key to correct, safe navigation. Camera calibration establishes the correspondence between the 3D world coordinates of a ground point and its 2D coordinates in the computer image; from this correspondence the robot computes its pose for autonomous navigation. Accordingly, based on the pinhole imaging model of a CCD camera, a system of equations relating the known coordinates of points on a planar template in the world coordinate system to the corresponding pixel values in the computer image is established, and the camera's intrinsic and extrinsic parameters are fitted in the Matlab environment. Experimental results show that the method calibrates the CCD camera correctly.
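One way to realize the fitting step described above in code, rather than in Matlab, is a direct linear transform (DLT) fit of the ground-to-image homography from known template points; the correspondences below are synthetic and the routine is only a sketch of the idea, not the paper's full intrinsic/extrinsic decomposition:

```python
import numpy as np

def fit_homography(ground_pts, pixel_pts):
    """DLT least-squares fit of the 3x3 homography taking ground-plane
    coordinates to pixel coordinates, from >= 4 correspondences."""
    A = []
    for (X, Y), (u, v) in zip(ground_pts, pixel_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null-space vector = homography entries
    return H / H[2, 2]            # fix the free scale

# Synthetic check: project template points with a known homography,
# then recover that homography from the correspondences.
H_true = np.array([[1.2, 0.1, 30.0], [0.0, 1.1, 40.0], [0.001, 0.0, 1.0]])
ground = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 25]], float)
hom = np.hstack([ground, np.ones((5, 1))]) @ H_true.T
pixels = hom[:, :2] / hom[:, 2:3]
H_est = fit_homography(ground, pixels)
```

A full calibration would go on to factor such a mapping into intrinsic and extrinsic parameters; the homography alone already relates ground points to pixels for a planar scene.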

6.
Optimal representative blocks are proposed for efficient tracking of a moving object, and the approach is verified experimentally using a mobile robot with a pan-tilt camera. The key idea comes from the fact that, as the image of a moving object shrinks in the image frame with the distance between the mobile robot's camera and the object, tracking performance can be improved by shrinking the size of the representative blocks according to the object's image size. Motion estimation using edge detection (ED) and the block-matching algorithm (BMA) is often used for moving-object tracking with vision sensors. However, these methods often miss real-time vision data because of their heavy computational load. To overcome this problem and improve tracking performance, the optimal representative block, which greatly reduces the amount of data to be computed, is defined and optimized by changing the size of the representative block according to the size of the object in the image frame. The proposed algorithm is verified experimentally using a mobile robot with a two-degree-of-freedom active camera. © 2004 Wiley Periodicals, Inc.

7.
To address the low efficiency, heavy labor, and high cost of manually fettling cast parts, this paper proposes an intelligent robotic cleaning system based on machine vision. A robot hardware system consisting of a CCD camera, an image acquisition card, input/output units, and a control device is designed, and software for image acquisition, processing and analysis, and learning-based recognition is built. Repeated experiments on the mechanized, automated cleaning of gates, risers, and other regions of permanent-mold cast cover plates and conductor parts show residual material of less than 1 mm and a cleaning accuracy of 97.6%. All indices of the system meet the design requirements and satisfy the factory's demand for intelligent deburring of parts.

8.
Our obstacle detection method is applicable to deliberate translational motion of a mobile robot; in such motion, the epipoles of an image pair coincide and are termed the focus of expansion (FOE). We present an accurate method for computing the FOE, which we then use to apply a novel rectification to each image, called a reciprocal-polar (RP) rectification. When robot translation is parallel to the ground, as with a mobile robot, ground-plane image motion in RP-space is a pure shift along an RP image scan line and hence can be recovered by a process of 1D correlation, even over large image displacements and without the need for corner matches. Furthermore, we show that the magnitude of these shifts follows a sinusoidal form along the second (orientation) dimension of the RP image. This gives the main result that ground-plane motion over RP image space forms a 3D sinusoidal manifold. Simultaneous ground-plane pixel grouping and recovery of the ground-plane motion thus amount to finding the FOE and then robustly fitting a 3D sinusoid to the shifts of maximum correlation in RP space. The phase of the recovered sinusoid corresponds to the orientation of the vanishing line of the ground plane, and the amplitude is related to the magnitude of the robot/camera translation. The recovered FOE, vanishing line, and sinusoid amplitude fully define the ground-plane motion (homography) across a pair of images, and thus obstacles and the ground plane can be segmented without any explicit knowledge of either camera parameters or camera motion.
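The sinusoid-fitting step has a convenient linear-least-squares form, since A·sin(θ+φ)+c expands into a sin/cos basis; a sketch on synthetic shift data (plain least squares stands in for the robust fitting used in the paper, and the parameter values are made up):

```python
import numpy as np

# Synthetic per-orientation shifts following s(theta) = A*sin(theta+phi) + c.
theta = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
A_true, phi_true, c_true = 3.0, 0.7, 1.5
shifts = A_true * np.sin(theta + phi_true) + c_true

# A*sin(t+phi) = (A cos phi) sin t + (A sin phi) cos t, so the model is
# linear in a sin/cos/constant basis and solvable by least squares.
B = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
a, b, c_est = np.linalg.lstsq(B, shifts, rcond=None)[0]
A_est = np.hypot(a, b)        # recovered amplitude (translation magnitude)
phi_est = np.arctan2(b, a)    # recovered phase (vanishing-line orientation)
```

Pixels whose measured shifts deviate from the fitted sinusoid would then be candidates for the obstacle class rather than the ground plane.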

9.
A vision-based scheme for object recognition and transport with a mobile robot is proposed in this paper. First, camera calibration is performed experimentally with Zhengyou Zhang's method, and a distance measurement method using the monocular camera is presented and tested. Second, a Kalman filtering algorithm is used to predict the movement of a target, with the HSI model as input and the seed-filling algorithm as the image segmentation approach. Finally, the motion control of the pan-tilt camera and mobile robot is designed to fulfill the tracking and transport task. The experimental results demonstrate the robust object recognition and fast tracking capabilities of the proposed scheme.
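A constant-velocity Kalman filter is a common way to predict a target's image-plane movement, as described above; the sketch below uses illustrative noise matrices and a synthetic measurement stream, not the paper's tuning:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)     # constant-velocity model
Hm = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0]], dtype=float)    # we observe (x, y) only
Q = np.eye(4) * 0.01                          # process noise (illustrative)
R = np.eye(2) * 1.0                           # measurement noise (illustrative)

def kalman_step(x, P, z):
    """One predict/update cycle; state x = [x, y, vx, vy]."""
    x = F @ x                                 # predict
    P = F @ P @ F.T + Q
    y = z - Hm @ x                            # innovation
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ Hm) @ P
    return x, P

# Track a target drifting +2 px/frame in x; then predict the next frame.
x, P = np.zeros(4), np.eye(4)
for t in range(20):
    x, P = kalman_step(x, P, np.array([2.0 * t, 0.0]))
pred = F @ x    # predicted image position (and velocity) one frame ahead
```

The one-frame-ahead prediction gives the pan-tilt controller a target position before the next image has even been segmented.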

10.
The study presented here describes a novel vision-based motion detection system for telerobotic operations such as distant surgical procedures. The system uses a CCD camera and image processing to detect the motion of a master robot or operator. Colour tags are placed on the arm and head of a human operator to detect the up/down and right/left motion of the head as well as the right/left motion of the arm. The motion of the colour tags is used to actuate a slave robot or a remote system. The colour tags' motion is determined through image processing using eigenvectors and colour-system morphology, and the relative head, shoulder, and wrist rotation angles through inverse dynamics and coordinate transformation. A program transforms this motion data into motor control commands and transmits them to a slave robot or remote system over the wireless internet. The system performed well even in complex environments, with errors that did not exceed 2 pixels and a response time of about 0.1 s. The results of the experiments are available at: http://www.youtube.com/watch?v=yFxLaVWE3f8 and http://www.youtube.com/watch?v=_nvRcOzlWHw

11.
The structural features inherent in the visual motion field of a mobile robot contain useful clues about its navigation. The combination of these visual clues and additional inertial sensor information may allow reliable detection of the navigation direction for a mobile robot, as well as of any independent motion present in the 3D scene. The motion field, which is the 2D projection of the 3D scene variations induced by the camera-robot system, is estimated through optical flow calculations. The singular points of the global optical flow field of omnidirectional image sequences indicate the translational direction of the robot as well as the deviation from its planned path. It is also possible to detect motion patterns of near obstacles or independently moving objects in the scene. In this paper, we introduce the analysis of the intrinsic features of omnidirectional motion fields in combination with gyroscopic information, and give some examples of this preliminary analysis. © 2004 Wiley Periodicals, Inc.

12.
A reactive navigation system for an autonomous mobile robot in unstructured dynamic environments is presented. The motion of moving obstacles is estimated for robot motion planning and obstacle avoidance. A multisensor-based obstacle predictor is utilized to obtain obstacle-motion information. Sensory data from a CCD camera and multiple ultrasonic range finders are combined to predict obstacle positions at the next sampling instant. A neural network, trained off-line, provides the desired prediction on-line in real time. The predicted obstacle configuration is employed by the proposed virtual-force-based navigation method to prevent collision with moving obstacles. Simulation results are presented to verify the effectiveness of the proposed navigation system in an environment with multiple mobile robots or moving objects. The system was implemented and tested on an experimental mobile robot in our laboratory. Navigation results in a real environment are presented and analyzed.
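A virtual-force (potential-field) navigation rule typically sums an attraction toward the goal with repulsions from predicted obstacle positions; the gains, influence distance, and positions below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def virtual_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Resultant virtual force on the robot: attraction to the goal plus
    repulsion from each predicted obstacle within influence distance d0.
    All gains here are illustrative."""
    f = k_att * (goal - robot)                        # attractive component
    for obs in obstacles:
        d = np.linalg.norm(robot - obs)
        if 0.0 < d < d0:                              # repel only when close
            f += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (robot - obs) / d
    return f

robot = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
obstacles = [np.array([2.0, 0.5])]    # a predicted obstacle position
f = virtual_force(robot, goal, obstacles)
# f still points forward (f[0] > 0) but is deflected away from the
# obstacle (f[1] < 0), steering the robot around the predicted position.
```

Feeding the *predicted* rather than currently sensed obstacle positions into such a rule is what lets the navigator react before a moving obstacle actually blocks the path.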

13.
Combining machine vision with robotics is a major trend in the future development of the robot industry. Conventional sensors used in mobile-robot obstacle-avoidance and navigation schemes suffer from various problems and provide limited information. A monocular-vision navigation algorithm for mobile robots is proposed; in application, no camera calibration is needed if a camera with a known focal length is used. To reduce the influence of illumination on obstacle edge detection, the color image captured by the robot is converted to HSI space. The Canny algorithm is applied to each converted component separately, and the detection results are merged. Thresholding filters the merged edges to remove weak edge information and improve detection accuracy. Morphological processing connects scattered edges, region growing yields the non-obstacle area, and the mapping between the image coordinate system and the robot coordinate system is established from geometric relations. Fuzzy logic combined with membership functions yields the robot control parameters. Experimental results show that the color-space conversion reduces the influence of ground reflections and shadows, that the algorithm effectively rejects interference such as floor stripes and accurately detects obstacle edges, and that the fuzzy-logic decision method improves the robustness of the algorithm and the reliability of the results.
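The RGB-to-HSI conversion used to reduce the influence of illumination can be sketched with the standard HSI formulas (the abstract does not specify the exact variant used, so this is an assumption):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Standard RGB -> HSI conversion for float images in [0, 1].
    Lighting changes show up mostly in the I component, so edge
    detection can be run on the components separately."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)   # hue in [0, 2*pi)
    return h, s, i

# Pure red pixel: hue 0, full saturation, intensity 1/3.
h, s, i = rgb_to_hsi(np.array([[[1.0, 0.0, 0.0]]]))
```

After conversion, a Canny detector applied to H, S, and I separately and then merged, as described above, is less sensitive to shadows than edges computed on raw RGB.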

14.
An environmental camera is a camera embedded in a working environment to provide vision guidance to a mobile robot. In the setup of such robot systems, the relative position and orientation between the mobile robot and the environmental camera are parameters that must unavoidably be calibrated. Traditionally, because the configuration of the robot system is task-driven, these external parameters of the camera are measured separately and must be measured each time a task is to be performed. In this paper, a method is proposed in which calibration of the environmental camera is performed by the robot system itself, on the spot, after the system is set up. Specific kinds of motion patterns of the mobile robot, called test motions, have been explored for calibration. The calibration approach is based on executing selected test motions on the mobile robot and then using the camera to observe the robot. By comparing odometry and sensing data, the external parameters of the camera can be calibrated. Furthermore, an evaluation index (virtual sensing error) has been developed for the selection and optimization of test motions to obtain good calibration performance. All the test motion patterns are computed offline in advance and saved in a database, which greatly shortens the calibration time. Simulations and experiments verified the effectiveness of the proposed method.

15.
A vision-based motion detection system is designed to remotely control the motion of a service robot. Color tags are placed on the operator to control the robot motion. The motion of the color tags is detected using a CCD camera and used to actuate the remote robot wirelessly. The computation of the color tags’ motion is achieved through image processing using eigenvectors and color system morphology. Through inverse dynamics and coordinate transformation, the rotation angles of the arm, head, and foot of the operator and the corresponding robot motor angles are determined. It takes, on average, 65 ms per calculation. The system performed well even in complex environments with errors that did not exceed 2 pixels with a response time of about 0.1 s. The results of the experiments are available at http://www.youtube.com/watch?v=5TC0jqlRe1U and http://www.youtube.com/watch?v=3sJvjXYgwVo. The videos have to be watched simultaneously in order to observe the command and the corresponding robot response.

16.
Application of grayscale-morphology-based image processing to weld seam recognition
Weld-seam trajectory extraction is a key technology in automated welding; it demands not only accurate seam extraction but also real-time performance. Using raw weld-seam images acquired by a crawling robot's CCD camera and making full use of their gray-level distribution, this paper proposes an image-processing algorithm based on grayscale morphology. For different weld seams, the algorithm extracts seam features through grayscale morphological filtering and related processing. Experimental results show that the algorithm meets the real-time and accuracy requirements.
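Grayscale morphological filtering of the kind described can be sketched with flat structuring elements; opening (erosion followed by dilation) suppresses bright speckle smaller than the element. The kernel size and test image below are illustrative, not taken from the paper:

```python
import numpy as np

def grey_erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].min()
    return out

def grey_dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = padded[y:y + k, x:x + k].max()
    return out

def grey_open(img, k=3):
    """Opening = erosion then dilation: removes bright features
    smaller than the structuring element, e.g. spatter noise."""
    return grey_dilate(grey_erode(img, k), k)

# A dark image with one bright noise pixel: opening removes the speckle.
img = np.full((5, 5), 10.0)
img[2, 2] = 200.0
cleaned = grey_open(img)
```

The dual operation, closing, would instead fill small dark gaps along the seam; combinations of the two form the filtering stage that precedes feature extraction.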

17.
For the aging population, surveillance in household environments has become more and more important. In this paper, we present a household robot that can detect abnormal events by utilizing video and audio information. In our approach, moving targets are detected by the robot using a passive acoustic location device. The robot then tracks the targets by employing a particle filter algorithm. To adapt to different lighting conditions, the target model is updated regularly based on an update mechanism. To ensure robust tracking, the robot detects abnormal human behavior by tracking the upper body of a person. For audio surveillance, Mel-frequency cepstral coefficients (MFCC) are used to extract features from the audio information. Those features are input to a support vector machine classifier for analysis. Experimental results show that the robot can detect abnormal behavior such as "falling down" and "running". An 88.17% accuracy rate is achieved in the detection of abnormal audio information such as "crying", "groaning", and "gunshots". To reduce false alarms from the abnormal-sound detection system, the passive acoustic location device directs the robot to the scene where an abnormal event occurs, and the robot can employ its camera to further confirm the occurrence of the event. Finally, the robot sends the captured image to the user's mobile phone.

18.
This article describes a vision-based auto-recharging system that guides a mobile robot to a docking station. The system contains a docking station and a mobile robot. The docking station contains a docking structure, a control device, a charger, a safety detection device, and a wireless RF interface. The mobile robot contains a power detection module (voltage and current), an auto-switch, a wireless RF interface, a controller, and a camera. The controller of the power detection module is a Holtek chip. The docking structure is designed with one active degree of freedom and two passive degrees of freedom. For image processing, the mobile robot uses a webcam to capture a real-time image; the image signal is transmitted to the robot's controller via a USB interface. We use an Otsu algorithm to calculate the distance and orientation of the docking station from the mobile robot. In the experiment, the proposed algorithm guided the mobile robot to the docking station.
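The Otsu algorithm mentioned above selects a global threshold by maximizing the between-class variance of the gray-level histogram; a self-contained sketch (the bimodal test image is synthetic):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: pick the gray level that maximizes the between-class
    variance of the histogram (input: integer image with values 0-255)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # probability of class 0
    mu = np.cumsum(np.arange(256) * p)        # first moment up to level t
    mu_total = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)          # undefined splits score 0
    return int(np.argmax(sigma_b))

# Bimodal test image: half the pixels at 50, half at 200.
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8)
t = otsu_threshold(img)   # any value in [50, 200) separates the classes
```

Binarizing the webcam image with such a threshold isolates the docking-station marker, after which its image position yields the distance and orientation estimates.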

19.
Visual tracking of a moving target using active contour based SSD algorithm
This paper presents a new image-based visual tracking scheme for a mobile robot to trace a moving target using a single camera mounted on the robot. To accurately estimate the position of the target in the next image, it decomposes the effect of the camera motion on the velocity vector of the target in the image frame. Based on the estimated velocity of the target and the image Jacobian, the control inputs of the mobile robot are determined so that the target remains inside the central area of the image frame. Since the shape of the target in the image frame varies with the rotation and translation of the target, a new shape-adaptive Sum-of-Squared-Differences (SSD) algorithm is proposed, which uses the extended snake algorithm to extract the contour of the target and updates the template in every step of the matching process. The proposed scheme has been implemented using a Nomad Scout Robot II. The experimental results have shown that the proposed scheme follows the target within a negligible error range, even when the target is temporarily lost for various reasons.
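A plain (non-adaptive) Sum-of-Squared-Differences search, the baseline that the shape-adaptive variant above extends, can be sketched as an exhaustive template match; the frame here is synthetic and the template is cut from a known location:

```python
import numpy as np

def ssd_match(frame, template):
    """Exhaustive SSD search: return the top-left corner (row, col) of the
    window in `frame` that best matches `template`."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = ((frame[y:y + th, x:x + tw] - template) ** 2).sum()
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Synthetic 20x20 frame; the 4x4 template is cut from position (5, 7).
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, size=(20, 20)).astype(float)
template = frame[5:9, 7:11].copy()
pos = ssd_match(frame, template)
```

The shape-adaptive version replaces the fixed rectangular template with a contour extracted by the snake algorithm and refreshes it every matching step, so the match survives target rotation and scale change.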

20.
《Advanced Robotics》2013,27(3):261-272
This paper proposes an efficient position identification method for mobile robots in a building-corridor environment using colour images and map information. A robot can usually estimate its position from its motion history (so-called dead reckoning); however, there are occasions when the robot's position must be estimated without the motion history, for example when self-tracking of the motion has failed, or just after the robot is powered on. The proposed method identifies the robot's position without the motion history. It consists of the following three steps: (1) map information for the mobile robot is prepared; (2) a colour image is processed to detect a vanishing point and to generate an abstracted image, and the robot moves to a more suitable position if it is unable to identify the current position; and (3) the current robot position is identified from the map information, the vanishing point, and the abstracted image. The effectiveness of the proposed method is shown by the experimental results.
