Similar Articles
20 similar articles found (search time: 15 ms)
1.
Advanced Robotics, 2013, 27(8-9): 947-967
Abstract

A wide field of view is required for many robotic vision tasks. Such a field of view can be obtained with a fisheye camera, which provides a full image (unlike catadioptric visual sensors) and does not increase the size or fragility of the imaging system with respect to perspective cameras. While a unified model exists for all central catadioptric systems, many different models approximating the radial distortion exist for fisheye cameras. This paper shows that the unified projection model proposed for central catadioptric cameras is also valid for fisheye cameras in the context of robotic applications. The model consists of a projection onto a virtual unit sphere followed by a perspective projection onto an image plane, and is shown to be equivalent to almost all existing fisheye models. Calibration of four cameras and partial Euclidean reconstruction are carried out with this model and lead to convincing results. Finally, an application to a mobile robot navigation task is proposed and correctly executed along a 200-m trajectory.
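For reference, a minimal sketch of the unified projection model described above (projection onto a virtual unit sphere followed by a perspective projection); the parameter name `xi` for the sphere-to-projection-center offset and the toy intrinsics are illustrative assumptions, not values from the paper:

```python
import numpy as np

def unified_projection(X, xi, fx, fy, cx, cy):
    """Project a 3D point with the unified (sphere) camera model.

    X      : 3D point in camera coordinates, shape (3,)
    xi     : offset of the perspective projection center from the sphere
             center (xi = 0 reduces to a pure pinhole model)
    fx..cy : pinhole intrinsics of the virtual perspective camera
    """
    # Step 1: project onto the virtual unit sphere.
    Xs = X / np.linalg.norm(X)
    # Step 2: perspective projection onto the normalized image plane.
    x = Xs[0] / (Xs[2] + xi)
    y = Xs[1] / (Xs[2] + xi)
    # Step 3: apply the intrinsics of the virtual camera.
    return np.array([fx * x + cx, fy * y + cy])

# Toy usage: a point one meter ahead and slightly off-axis.
print(unified_projection(np.array([0.2, 0.1, 1.0]), xi=0.8,
                         fx=300.0, fy=300.0, cx=320.0, cy=240.0))
```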

2.
Visual environment perception plays a key role in the development of autonomous vehicles, and is also widely used in smart rear-view mirrors, reversing radar, 360° surround view, dash cameras, collision warning, traffic-light recognition, lane-departure warning, lane-change assistance, and automatic parking. Traditionally, environmental information has been captured with narrow-angle pinhole cameras, whose limited field of view leaves blind spots. Fisheye lenses address this problem: a wide-angle view covers a full 180° hemisphere, so in theory only two cameras are needed for 360° coverage, providing more information for visual perception. There are currently two main ways to process surround-view images: (1) rectify the images first to remove distortion, at the cost of degraded image quality and loss of information; (2) model the distorted fisheye images directly, although no sufficiently effective modeling method exists yet. In addition, the lack of surround-view fisheye image datasets is a major obstacle to related research. Addressing these challenges, this paper surveys work on surround-view fisheye images, including rectification, object detection, semantic segmentation, methods for generating pseudo surround-view fisheye datasets, and other fisheye modeling approaches. Against the background of environment perception for autonomous driving, it analyzes the efficiency of these models and the strengths and weaknesses of these processing methods, gives a detailed introduction to the publicly available surround-view fisheye datasets, and offers predictions and prospects for open problems and future research directions.

3.
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent work has highlighted the interest of taking into account the photometric information of the entire image. That approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras, a generalization that makes it applicable to catadioptric and other wide-field-of-view cameras. Several experiments were carried out successfully, with a fisheye camera controlling a 6-degree-of-freedom robot and with a catadioptric camera in a mobile robot navigation task.
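As a rough illustration of the photometric idea (using the whole image, rather than tracked features, as the error signal), a minimal sketch of one control step; the interaction matrix `L_I` is taken as given, and the gain and names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def photometric_control_step(I, I_star, L_I, gain=0.5):
    """One step of photometric visual servoing.

    I, I_star : current and desired images, flattened to 1D arrays (N,)
    L_I       : interaction matrix (N x 6) relating pixel-intensity changes
                to the 6-DOF camera velocity
    Returns the camera velocity screw (6,) that reduces the error.
    """
    e = I - I_star  # photometric error over the entire image
    # Classical IBVS control law: v = -lambda * pinv(L_I) @ e
    return -gain * np.linalg.pinv(L_I) @ e

# Toy usage with random data standing in for 1000 pixels.
rng = np.random.default_rng(0)
v = photometric_control_step(rng.random(1000), rng.random(1000),
                             rng.standard_normal((1000, 6)))
print(v.shape)  # (6,)
```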

4.
The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost downward-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate with either a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses, which can then be used to address the simultaneous localization and mapping (SLAM) problem with graph methods. Furthermore, we address the problem of efficiently identifying loop closures. Several experiments with flying vehicles demonstrate that our method is able to construct maps of large outdoor and indoor environments.
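The PROSAC variant mentioned above exploits match quality to sample more efficiently than plain RANSAC, which samples uniformly. A minimal, generic sketch of the idea; the linear growth schedule and all function names are simplified assumptions, not the authors' exact algorithm:

```python
import random

def prosac_like(matches, quality, fit, error, n_model=4, thresh=1.0, iters=500):
    """PROSAC-style robust fitting: sample from progressively larger
    subsets of quality-sorted matches instead of the full set.

    matches : list of feature correspondences
    quality : per-match score, higher = more reliable
    fit     : callable(sample) -> model
    error   : callable(model, match) -> residual
    """
    order = sorted(range(len(matches)), key=lambda i: -quality[i])
    best_model, best_inliers = None, []
    subset = n_model  # start with only the very best matches
    for _ in range(iters):
        sample = [matches[i] for i in random.sample(order[:subset], n_model)]
        model = fit(sample)
        inliers = [m for m in matches if error(model, m) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
        subset = min(len(matches), subset + 1)  # progressively widen the pool
    return best_model, best_inliers
```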

5.
Advanced Robotics, 2013, 27(8-9): 843-860
Abstract

This paper proposes a path-planning visual servoing strategy for a class of cameras that includes conventional perspective cameras, fisheye cameras and catadioptric cameras as special cases. These cameras are modeled by adopting a unified model recently proposed in the literature, and the strategy consists of designing image trajectories for eye-in-hand robotic systems that allow the robot to reach a desired location while satisfying typical visual servoing constraints. To this end, the proposed strategy projects the available image features onto a virtual plane and computes a feasible image trajectory through polynomial programming. The computed image trajectory is then tracked by an image-based visual servoing controller. Experimental results with a fisheye camera mounted on a 6-d.o.f. robot arm illustrate the proposed strategy.
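The paper computes its trajectory by constrained polynomial programming, which is not reproduced here; as a toy stand-in, a minimal polynomial (smoothstep) image trajectory between an initial and a desired feature on the virtual plane, with illustrative names throughout:

```python
import numpy as np

def polynomial_image_trajectory(s0, s1, n=50):
    """Cubic (smoothstep) image trajectory from feature s0 to s1 on the
    virtual plane, with zero velocity at both endpoints."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    blend = 3 * t**2 - 2 * t**3  # s(0) = s0, s(1) = s1, s'(0) = s'(1) = 0
    return (1 - blend) * s0 + blend * s1

# Toy usage: move a feature from (100, 120) px to (320, 240) px.
path = polynomial_image_trajectory(np.array([100.0, 120.0]),
                                   np.array([320.0, 240.0]))
print(path[0], path[-1])
```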

6.
In this paper, we present a generic, modular bundle adjustment method for pose estimation, simultaneous self-calibration and reconstruction for multi-camera systems. In contrast to other approaches that use bearing vectors (camera rays) as observations, we extend the common collinearity equations with a general camera model and include the relative orientation of each camera w.r.t. the fixed multi-camera system frame, yielding extended collinearity equations that directly express all image observations as functions of all unknowns. Hence, we can calibrate the camera system and the individual cameras, reconstruct the observed scene, and/or simply estimate the pose of the system by including the corresponding parameter blocks in the Jacobian matrix. Apart from evaluating the implementation with comprehensive simulations, we benchmark our method against recently published methods for pose estimation and bundle adjustment for multi-camera systems. Finally, all methods are evaluated using a 6-degree-of-freedom ground-truth data set recorded with a laser tracker.
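A minimal sketch of what the extended collinearity idea amounts to computationally: chaining the rig pose with each camera's fixed relative orientation before projecting, so every observation is a function of all unknowns. The simple pinhole projection (standing in for the paper's general camera model) and the parameter names are illustrative assumptions:

```python
import numpy as np

def reproject_multicam(X_w, R_rig, t_rig, R_cam, t_cam, K):
    """Predicted image observation for a point seen by one camera of a rig.

    X_w          : 3D point in world coordinates, shape (3,)
    R_rig, t_rig : pose of the rig frame w.r.t. the world
    R_cam, t_cam : fixed relative orientation of this camera w.r.t. the rig
    K            : 3x3 intrinsic matrix (a pinhole stand-in here)
    """
    X_rig = R_rig @ X_w + t_rig    # world -> rig frame
    X_cam = R_cam @ X_rig + t_cam  # rig -> individual camera frame
    x = K @ X_cam
    return x[:2] / x[2]            # observation as a function of all unknowns

# In bundle adjustment, derivatives of this function w.r.t. any chosen
# parameter block (rig pose, relative orientations, intrinsics, points)
# populate the corresponding columns of the Jacobian matrix.
```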

7.
This paper proposes a new active and passive control scheme for a smart car, using an STC89C51RC and K66 dual-chip architecture to control the vehicle. Bluetooth communication allows basic maneuvers to be commanded from a smartphone app, while ultrasonic ranging provides automatic obstacle avoidance. In addition, infrared sensors are added for automatic line following, and, combined with a low-power MT9V032 camera, image recognition is used to achieve beacon-light homing....

8.
Omnidirectional cameras that give a 360° panoramic view of the surroundings have recently been used in many applications such as robotics, navigation, and surveillance. This paper describes the application of parametric ego-motion estimation for vehicle detection, performing surround analysis with an automobile-mounted camera. For this purpose, the parametric planar motion model is integrated with transformations that compensate for the distortion of omnidirectional images. The framework is used to detect objects with independent motion or with height above the road. Camera calibration, as well as the approximate vehicle speed obtained from a CAN bus, is integrated with the motion information from spatial and temporal gradients using a Bayesian approach. The approach is tested for various configurations of an automobile-mounted omnidirectional camera as well as a rectilinear camera. Successful detection and tracking of moving vehicles and generation of a surround map are demonstrated for application to intelligent driver support.

9.
We have developed a technology for a robot that uses a vision-based indoor navigation system to provide the required autonomy. For robots to run autonomously, it is extremely important that they are able to recognize the surrounding environment and their current location. Because multiple external sensors were unnecessary, we built a navigation system in our test environment that reduces the information-processing burden mainly by using sight information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed a human environment. In this article we discuss and explain two modules: a self-position recognition system and an obstacle recognition system. In both systems, recognition is based on image processing of the sight information provided by the robot's camera. To provide autonomy for the robot, we also use an encoder and information from a two-dimensional map given beforehand. We explain the navigation system that integrates these two modules, apply the system to a robot in an indoor environment, evaluate its performance, and discuss the remaining problems in light of our experimental results.

10.
This paper proposes a novel multi-object detection method using multiple cameras. Unlike conventional multi-camera object detection methods, our method detects multiple objects using a linear camera array. The array can stream different views of the environment and can be reconfigured for a scene more easily than an overhead surround configuration. Using the proposed method, the synthesized results provide not only views of significantly occluded objects but also the ability to focus on the target while blurring objects that are not of interest. Our method does not need to reconstruct the 3D structure of the scene, can accommodate dynamic backgrounds, is able to detect objects at any depth using a new synthetic aperture imaging method based on a simple shift transformation, and can see through occluders. The experimental results show that the proposed method performs well and can synthesize objects located within any designated depth interval with much better clarity than an existing method. To the best of our knowledge, this is the first time that such a synthetic-aperture-imaging method has been proposed and developed for multi-object detection in a complex scene with significant occlusion at different depths.
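A minimal sketch of shift-based synthetic aperture imaging for a linear array, as described above: each view is shifted by a disparity proportional to its baseline and inversely proportional to the chosen depth, then averaged, so objects at that depth align sharply while occluders at other depths blur out. The parameter names and the integer-shift simplification are assumptions:

```python
import numpy as np

def synthetic_aperture(images, baselines, focal, depth):
    """Focus a linear camera array at a given depth by shift-and-average.

    images    : list of HxW arrays from cameras along a horizontal line
    baselines : horizontal offset of each camera from the reference (m)
    focal     : focal length in pixels
    depth     : depth plane to focus on (m)
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    for img, b in zip(images, baselines):
        shift = int(round(focal * b / depth))  # disparity for this camera
        acc += np.roll(img, shift, axis=1)     # align content at `depth`
    return acc / len(images)                   # misaligned occluders blur out
```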

11.
A versatile general camera model (GCM) has been developed and is described in detail. The model is general in the sense that it can capture fisheye, conventional and catadioptric cameras in a unified framework. The camera model includes efficient handling of non-central cameras as well as compensation for decentring distortion. A novel way of analysing the radial distortion functions of camera models leads to a straightforward improvement of conventional models with respect to generality, accuracy and simplicity. Different camera models are experimentally compared for two cameras with conventional and fisheye lenses, and the results show that the overall performance is favourable for the GCM.
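The GCM itself is not reproduced here; for orientation, a sketch of the classical radial mapping functions r(theta) that such unified radial models generalize, relating the angle of the incoming ray to the image radius:

```python
import numpy as np

# Image radius r as a function of ray angle theta for focal length f,
# under three classical projection models:
def pinhole(theta, f):       return f * np.tan(theta)        # perspective
def equidistant(theta, f):   return f * theta                # common fisheye
def stereographic(theta, f): return 2 * f * np.tan(theta / 2)

# At wide angles the models diverge sharply, which is why a single
# unified radial function is attractive.
theta = np.deg2rad(60.0)
print(pinhole(theta, 300), equidistant(theta, 300), stereographic(theta, 300))
```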

12.
Safety is undoubtedly the most fundamental requirement for any aerial robotic application. It is essential to equip aerial robots with omnidirectional perception coverage to ensure safe navigation in complex environments. In this paper, we present a lightweight and low-cost omnidirectional perception system consisting of two ultra-wide field-of-view (FOV) fisheye cameras and a low-cost inertial measurement unit (IMU). The goal of the system is to achieve spherical omnidirectional sensing coverage with the minimum sensor suite. The two fisheye cameras are mounted rigidly facing upward and downward and provide omnidirectional perception coverage: 360° FOV horizontally and 50° FOV vertically for stereo, and the whole sphere for monocular vision. We present a novel optimization-based dual-fisheye visual-inertial state estimator that provides highly accurate state estimation. Real-time omnidirectional three-dimensional (3D) mapping combines stereo-based depth perception for the horizontal direction with monocular depth perception for the upward and downward directions. The omnidirectional perception system is integrated with online trajectory planners to achieve closed-loop, fully autonomous navigation. All computations are done onboard on a heterogeneous computing suite. Extensive experimental results validate the individual modules as well as the overall system in both indoor and outdoor environments.

13.
This paper presents the control of an indoor unmanned aerial vehicle (UAV) using multi-camera visual feedback. For autonomous flight of the indoor UAV, instead of relying on onboard sensor information, visual feedback is employed through the development of an indoor flight test-bed. The test-bed consists of four major components: the multi-camera system, a ground computer, an onboard color marker set, and a quad-rotor UAV. Since the onboard markers are attached at pre-defined locations, the position and attitude of the UAV can be estimated by a marker detection algorithm and triangulation. Additionally, this study introduces a filter algorithm that yields the full 6-degree-of-freedom (DOF) pose estimate, including velocities and angular rates. The filter also enhances the performance of the vision system by compensating for the weaknesses of low-cost cameras, such as poor resolution and high noise. Moreover, for the pose estimation of multiple vehicles, a data association algorithm using the geometric relation between cameras is proposed. The control system is based on classical proportional-integral-derivative (PID) control, using the position, velocity and attitude from the vision system and the angular rate from the rate gyro sensor. The paper concludes with both ground and flight test results illustrating the performance and properties of the proposed indoor flight test-bed and the control system using multi-camera visual feedback.
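For reference, a minimal sketch of the classical PID law used in the control system above, for a single axis; the gains and the simple discrete integration scheme are illustrative assumptions:

```python
class PID:
    """Discrete PID controller for one control axis."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # Derivative from finite differences; in the paper the velocity and
        # angular-rate terms come from the vision filter and the rate gyro.
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy usage: regulate a 0.5 m position error at 100 Hz.
pid = PID(kp=1.2, ki=0.1, kd=0.4)
print(pid.update(error=0.5, dt=0.01))
```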

14.
In this paper, we present a feature-based approach to monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted frontally on a mobile robot. Using a combination of techniques, we are able to produce a precise reconstruction that is free of outliers and can therefore be used for reliable obstacle detection and 3D map building. Furthermore, we present an attention-driven method that focuses feature selection on image areas where the obstacle situation is unclear and a more detailed scene reconstruction is necessary. In extensive real-world field tests we show that the presented approach is able to detect obstacles that are not seen by other sensors, such as laser range finders. Furthermore, we show that visual obstacle detection combined with a laser range finder can considerably increase the detection rate of obstacles, allowing the autonomous use of mobile robots in complex public and home environments.
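For orientation, a generic EKF predict/update cycle of the kind such feature-based reconstruction builds on; the function signatures and state layout are schematic assumptions, not the authors' filter design:

```python
import numpy as np

def ekf_step(x, P, f, F, Q, z, h, H, R):
    """One EKF predict/update cycle.

    x, P : state mean and covariance (e.g., camera pose + feature depths)
    f, h : motion and measurement models (callables)
    F, H : their Jacobian matrices at the current linearization point
    Q, R : process and measurement noise covariances
    z    : measurement (e.g., observed image feature positions)
    """
    # Predict: propagate the state through the motion model.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct with the innovation between measured and predicted.
    y = z - h(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```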

15.

The accuracy and performance of deep neural network models become important issues as the applications of deep learning increase. For example, the navigation system of an autonomous self-driving vehicle requires very accurate deep learning models: if a self-driving car fails to detect a pedestrian in bad weather, the result can be devastating. If we can increase model accuracy by increasing the training data, the probability of avoiding such scenarios increases significantly. However, consumer privacy concerns and a lack of enthusiasm for sharing personal data, e.g., the recordings of one's self-driving car, are obstacles to using this valuable data. With Blockchain technology, many entities that cannot trust each other under normal conditions can join together to achieve a mutual goal. In this paper, a secure, decentralized, peer-to-peer framework for training deep neural network models, based on the distributed ledger technology of the Blockchain ecosystem, is proposed. The proposed framework anonymizes the identity of data providers and can therefore serve as an incentive for consumers to share their private data for training deep learning models. The framework uses the Stellar Blockchain infrastructure for secure decentralized training of the deep models, and a deep learning coin is proposed for compensation on the Blockchain.

16.
Visual navigation is a challenging issue in automated robot control. In many robot applications, such as object manipulation in hazardous environments or autonomous locomotion, it is necessary to automatically detect and avoid obstacles while planning a safe trajectory. In this context, detecting corridors of free space along the robot trajectory is a very important capability that requires nontrivial visual processing. In most cases it is possible to take advantage of active control of the cameras. In this paper we propose a cooperative schema in which motion and stereo vision are used to infer scene structure and determine free-space areas. Binocular disparity, computed on several stereo images over time, is combined with optical flow from the same sequence to obtain a relative-depth map of the scene. Both the time to impact and depth scaled by the distance of the camera from the fixation point are used as relative measurements that are viewer-based but centered on the environment. The need for calibrated parameters is considerably reduced by an active control strategy: the cameras track a point in space independently of the robot motion, and the full rotation of the head, which includes the unknown robot motion, is derived from binocular image data. The feasibility of the approach in real robotic applications is demonstrated by several experiments performed on real image data acquired from an autonomous vehicle and a prototype camera head.
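One of the relative measurements mentioned, the time to impact, can be recovered from optical flow divergence alone without calibrated depth. A minimal sketch under the simplifying assumption of pure approach toward a frontoparallel surface (a textbook special case, not the paper's full stereo-plus-motion scheme):

```python
import numpy as np

def time_to_impact(flow_u, flow_v, dx=1.0):
    """Estimate time to impact from the divergence of the flow field.

    For pure approach toward a frontoparallel surface the flow expands
    radially and its divergence equals 2 / tau, where tau is the time to
    impact in frames. flow_u, flow_v are HxW arrays of flow components.
    """
    du_dx = np.gradient(flow_u, dx, axis=1)
    dv_dy = np.gradient(flow_v, dx, axis=0)
    divergence = np.mean(du_dx + dv_dy)
    return 2.0 / divergence if divergence > 0 else np.inf

# Toy usage: a synthetic radially expanding flow with tau = 40 frames.
h, w = 64, 64
ys, xs = np.mgrid[-h // 2:h // 2, -w // 2:w // 2].astype(float)
print(time_to_impact(xs / 40.0, ys / 40.0))  # approx 40.0
```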

17.
Most state-of-the-art robotic cars' perception systems are quite different from the way a human driver understands traffic environments. First, humans assimilate information from the traffic scene mainly through visual perception, while machine perception of traffic environments needs to fuse information from several different kinds of sensors to meet safety-critical requirements. Second, a robotic car requires nearly 100% correct perception results for autonomous driving, while an experienced human driver copes well with dynamic traffic environments in which machine perception can easily produce noisy results. In this paper, we propose a vision-centered multi-sensor fusion framework for traffic environment perception in autonomous driving, which fuses camera, LIDAR, and GIS information consistently via both geometrical and semantic constraints for efficient self-localization and obstacle perception. We also discuss robust machine vision algorithms that have been successfully integrated with the framework, addressing multiple levels of machine vision techniques, from collecting training data, efficiently processing sensor data, and extracting low-level features, to higher-level object and environment mapping. The proposed framework has been tested extensively in actual urban scenes with our self-developed robotic cars for eight years. The empirical results validate its robustness and efficiency.

18.
A lateral control method for driverless vehicles based on model-free adaptive control
A lateral control scheme for driverless vehicles based on model-free adaptive control (MFAC) is proposed. First, the path-tracking control problem is reformulated as tracking of the preview deviation angle. Then, based on a dynamically linearized data model of the vehicle's lateral control system, a model-free adaptive control algorithm, a pseudo-gradient estimation algorithm, and a pseudo-gradient reset algorithm are designed, realizing autonomous driving of the vehicle. The method uses only the input/output data collected while the vehicle is running, avoiding the difficulty of building a complex first-principles model of the vehicle; it adapts well to the complex operating conditions of driverless vehicles and transfers readily to different vehicles. The scheme has been deployed on Tsinghua University's driverless-vehicle experimental platform, and its effectiveness was verified by field tests in Fengtai District, Beijing, highway tests in Changshu, Jiangsu Province, and on-site use in the 2015 China Intelligent Vehicle Future Challenge.
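For orientation, a compact-form MFAC sketch of the kind of algorithm the abstract describes (online pseudo-gradient estimation, a reset rule, and a data-driven control update); the SISO simplification and all gains are illustrative assumptions. In this setting the output y would be the preview deviation angle and u the steering command:

```python
class MFAC:
    """Compact-form model-free adaptive controller (SISO sketch).

    Uses only measured input/output data via the dynamically linearized
    model dy(k+1) = phi(k) * du(k), with phi estimated online.
    """

    def __init__(self, eta=0.5, mu=1.0, rho=0.6, lam=1.0, phi0=1.0, eps=1e-5):
        self.eta, self.mu, self.rho, self.lam = eta, mu, rho, lam
        self.phi0, self.eps = phi0, eps
        self.phi = phi0
        self.u_prev, self.du_prev, self.y_prev = 0.0, 0.0, 0.0

    def step(self, y, y_ref):
        # Pseudo-gradient estimation from the last input/output increments.
        dy = y - self.y_prev
        self.phi += (self.eta * self.du_prev /
                     (self.mu + self.du_prev ** 2)) * (dy - self.phi * self.du_prev)
        # Pseudo-gradient reset: keep the estimate away from degenerate values.
        if abs(self.phi) < self.eps or abs(self.du_prev) < self.eps:
            self.phi = self.phi0
        # Control update driven purely by the tracking error.
        du = (self.rho * self.phi / (self.lam + self.phi ** 2)) * (y_ref - y)
        u = self.u_prev + du
        self.u_prev, self.du_prev, self.y_prev = u, du, y
        return u
```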

19.
20.
See-and-avoid behaviors are an essential part of autonomous navigation for Unmanned Air Vehicles (UAVs). To be fully autonomous, a UAV must be able to navigate complex urban and near-earth environments and detect and avoid imminent collisions. While there have been significant research efforts in robotic navigation and obstacle avoidance during the past few years, this previous work has not focused on applications that use small autonomous UAVs. Specific UAV requirements, such as non-invasive sensing, light payload, low image quality, high processing speed, long-range detection, and low power consumption, must be met in order to fully use this new technology. This paper presents a single-camera collision detection and avoidance algorithm. Whereas most algorithms attempt to extract 3D information from a single optical flow value at each feature point, we propose to calculate a set of likely optical flow values and their associated probabilities: an optical flow probability distribution. Using this distribution, a more robust method for calculating object distance is developed. The method is developed for obstacle detection on a UAV, but it can be used on any vehicle that requires obstacle detection.
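To make the flow-distribution idea concrete, a minimal sketch that propagates a set of flow hypotheses and their probabilities into a distance estimate with an uncertainty; the pure-forward-motion geometry and the known vehicle speed are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def distance_distribution(flow_vals, flow_probs, radius, speed):
    """Turn an optical-flow probability distribution into a distribution
    over object distance at one feature point.

    flow_vals  : candidate radial flow magnitudes at the feature (px/frame)
    flow_probs : their probabilities (summing to 1)
    radius     : distance of the feature from the focus of expansion (px)
    speed      : forward speed of the vehicle (m/frame), assumed known
    """
    # Under pure forward motion, time to reach the feature's depth plane is
    # tau = radius / flow, so distance = speed * tau for each hypothesis.
    distances = speed * radius / np.asarray(flow_vals)
    probs = np.asarray(flow_probs)
    mean = float(np.sum(probs * distances))
    var = float(np.sum(probs * (distances - mean) ** 2))
    return distances, probs, mean, var

# Toy usage: three flow hypotheses at a feature 50 px from the FOE.
print(distance_distribution([0.8, 1.0, 1.2], [0.2, 0.6, 0.2],
                            radius=50.0, speed=0.3))
```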
