Similar Documents
20 similar documents found (search time: 102 ms)
1.
The use of unmanned aerial vehicles (UAVs) in the military, scientific, and civilian sectors has increased drastically in recent years. This study presents algorithms for the visual-servo control of a UAV, in which a quadrotor helicopter is stabilized with visual information through the control loop. Unlike previous studies that use a pose-estimation approach, which is time-consuming and subject to various errors, visual-servo control is more reliable and faster. The method requires a camera on board the vehicle, which is already available on many UAV systems. A UAV with a camera behaves like an eye-in-hand visual servoing system. In this study the controller was designed using two different approaches: the image-based visual servo control method and the hybrid visual servo control method. Various simulations were developed in MATLAB, in which the quadrotor aerial vehicle is visual-servo controlled. To show the effectiveness of the algorithms, experiments were performed on a model quadrotor UAV, and the results suggest successful performance.
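As a sketch of the image-based approach described above (an illustration of the classic IBVS formulation, not the authors' actual implementation), the control law stacks the interaction matrix of each normalized image-point feature and commands a camera twist from the pseudo-inverse of that stack times the feature error; the feature depths Z are assumed known or approximated:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z, mapping the camera twist [vx, vy, vz, wx, wy, wz]
    to the image-point velocity (classic IBVS formulation)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(points, targets, Z, gain=0.5):
    """One IBVS iteration: stack the interaction matrices of all feature
    points and return the camera twist v = -gain * L^+ * e."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in points])
    e = (np.asarray(points) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

In a real quadrotor loop the resulting twist would then be mapped to attitude and thrust setpoints through the vehicle dynamics.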

2.
This paper presents a vision-based navigation strategy for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using a single embedded camera observing natural landmarks. In the proposed approach, images of the environment are first sampled, stored, and organized as a set of ordered key images (a visual path), which provides a visual memory of the environment. The robot navigation task is then defined as a concatenation of visual path subsets (called a visual route) linking the current observed image and a target image belonging to the visual memory. The UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model, without explicitly planning any trajectory. This framework is substantiated by experiments with an X4-flyer equipped with a fisheye camera.

3.
To address the poor control performance of existing UAV navigation control methods, this paper proposes a vision-based autonomous trajectory navigation control method for UAVs built on particle filtering, using the particle filter algorithm to optimize the design of the navigation controller. A grid map of the UAV's flight environment is constructed with the occupancy-grid method, and a motion-state model is built from the UAV's mechanical structure and operating principle. Visual images are captured with the built-in camera and preprocessed through grayscale conversion, geometric correction, and filtering. Features extracted from the visual images are used to determine whether obstacles are present in the current environment. The particle filter algorithm estimates the UAV's pose, and the autonomous flight trajectory is planned by combining the pose estimate with the obstacle-detection results. The computed control commands for position, velocity, and attitude angle are fed into the installed navigation controller, completing the autonomous vision-based trajectory navigation control task. Field tests show that with the proposed navigation control method, the position, velocity, and attitude-angle errors all remain below the preset thresholds, i.e., the method achieves good control performance.
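A minimal sketch of the kind of particle-filter pose estimator the abstract describes (the 2-D state, noise levels, and constant-velocity motion are illustrative assumptions, not the paper's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.1, meas_noise=0.2):
    """One predict-update-resample cycle of a planar particle filter.
    particles: (N, 2) candidate UAV positions; measurement: a position
    observation from the vision front end (assumed already extracted)."""
    # Predict: propagate each particle with the control input plus noise.
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # Update: weight by the Gaussian likelihood of the vision measurement.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_noise ** 2)
    weights /= weights.sum()
    # Systematic resampling to fight weight degeneracy.
    n = len(weights)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Track a UAV moving at a constant (0.5, 0.2) per step with noisy vision fixes.
N = 500
particles = rng.normal(0, 1, (N, 2))
weights = np.full(N, 1.0 / N)
true_pos = np.zeros(2)
for _ in range(30):
    true_pos = true_pos + np.array([0.5, 0.2])
    meas = true_pos + rng.normal(0, 0.2, 2)
    particles, weights = particle_filter_step(
        particles, weights, np.array([0.5, 0.2]), meas)
estimate = particles.mean(axis=0)
```

The pose estimate (here the particle mean) is what would feed the trajectory planner and the position/velocity/attitude control computation.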

4.
In this paper, we use computer vision as a feedback sensor in a control loop for landing an unmanned air vehicle (UAV) on a landing pad. The vision problem we address here is a special case of the classic ego-motion estimation problem, since all feature points lie on a planar surface (the landing pad). We study the discrete and differential versions of ego-motion estimation together, in order to obtain both the position and the velocity of the UAV relative to the landing pad. After briefly reviewing existing algorithms for the discrete case, we present, in a unified geometric framework, a new estimation scheme for solving the differential case. We further show how the obtained algorithms enable the vision sensor to be placed in the feedback loop as a state observer for landing control. These algorithms are linear, numerically robust, and computationally inexpensive, hence suitable for real-time implementation. We present a thorough performance evaluation of the motion estimation algorithms under varying levels of image measurement noise, altitudes of the camera above the landing pad, and camera motions relative to the landing pad. A landing controller is then designed for a full dynamic model of the UAV. Using geometric nonlinear control theory, the dynamics of the UAV are decoupled into an inner system and an outer system, and the proposed control scheme is based on the differential flatness of the outer system. For the overall closed-loop system, conditions are provided under which exponential stability can be guaranteed. In the closed-loop system, the controller is tightly coupled with the vision-based state estimation, and the only auxiliary sensors are accelerometers measuring the acceleration of the UAV. Finally, we show through simulation results that the designed vision-in-the-loop controller generates stable landing maneuvers even for large levels of image measurement noise. Experiments on a real UAV will be presented in future work.
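Because all features lie on the landing pad's plane, two views of the pad are related by a homography. A standard direct-linear-transform (DLT) estimate is a generic sketch of this planar structure (not the paper's unified discrete/differential scheme):

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (both (N, 2), N >= 4)
    with the direct linear transform. For a camera viewing a planar landing
    pad, H encodes the relative pose up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to (N, 2) points in homogeneous form."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]
```

Decomposing the estimated H (given the camera intrinsics and the plane normal) then yields the relative rotation and scaled translation used for landing control.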

5.
In this paper, two techniques to control UAVs (unmanned aerial vehicles) based on visual information are presented. The first is based on the detection and tracking of planar structures from an onboard camera, while the second is based on the detection and 3D reconstruction of the position of the UAV from an external camera system. Both strategies are tested on a VTOL (vertical take-off and landing) UAV, and results show good behavior of the visual systems (estimation precision and frame rate) when estimating the helicopter's position and using the extracted information to control the UAV.

6.
This paper describes camera-position control with an aerial manipulator for the visual test in bridge inspection. Our unmanned aerial vehicle (UAV) has a three-degree-of-freedom (3-DoF) manipulator on its top to perform the visual or hammering test of an inspection; this paper focuses on the visual test. A camera is mounted at the end of the manipulator to acquire images of narrow spaces of the bridge, such as bearings, where a conventional UAV without a camera-equipped manipulator on top cannot achieve a fine visual test. For the visual test, the camera should be above the body with sufficient distance between the camera and the body. Since the camera position in the inertial coordinate system is clearly affected by the movement of the body, we implement camera-position control that compensates for the body movement. Experimental results show that the proposed control reduces the position error of the camera compared with that of the body: the mean position error of the camera is 0.039 m, which is 51.4% of that of the body. This study, the first of its kind, makes it possible to acquire images of a bridge bearing with a camera mounted on the end effector of an aerial manipulator fixed to a UAV.

7.
Conventional visual simultaneous localization and mapping (VSLAM) algorithms easily lose localization in weakly textured indoor scenes because of missing features. To address this, an active SLAM algorithm based on gimbal control with maximum Fisher information is proposed. The method extends the classic ORB-SLAM2 framework with a Fisher-information-field construction module and a gimbal control module. During visual tracking, 3D space is divided into voxels, and the Fisher information of each voxel is updated according to the spatial distribution of the feature points, building a Fisher information field. When the camera image lacks features, the algorithm first finds the voxel closest (in Euclidean distance) to the camera's optical center and takes the direction of that voxel's maximum Fisher information as the camera's optimal viewing direction; it then computes the camera's deflection angle after the coordinate transformation and rotates the camera to the optimal viewing direction with the onboard gimbal, reacquiring scene features so that the algorithm can relocalize autonomously after feature loss. The improved algorithm was deployed on a quadrotor UAV simulation platform; the results show that where the conventional algorithm fails, the proposed algorithm still estimates the UAV pose accurately in real time, improving the robustness of the system.

8.
To evaluate the stability of a hovering UAV, a hover-state evaluation system based on binocular stereo vision is designed. First, the video images are rectified using binocular camera calibration, and matched feature points are obtained with a proposed neighborhood-grayscale improved hierarchical matching strategy. Then the 3D coordinates of the UAV in the field of view are computed from the reconstruction principle. Finally, a hover-accuracy measurement function evaluates the stability of the UAV. Experiments show that the system matches the target's feature points effectively, computes the target's 3D coordinates, and evaluates the UAV's hover state from the fluctuation relative to the control accuracy.
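The 3D reconstruction step rests on standard rectified-stereo geometry: the disparity of a matched feature gives depth, and the pinhole model back-projects the pixel. A minimal sketch (the focal length, baseline, and principal point below are assumed values, not the system's calibration):

```python
import numpy as np

def stereo_to_3d(uL, vL, uR, f, B, cx, cy):
    """Recover the 3D coordinates of a matched feature from a rectified
    stereo pair: disparity d = uL - uR, depth Z = f * B / d, then
    back-project through the pinhole model of the left camera."""
    d = uL - uR          # disparity in pixels
    Z = f * B / d        # depth from focal length f and baseline B
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])
```

Tracking the reconstructed 3D position over time gives the positional fluctuation that the hover-accuracy function scores.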

9.
This paper presents an implementation of an aircraft pose and motion estimator using visual systems as the principal sensor for controlling an Unmanned Aerial Vehicle (UAV), or as a redundant system for an Inertial Measurement Unit (IMU) and gyro sensors. First, we explore applications of the unified theory for central catadioptric cameras to attitude and heading estimation, explaining how the skyline is projected on the catadioptric image and how it is segmented and used to calculate the UAV's attitude. Then we use appearance images to obtain a visual compass, and we calculate the relative rotation and heading of the aerial vehicle. Additionally, we show the use of a stereo system to calculate the aircraft's height and to measure the UAV's motion. Finally, we present a visual tracking system based on fuzzy controllers working on both a UAV and a camera pan-and-tilt platform. Every part is tested using the UAV COLIBRI platform to validate the different approaches, including comparison of the estimated data with the inertial values measured onboard the helicopter platform and validation of the tracking schemes on real flights.
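The visual-compass idea (estimating relative heading from whole-image appearance rather than tracked features) can be sketched with circular cross-correlation of 1-D panoramic intensity profiles. This FFT-based toy version is an illustration of the principle, not the authors' catadioptric pipeline:

```python
import numpy as np

def visual_compass(ref_profile, cur_profile):
    """Estimate the relative heading (in column units) between two 1-D
    panoramic intensity profiles by maximizing their circular
    cross-correlation, computed efficiently via the FFT."""
    ref = ref_profile - ref_profile.mean()
    cur = cur_profile - cur_profile.mean()
    # corr[k] peaks at the circular shift that aligns cur with ref.
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    return int(np.argmax(corr))
```

With a 360-column panorama, the returned shift maps directly to a heading change in degrees.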

10.
To ensure the effectiveness of mobile-robot visual servo control and improve its precision, a visual servo control system for mobile robots based on virtual-reality technology is designed. The system hardware comprises virtual-environment I/O devices such as a 3D vision sensor and a stereoscopic display, a pose sensor, a visual image processor, and a servo controller. A mathematical model of the mobile robot is built from both kinematics and dynamics; a calibrated vision camera generates real-time visual images of the robot, which are preprocessed through filtering and distortion correction. The visual images are then used to construct a virtual environment for the robot's movement. Using virtual-reality technology, the robot's path is planned through target localization, route generation, collision detection, and route adjustment, and the visual servo control function is realized by computing the control commands. System tests show that the designed control system has a small position-control error, with attitude-angle and moving-speed control errors of only 0.05° and 0.12 m/s, and fewer robot collisions; it delivers good visual servo control performance and effectively improves control precision.

11.
An image-based visual servo control is presented for an unmanned aerial vehicle (UAV) capable of stationary or quasi-stationary flight with the camera mounted onboard the vehicle. The target considered consists of a finite set of stationary and disjoint points lying in a plane. Control of the position and orientation dynamics is decoupled using a visual error based on spherical centroid data, along with estimations of the linear velocity and the gravitational inertial direction extracted from image features and an embedded inertial measurement unit. The visual error used compensates for poor conditioning of the image Jacobian matrix by introducing a nonhomogeneous gain term adapted to the visual sensitivity of the error measurements. A nonlinear controller that ensures exponential convergence of the system considered is derived for the full dynamics of the system using control Lyapunov function design techniques. Experimental results on a quadrotor UAV, developed by the French Atomic Energy Commission, demonstrate the robustness and performance of the proposed control strategy.

12.
Augmented reality (AR) has been a research focus in human-computer interaction in recent years. Adding haptic perception to an AR environment lets users both see and feel virtual objects within a real scene. To achieve more natural interaction with virtual objects in AR, a visuo-haptic 3D registration method is proposed. A 3D registration matrix is obtained with image-vision techniques; the transformation between the haptic space and the image space is solved via spatial transformation relations; and combining both with their relation to the camera space yields a visuo-haptic AR interaction scene. To verify the effectiveness of the method, a robot-assembly project based on visuo-haptic AR was designed: users can touch and move robot parts in the real environment and feel a feedback force while touching, making the interaction more realistic.

13.
Quadrotor UAV images exhibit large attitude tilt angles and obvious image distortion. This study performs feature-point matching and registration experiments on quadrotor UAV images using the scale-invariant feature transform (SIFT) algorithm and the thin-plate spline (TPS) model, and compares the registration performance of the TPS model against the commonly used affine and polynomial transformation models in terms of visual mosaic quality and registration mean-square error. The results show that, given accurate SIFT correspondences, the TPS transformation model accommodates both the global rigid deformation and the local non-rigid deformation of quadrotor UAV images; by both visual inspection and quantitative mean-square-error analysis, TPS achieves the highest registration accuracy and the best results, meeting the requirements of fast registration and mosaicking of quadrotor UAV imagery.

14.
When a UAV flies in complex environments such as indoors, the GPS signal is weak and the accumulated error of the inertial sensors is large, so precise indoor positioning cannot be achieved. This paper proposes a UAV target-positioning method based on a particle-swarm circle-detection algorithm, in which images are preprocessed with an OpenCV vision module and incremental PID (Proportion Integration Differentiation) control is used together with image filtering...
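The incremental (velocity-form) PID mentioned in the abstract computes a control increment from the last three errors rather than an absolute command; a generic sketch with illustrative gains and a toy first-order plant (not the paper's tuned controller):

```python
class IncrementalPID:
    """Incremental (velocity-form) PID: each step outputs the accumulated
    command after adding an increment computed from the last three errors,
    so no separate integrator state is needed."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # error one step back
        self.e2 = 0.0  # error two steps back
        self.u = 0.0   # accumulated control command

    def step(self, error):
        du = (self.kp * (error - self.e1)          # proportional increment
              + self.ki * error                    # integral increment
              + self.kd * (error - 2 * self.e1 + self.e2))  # derivative increment
        self.e2, self.e1 = self.e1, error
        self.u += du
        return self.u

# Regulate a toy first-order plant x' = u - x toward setpoint 1.0.
pid = IncrementalPID(kp=0.8, ki=0.2, kd=0.05)
x = 0.0
for _ in range(200):
    u = pid.step(1.0 - x)
    x += 0.1 * (u - x)  # forward-Euler step, dt = 0.1
```

In the paper's setting, the error fed to the controller would come from the vision module (e.g., the offset of the detected circle from the image center).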

15.
史豪斌, 徐梦, 刘珈妤, 李继超. 《控制与决策》(Control and Decision), 2019, 34(12): 2517-2526
Image-based visual servo control acquires image information through the robot's vision and forms a closed feedback loop on that information to drive reasonable robot motion. In classical visual servoing the servo gain is, in most cases, assigned manually, leading to poor robustness and slow convergence. To address this, a Dyna-Q-based intelligent visual servo control method for rotor UAVs is proposed that tunes the servo gain to improve adaptivity. First, target feature points are extracted with a Freeman-chain-code-based image feature extraction algorithm; then image-based visual servoing forms closed-loop control on the feature error; next, a decoupled visual servo control model is proposed for the strongly coupled, underactuated dynamics of the rotor UAV; finally, a reinforcement-learning model that tunes the servo gain with Dyna-Q learning is established, and through training the rotor UAV can select the servo gain autonomously. Dyna-Q extends classical Q-learning by building an environment model to store experience; virtual samples generated by the model serve as learning samples for value-function iteration. Experimental results show that, compared with traditional PID control and the classical image-based visual servoing method, the proposed method converges faster and is more stable.
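Dyna-Q itself (real Q-learning updates interleaved with replayed transitions from a learned model) can be shown on a toy corridor task; this is an illustration of the algorithm, not the paper's gain-tuning MDP:

```python
import random

def dyna_q(n_states=6, episodes=40, planning=15,
           alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Dyna-Q on a 1-D corridor: action 1 moves right, action 0 moves left,
    and reaching the last state yields reward 1. After each real step, the
    learned model replays `planning` virtual transitions (the Dyna step)."""
    rng = random.Random(seed)
    goal = n_states - 1
    Q = [[1.0, 1.0] for _ in range(n_states)]  # optimistic init drives exploration
    model = {}  # (state, action) -> (reward, next_state)
    for _ in range(episodes):
        s, steps = 0, 0
        while s != goal and steps < 500:
            steps += 1
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] >= Q[s][1] else 1
            s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == goal else 0.0
            target = r if s2 == goal else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])   # real-experience update
            model[(s, a)] = (r, s2)
            for _ in range(planning):               # planning with the model
                ps, pa = rng.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                t = pr if ps2 == goal else pr + gamma * max(Q[ps2])
                Q[ps][pa] += alpha * (t - Q[ps][pa])
            s = s2
    return Q
```

In the paper's scheme the state would instead encode the servoing error and the action a choice of servo gain; the update rule is the same.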

16.
Because aerial UAV images have high overlap between sampling nodes, mismatched pixels are unavoidable during UAV aerial-image registration. A registration process-control technique for UAV aerial images based on neighborhood consistency is therefore proposed. Key image annotation points are identified according to the criteria for delimiting the registration region of UAV aerial images, and a neighborhood-consistency-based image fusion method is designed by computing a consistency-measure index. The fusion results are used to geolocate the aerial images; the overlap between sampling nodes is analyzed from the numerical level of the gray-scale sampling factor, and the aerial images undergo coordinate transformation and geometric correction. From the corrected images, boundary feature points are extracted and used as control points for the registration process, and registration control is realized according to the distribution quality of the control points and the scale-space extrema. Experimental results show that under the neighborhood-consistency principle the overlap between sampling nodes drops markedly, the pixel mismatching problem is resolved, and the registration quality is good.

17.
This paper presents the design of a stable nonlinear control system for the remote visual tracking of cellular robots. The robots are controlled through visual feedback based on processing the image captured by a fixed video camera observing the workspace. The control algorithm is based only on measurements on the image plane of the camera (direct visual control), thus avoiding the problems related to camera calibration. In addition, the camera plane may have any (unknown) orientation with respect to the robot workspace. The controller uses an on-line estimation of the image Jacobians. Considering the Jacobians' estimation errors, the control system is capable of tracking a reference point moving on the image plane (defining the reference trajectory) with an ultimately bounded error. An obstacle avoidance strategy is also developed in the same context, based on the visual impedance concept. Experimental results show the performance of the overall control system.
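On-line image-Jacobian estimation of the kind used here is often done with a rank-1 Broyden correction; this is a generic sketch of that technique, not necessarily the authors' exact estimator:

```python
import numpy as np

def broyden_update(J, dq, df, lam=1.0):
    """Rank-1 Broyden correction: adjust the Jacobian estimate J so that
    J @ dq matches the observed image-feature change df along the
    direction of the joint/velocity increment dq."""
    dq = dq.reshape(-1, 1)
    df = df.reshape(-1, 1)
    return J + lam * (df - J @ dq) @ dq.T / float(dq.T @ dq)

# Identify a constant 2x2 image Jacobian from noiseless motion samples.
rng = np.random.default_rng(0)
J_true = np.array([[2.0, -0.5], [0.3, 1.5]])
J = np.eye(2)  # deliberately wrong initial estimate
for _ in range(200):
    dq = rng.normal(size=2)
    J = broyden_update(J, dq, J_true @ dq)
```

Each update makes the estimate exact along the latest motion direction, so no camera calibration is needed; with noisy measurements, lam < 1 damps the correction.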

18.
Autonomous landing is one of the research hotspots in the UAV field, and navigation information plays a crucial role during autonomous landing. Compared with traditional navigation methods, visual navigation provides richer environmental information and helps improve landing safety. The higher the UAV flies, the smaller the landing marker captured by the onboard camera appears. To improve the UAV's ability to recognize the marker, an improved real-time small-object detection algorithm based on YOLOv5s is proposed. First, an extra detection head is added to the original algorithm to detect smaller-scale objects; then BiFPN replaces the original PANet structure to improve detection at different scales; finally, EIoU Loss replaces CIoU Loss as the loss function, improving the bounding-box regression rate and the overall model performance. The improved algorithm is applied to detecting a QR-code landing marker in autonomous UAV landing scenarios. Experimental results show that, compared with the original YOLOv5s, the improved algorithm has stronger feature extraction and higher detection accuracy for small objects, demonstrating its superiority.
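The EIoU loss mentioned above replaces CIoU's aspect-ratio term with separate width and height penalties, each normalized by the smallest enclosing box; a plain-Python sketch for axis-aligned (x1, y1, x2, y2) boxes:

```python
def eiou_loss(box_a, box_b, eps=1e-9):
    """EIoU loss between two (x1, y1, x2, y2) boxes: 1 - IoU plus penalties
    on center distance, width difference, and height difference, each
    normalized by the smallest enclosing box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection-over-union term.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter + eps)
    # Smallest enclosing box dimensions.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    # Squared center distance plus width/height difference penalties.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    dw2 = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh2 = ((ay2 - ay1) - (by2 - by1)) ** 2
    return (1.0 - iou + rho2 / (cw ** 2 + ch ** 2 + eps)
            + dw2 / (cw ** 2 + eps) + dh2 / (ch ** 2 + eps))
```

Penalizing width and height directly (rather than their ratio, as CIoU does) is what speeds up bounding-box regression, which matters for small targets like a distant landing marker.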

19.
Three-dimensional reconstruction of colored targets with a Time-of-Flight (TOF) camera requires calibrating the geometric parameters of the joint CCD-TOF camera system. Building on existing calibration algorithms for color images and TOF depth images, a calibration method based on a planar checkerboard template is proposed. Color images and amplitude images of a checkerboard pattern fixed on a planar calibration template were captured at different angles, and Harris corner extraction was improved. From the conjugate relation between the checkerboard corners and the virtual image points, a camera-calibration system model was established and solved with the Levenberg-Marquardt algorithm in calibration experiments. The intrinsic parameters of the TOF and CCD cameras were obtained; the relative pose of the two camera coordinate systems was estimated from the pose relation between the image planes; and a final joint optimization yielded the rotation matrix and translation vector between the cameras. Experimental results show that the proposed algorithm streamlines the solution process, improves calibration efficiency, and achieves high accuracy.

20.
In this paper, the visual servoing problem is addressed by coupling nonlinear control theory with a convenient representation of the visual information used by the robot. The visual representation, which is based on a linear camera model, is extremely compact to comply with active vision requirements. The devised control law is proven to ensure global asymptotic stability in the Lyapunov sense, assuming exact model and state measurements. It is also shown that, in the presence of bounded uncertainties, the closed-loop behavior is characterized by a global attractor. The well-known pose ambiguity arising from the use of linear camera models is solved at the control level by choosing a hybrid visual state vector including both image-space (2D) information and 3D object parameters. A method is expounded for on-line visual state estimation that avoids camera calibration. Simulation and real-time experiments validate the theoretical framework in terms of both system convergence and control robustness.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号