Similar Literature
20 similar documents found (search time: 15 ms)
1.
This paper proposes a new visual servoing quasi-min-max MPC algorithm for stabilization control of an omnidirectional wheeled mobile robot subject to physical and visual constraints. The visual servoing dynamics of the robot are modeled as a state-dependent linear error system with nonlinear control inputs given by the rotation and deflection velocities of the wheels. The state-dependent linear error system is recast as a linear parameter-varying model, which is used to design the visual servoing quasi-min-max MPC controller. The actual control inputs of the robot are then computed by solving an inverse algebraic equation of the MPC actions. Recursive feasibility and stability of the new visual servoing MPC are ensured by a set of LMI conditions. The performance and practicability of the visual servoing MPC are verified by simulation and experimental results.
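
The last step of the pipeline above maps the MPC actions back to actual wheel commands through an inverse algebraic relation. As a point of reference only (not the paper's specific equation), the sketch below shows the standard inverse kinematics of a three-wheel omnidirectional platform, mapping body-frame velocities to wheel angular speeds; the wheel angles, offset `L`, and radius `r` are illustrative assumptions.

```python
import numpy as np

def omni_wheel_speeds(vx, vy, omega, L=0.2, r=0.05,
                      wheel_angles=(0.0, 2*np.pi/3, 4*np.pi/3)):
    """Map body-frame velocities (vx, vy, omega) to wheel angular speeds
    for a three-wheel omnidirectional robot (illustrative geometry)."""
    speeds = []
    for a in wheel_angles:
        # Linear speed of the wheel contact point along its rolling direction
        v_wheel = -np.sin(a) * vx + np.cos(a) * vy + L * omega
        speeds.append(v_wheel / r)   # convert to wheel angular speed
    return np.array(speeds)

# Example: pure rotation of the platform at 1 rad/s
print(omni_wheel_speeds(0.0, 0.0, 1.0))
```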

2.
The trajectory tracking control problem of dynamic nonholonomic wheeled mobile robots is considered via visual servoing feedback. A kinematic controller is first presented for the kinematic model, and then an adaptive sliding mode controller is designed for the uncertain dynamic model in the presence of parametric uncertainties associated with the camera system. The proposed controller is robust not only to structured uncertainties such as mass variation but also to unstructured ones such as disturbances. The asymptotic convergence of the tracking errors to the equilibrium point is rigorously proved by the Lyapunov method. Simulation results are provided to illustrate the performance of the control law.

3.
The trajectory tracking control problem of dynamic nonholonomic wheeled mobile robots is considered via visual servoing feedback. A kinematic controller is first presented for the kinematic model, and ...

4.
On the basis of the kinematic model of a unicycle mobile robot in polar coordinates, an adaptive visual servoing strategy is proposed to regulate the mobile robot to its desired pose. By regarding the unknown depth as model uncertainty, the system error vector can be chosen as measurable signals that are reconstructed by a motion estimation technique. Then, an adaptive controller is carefully designed along with a parameter updating mechanism to compensate for the unknown depth information online. On the basis of Lyapunov techniques and LaSalle's invariance principle, rigorous stability analysis is conducted. Because the control law is designed on the polar-coordinate representation of the error dynamics, the resulting maneuver behavior is natural and the resulting path is short. Experimental results are provided to verify the performance of the proposed approach. Copyright © 2013 John Wiley & Sons, Ltd.

5.
This work presents an automated solution for tool changing in industrial robots using visual servoing and sliding mode control. The robustness of the proposed method stems from the visual servoing control law, which uses the information acquired by a vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritised level to satisfy the constraints typically present in a robot system: joint range limits, maximum joint speeds and the allowed workspace. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints. The feasibility and effectiveness of the proposed approach are substantiated by simulation results for a complex 3D case study. Moreover, real experiments with a 6R industrial manipulator are also presented to demonstrate the applicability of the method to tool changing.

6.
This paper presents a new proposal for positioning and guiding mobile robots in indoor environments. The proposal is based on the information provided by static cameras located in the movement environment, and falls within the scope of what are known as intelligent environments: the environment is equipped with cameras that, once calibrated, allow the position of the robots to be obtained. Based on this information, control orders for the robots can be generated over a radio frequency link. In order to facilitate identification of the robots, even under extremely adverse ambient lighting conditions, a beacon consisting of four circular elements built from infrared diodes is mounted on board each robot. To identify the beacon, an edge detection process is carried out, followed by a process that, based on the algebraic distance, estimates the ellipse associated with each element of the beacon. Once the beacon has been identified, the coordinates of the centroids of its elements are obtained on the various image planes. Based on these coordinates, an algorithm is proposed that takes into account the standard deviation of the error produced by the various cameras when ascertaining the coordinates of the beacon's elements. An odometric system is also used for guidance which, in conjunction with a Kalman filter, allows the position of the robot to be estimated during the time intervals required to process the visual information provided by the cameras.
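
The per-camera error weighting described above can be illustrated with a generic inverse-variance fusion step. This is a minimal sketch, not the authors' algorithm; the camera estimates and standard deviations in the example are hypothetical.

```python
import numpy as np

def fuse_camera_estimates(estimates, sigmas):
    """Fuse per-camera coordinate estimates by inverse-variance weighting.

    estimates: (n_cameras, dim) coordinates of one beacon element
    sigmas:    (n_cameras,) standard deviation of each camera's error
    """
    estimates = np.asarray(estimates, dtype=float)
    inv_var = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w = inv_var / inv_var.sum()                     # normalised weights
    fused = (w[:, None] * estimates).sum(axis=0)    # weighted average
    fused_sigma = np.sqrt(1.0 / inv_var.sum())      # fused uncertainty
    return fused, fused_sigma

# Hypothetical estimates of one beacon centroid from three ceiling cameras
est = [[1.02, 2.10], [0.98, 2.04], [1.05, 2.12]]
sig = [0.05, 0.02, 0.08]
print(fuse_camera_estimates(est, sig))
```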

7.
This paper presents a novel approach to image-based visual servoing, extending existing works that use the trifocal tensor (TT) as the source of image measurements. In the proposed approach, the singularities typically encountered in this kind of method are avoided. A formulation of the TT-based control problem with a virtual target, obtained by vertically translating the real target, allows us to design a single controller able to regulate the robot pose towards the desired configuration without local minima. In this context, we introduce a super-twisting control scheme that guarantees continuous control inputs while exhibiting strong robustness properties. Our approach is valid for perspective cameras as well as for catadioptric systems obeying the central camera model. All these contributions are supported by numerical simulations and by experiments in a popular dynamic robot simulator.
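
The super-twisting algorithm mentioned above is a second-order sliding mode law that yields a continuous control signal. The following is a generic minimal sketch of that law; the gains, time step, and toy plant are illustrative assumptions, not the paper's design.

```python
import numpy as np

def super_twisting_step(s, z, k1, k2, dt):
    """One Euler step of the super-twisting law on sliding variable s:
    u = -k1*sqrt(|s|)*sign(s) + z,   dz/dt = -k2*sign(s).
    Returns the continuous control u and the updated integral state z."""
    u = -k1 * np.sqrt(abs(s)) * np.sign(s) + z
    z = z - k2 * np.sign(s) * dt
    return u, z

# Illustrative closed loop: first-order plant ds/dt = u + d with a bounded
# matched disturbance d; the sliding variable s is driven toward zero.
s, z, dt = 1.0, 0.0, 0.01
for k in range(1000):
    u, z = super_twisting_step(s, z, k1=1.5, k2=1.1, dt=dt)
    d = 0.2 * np.sin(0.5 * k * dt)   # bounded disturbance
    s += (u + d) * dt
print(f"final |s| = {abs(s):.4f}")
```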

8.
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers interact only with their neighbours. First, a camera-objective visual kinematic model is introduced for each mobile robot by utilising the pinhole camera model. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of the method.

9.
To address the complexity and poor generality of marking, extracting, and matching geometric image features in traditional visual servoing methods, this paper proposes an image-moment-based four-degree-of-freedom (4-DOF) visual servoing method for robots. First, the nonlinear incremental mapping between image moments and robot pose in an eye-in-hand system is established, providing a theoretical basis for robot visual servoing control based on image moments. Then, without calibrating the camera or the hand-eye relationship, a visual servoing control scheme based on image moments is designed by exploiting the nonlinear mapping capability of a back-propagation (BP) neural network. Finally, the trained network is used for visual servoing tracking control. Experimental results show that the algorithm achieves a tracking accuracy of 0.5 mm in position and 0.5° in orientation, verifying its effectiveness and good servoing performance.
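
The BP-network mapping described above can be sketched generically as a small one-hidden-layer regression network from image-moment feature errors to 4-DOF pose increments. The layer sizes, learning rate, and plain gradient-descent backpropagation below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

class MomentServoNet:
    """One-hidden-layer BP network mapping image-moment feature errors to
    4-DOF pose increments (dx, dy, dz, dtheta). Sizes and learning rate
    are placeholders for illustration."""

    def __init__(self, n_in=6, n_hidden=20, n_out=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2         # predicted pose increment

    def train_step(self, x, target):
        y = self.forward(x)
        err = y - target                          # output error
        # Backpropagate the squared-error gradient through both layers
        gW2 = np.outer(self.h, err)
        gh = (self.W2 @ err) * (1.0 - self.h ** 2)
        gW1 = np.outer(x, gh)
        self.W2 -= self.lr * gW2
        self.b2 -= self.lr * err
        self.W1 -= self.lr * gW1
        self.b1 -= self.lr * gh
        return float((err ** 2).mean())
```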

10.
For a class of nonholonomic mobile robots with visual servoing feedback, the finite-time saturated stabilization problem under uncertain visual parameters is addressed. Using a multi-step control strategy and finite-time stability theory, a piecewise-continuous saturated control law is designed so that the states of the closed-loop system converge to the equilibrium point in finite time. Simulation results verify the effectiveness of the method.

11.
We examined human navigational principles for intercepting a projected object and tested their application in the design of navigational algorithms for mobile robots. These perceptual principles utilize a viewer-based geometry that allows the robot to approach the target without the need for time-consuming calculations to determine the world coordinates of either itself or the target. Human research supports the use of an Optical Acceleration Cancellation (OAC) strategy to achieve interception. Here, the fielder selects a running path that nulls out the acceleration of the retinal image of an approaching ball, and maintains an image that rises at a constant rate throughout the task. We compare two robotic control algorithms for implementing the OAC strategy in cases in which the target remains in the sagittal plane, headed directly toward the robot (which only moves forward or backward). In the “passive” algorithm, the robot keeps the orientation of the camera constant, and the image of the ball rises at a constant rate. In the “active” algorithm, the robot maintains a camera fixation that is centered on the image of the ball and keeps the tangent of the camera angle rising at a constant rate. Performance was superior with the active algorithm in both computer simulations and trials with actual mobile robots. The performance advantage is principally due to the higher gain and effectively wider viewing angle when the camera remains centered on the ball image. The findings confirm the viability and robustness of human perceptual principles in the design of mobile robot algorithms for tasks such as interception. Thomas Sugar works in the areas of mobile robot navigation and wearable robotics that assist the gait of stroke survivors. In mobile robot navigation, he is interested in combining human perceptual principles with mobile robotics. He majored in business and mechanical engineering for his Bachelor's degrees and in mechanical engineering for his Doctoral degree, all from the University of Pennsylvania. In industry, he worked as a project engineer for W. L. Gore and Associates. He has been a faculty member in the Department of Mechanical and Aerospace Engineering and the Department of Engineering at Arizona State University. His research is currently funded by three grants from the National Science Foundation and the National Institutes of Health, and focuses on perception and action, and on wearable robots using tunable springs. Michael McBeath works in the area combining Psychology and Engineering. He majored in both fields for his Bachelor's degree from Brown University and again for his Doctoral degree from Stanford University. Parallel to his academic career, he worked as a research scientist at NASA Ames Research Center and at the Interval Corporation, a technology think tank funded by Microsoft co-founder Paul Allen. He has been a faculty member in the Department of Psychology at Kent State University and at Arizona State University, where he is Program Director for the Cognition and Behavior area and is on the Executive Committee for the interdisciplinary Arts, Media, and Engineering program. His research is currently funded by three grants from the National Science Foundation, and focuses on perception and action, particularly in sports. He is best known for his research on navigational strategies used by baseball players, animals, and robots.
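
The “active” algorithm described above keeps the tangent of the ball-centered camera elevation angle rising at a constant rate, i.e. it nulls the second time derivative of tan(α). The sketch below is a minimal, hedged illustration of such a controller; the gain, sign convention, and finite-difference estimation are assumptions rather than the authors' implementation.

```python
import numpy as np

class ActiveOACController:
    """Sketch of the 'active' Optical Acceleration Cancellation strategy:
    with the camera fixated on the ball, drive the robot forward or backward
    so that tan(camera elevation angle) rises at a constant rate."""

    def __init__(self, gain, dt):
        self.gain = gain              # illustrative proportional gain
        self.dt = dt
        self.prev_tan = None
        self.prev_rate = None

    def update(self, alpha):
        """alpha: current camera elevation angle to the ball [rad].
        Returns a forward (+) / backward (-) acceleration command."""
        t = np.tan(alpha)
        if self.prev_tan is None:
            self.prev_tan = t
            return 0.0
        rate = (t - self.prev_tan) / self.dt
        accel_cmd = 0.0
        if self.prev_rate is not None:
            optical_accel = (rate - self.prev_rate) / self.dt
            # tan(alpha) accelerating -> ball will land behind -> back up
            accel_cmd = -self.gain * optical_accel
        self.prev_tan, self.prev_rate = t, rate
        return accel_cmd
```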

12.
A method is described that recovers the 3-D shape of deformable objects, particularly human motions, from mobile stereo images. In the proposed technique, camera calibration is not required when taking images. Existing optical 3-D modeling systems must employ calibrated cameras set at fixed positions, which inevitably constrains the range of movement of an object. In the proposed method, multiple mobile cameras take images of a deformable object moving freely, and its 3-D model is reconstructed from the video image streams obtained. The advantages of the proposed method are that the cameras employed are calibration-free and that the image-taking cameras can move freely. The theory is described, and the performance is demonstrated by an experiment on 3-D human motion modeling in an outdoor environment. The accuracy of the 3-D model obtained is evaluated and discussed. This work was presented in part at the 10th International Symposium on Artificial Life and Robotics, Oita, Japan, February 4–6, 2005.

13.
Time to contact or time to collision (TTC) is information of the utmost importance for animals as well as for mobile robots, because it enables them to avoid obstacles; it is a convenient way to analyze the surrounding environment. The problem of TTC estimation has been widely discussed for perspective images. Although many works have shown the value of omnidirectional cameras for robotic applications such as localization, motion estimation, and monitoring, few works use omnidirectional images to compute the TTC. In this paper, we show that TTC can also be estimated from catadioptric images. We present two approaches for TTC estimation that use the optical flow, directly or indirectly, based on a de-rotation strategy. The first, called “gradient-based TTC”, is simple and fast and does not need an explicit estimation of the optical flow. Nevertheless, this method cannot provide a TTC value at each pixel, is valid only for para-catadioptric sensors, and requires an initial segmentation of the obstacle. The second method, called “TTC map estimation based on optical flow”, estimates the TTC at each point of the image, provides a depth map of the environment for any obstacle in any direction, and is valid for all central catadioptric sensors. Results and comparisons on synthetic and real images are given.
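
For a purely translational (de-rotated) flow field, the classical relation TTC ≈ r / (dr/dt) holds, where r is the image distance to the focus of expansion and dr/dt is the radial component of the flow. The sketch below computes such a TTC map from a given flow field; it illustrates only this general relation, not the paper's gradient-based method, and the focus-of-expansion input is an assumption.

```python
import numpy as np

def ttc_map_from_flow(flow, foe, eps=1e-6):
    """Compute a TTC map (in frames) from a de-rotated optical-flow field.

    flow: (H, W, 2) per-pixel flow (u, v) in pixels/frame
    foe:  (x, y) image coordinates of the focus of expansion
    Returns an (H, W) array; inf where the flow is not expanding.
    """
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    dx, dy = xs - foe[0], ys - foe[1]
    r = np.sqrt(dx**2 + dy**2) + eps
    # Radial component of the flow (positive = expanding away from the FOE)
    radial = (flow[..., 0] * dx + flow[..., 1] * dy) / r
    return np.where(radial > eps, r / radial, np.inf)
```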

14.
This paper presents an approach to adaptive trajectory tracking of mobile robots that combines feedback linearization based on a nominal model with an RBF-NN adaptive dynamic compensation. For a robot with uncertain dynamic parameters, two controllers are implemented separately: a kinematics controller and an inverse dynamics controller. The uncertainty in the nominal dynamics model is compensated by a neural adaptive feedback controller. The resulting adaptive controller is efficient and robust in the sense that it achieves good tracking performance with a small computational effort. An analysis of the effect of the RBF-NN approximation error on the control errors is included. Finally, the performance of the control system is verified through experiments.
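
A generic form of the RBF-NN compensation named above is an adaptive estimate f̂(x) = Wᵀφ(x) with Gaussian basis functions and a Lyapunov-style weight update driven by the tracking error. The sketch below is a minimal illustration under those assumptions; the centers, widths, and adaptation gain are placeholders, not the paper's design.

```python
import numpy as np

class RBFCompensator:
    """Adaptive RBF-NN term estimating the uncertain part of the dynamics:
    f_hat(x) = W^T phi(x), with weights adapted from the tracking error."""

    def __init__(self, centers, width, n_outputs, gamma):
        self.centers = np.asarray(centers)       # (n_neurons, n_inputs)
        self.width = width                       # Gaussian width (assumed)
        self.W = np.zeros((len(self.centers), n_outputs))
        self.gamma = gamma                       # adaptation gain (assumed)

    def phi(self, x):
        d2 = ((self.centers - x) ** 2).sum(axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))   # Gaussian activations

    def compensate(self, x):
        return self.W.T @ self.phi(x)            # estimated disturbance term

    def adapt(self, x, tracking_error, dt):
        # W_dot = gamma * phi(x) * e^T  (outer-product update rule)
        self.W += self.gamma * np.outer(self.phi(x), tracking_error) * dt
```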

15.
We propose a novel calibration method for catadioptric systems made up of an axially symmetric mirror and a pinhole camera whose optical center is located on the mirror axis. The calibration estimates the relative camera/mirror position and the extrinsic rotation and translation w.r.t. the world frame. The procedure requires a single image of a (possibly planar) calibration object. We show how most of the calibration parameters can be estimated using linear methods (the Direct Linear Transformation algorithm) and the cross-ratio. The two remaining parameters are obtained by non-linear optimization. We present experimental results on simulated and real images.
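
The linear part of the estimation relies on the Direct Linear Transformation. As a generic reference only (not the paper's full calibration pipeline), the sketch below estimates a planar homography from point correspondences with the standard DLT; input normalisation is omitted for brevity.

```python
import numpy as np

def dlt_homography(pts_src, pts_dst):
    """Estimate the homography H (pts_dst ~ H @ pts_src) from >= 4 point
    correspondences using the Direct Linear Transformation.
    pts_src, pts_dst: iterables of (x, y) pairs."""
    A = []
    for (x, y), (u, v) in zip(pts_src, pts_dst):
        # Two rows of the DLT design matrix per correspondence
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)   # null-space vector reshaped as H
```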

16.
This article deals with the development of learning methods for an intelligent control system for an autonomous mobile robot. On the basis of visual servoing, an approach to learning the skill of tracking colored guidelines is proposed. This approach utilizes a robust and adaptive image processing method to acquire features of the colored guidelines and convert them into the controller input. The supervised learning procedure and the neural network controller are discussed, and the methods of obtaining the learning data and training the neural network are described. Experimental results are presented at the end of the article. This work was presented, in part, at the Sixth International Symposium on Artificial Life and Robotics, Tokyo, Japan, January 15–17, 2001.

17.
The stabilisation problem of stochastic non-holonomic mobile robots with uncertain parameters is addressed in this paper from a visual servoing perspective. The visual servoing model of non-holonomic mobile robots is extended to the stochastic case, in which both the forward velocity and the angular velocity are subject to stochastic disturbances. Based on the backstepping technique, state-feedback stabilising controllers are designed for the stochastic non-holonomic mobile robots, and a switching control strategy for the original system is presented. The proposed controllers guarantee that the closed-loop system is asymptotically stabilised at the zero equilibrium point in probability.

18.
A robust visual servoing predictive control strategy is proposed for the visual servoing system of a mobile robot with uncertain payload, subject to visibility and actuator constraints. First, the mobile robot visual servoing system is modeled as an uncertain system in terms of the visual servoing error and the actuation. Second, for the constrained visual servoing error subsystem, a velocity-planning predictive control algorithm based on semidefinite programming is designed. The algorithm is split into an offline computation part and an online scheduling part, reducing the online computational load of the predictive control algorithm. As for the ...

19.
For the pose stabilization problem of mobile robots, this paper proposes a command-filtered backstepping control strategy based on a visual simultaneous localization and mapping (SLAM)–servoing framework. Specifically, smooth velocity signals, obtained by integrating an acceleration-level controller, reduce the predicted pose error of the visual SLAM module; a command filter is then applied to simplify the complex differentiation involved in the controller design, lightening the computational burden; in addition, the SLAM module fuses motion and visual information to resolve the unknown-scale problem, reducing the controller design complexity caused by unknown depth. The stability of the closed-loop system is proved via Lyapunov theory. Simulation and experimental results verify the effectiveness of the proposed algorithm.
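
The command filter referred to above is typically a second-order filter that returns a smoothed virtual control together with its time derivative, removing the need for analytic differentiation in the backstepping design. Below is a minimal generic sketch under that assumption; the natural frequency and damping values are placeholders.

```python
class CommandFilter:
    """Second-order command filter as commonly used in command-filtered
    backstepping: outputs a smoothed command and its time derivative."""

    def __init__(self, omega_n=20.0, zeta=0.9):
        self.omega_n, self.zeta = omega_n, zeta   # illustrative parameters
        self.x = 0.0      # filtered command
        self.xd = 0.0     # its time derivative

    def step(self, raw_command, dt):
        # x_ddot = -2*zeta*omega_n*x_dot - omega_n^2*(x - raw_command)
        xdd = (-2.0 * self.zeta * self.omega_n * self.xd
               - self.omega_n ** 2 * (self.x - raw_command))
        self.x += self.xd * dt
        self.xd += xdd * dt
        return self.x, self.xd
```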

20.
To overcome the limited field of view caused by mounting a conventional camera on a robot, a panoramic (omnidirectional) camera is adopted. In view of the excessive approximations and assumptions involved in approximately linear input-output feedback control models, a method combining epipolar geometry and triangle geometry is proposed to realize image-based visual servoing for mobile robots in a simpler way. Since the approach is image-based, no prior knowledge of the 3-D scene structure is required. Simulation results demonstrate the effectiveness of the method.
