Similar Documents
20 similar documents found.
1.
Model-independent uncalibrated visual servo control    (Cited: 13; self-citations: 1; by others: 13)
This paper first introduces the general principles of visual servoing, and then proposes a model-independent, uncalibrated visual servo control method that requires neither a robot model nor a camera model. A model-independent, uncalibrated visual servo control law is derived from the principle of variance minimization, and a recursive formula for the image Jacobian matrix is also given. Finally, a trajectory-tracking simulation verifies the correctness and effectiveness of the algorithm.
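The abstract does not reproduce its recursive image-Jacobian formula; as an illustrative sketch only (not the paper's derivation), a Broyden-style secant update commonly used in uncalibrated visual servoing, with assumed function names and gains, might look like:

```python
import numpy as np

def broyden_jacobian_update(J, delta_f, delta_q, alpha=0.5):
    """Secant-style (Broyden) update of the estimated image Jacobian.

    J       : current estimate, shape (m, n), mapping joint increments to feature increments
    delta_f : observed change in image features, shape (m,)
    delta_q : commanded change in robot joints, shape (n,)
    alpha   : assumed update gain in (0, 1]
    """
    denom = float(delta_q @ delta_q)
    if denom < 1e-9:                      # skip the update for a near-zero motion
        return J
    residual = delta_f - J @ delta_q
    return J + alpha * np.outer(residual, delta_q) / denom

def servo_step(J, feature_error, gain=0.1):
    """One uncalibrated visual-servo step: joint velocity from the feature error."""
    return -gain * np.linalg.pinv(J) @ feature_error
```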

2.
This paper discusses the principles for the acquisition of a three-dimensional (3-D) computational model of the treatment area of a burn victim for a vision-servo-guided robot which ablates the victim's burned skin tissue by delivering high-energy laser light to the burned tissue. The medical robotics assistant system consists of: a robot whose end effector is equipped with a laser head, from which the laser beam emanates, and a vision system which is used to acquire the 3-D coordinates of points on the body surface; 3-D surface modeling routines for generating the surface model of the treatment area; and control and interface hardware and software for control and integration of all the system components. The vision and surface modeling components of the medical robotics assistant system are the focus of this paper. The robot-assisted treatment process has two phases: an initial survey phase, during which a model of the treatment area on the skin is built and used to plan an appropriate trajectory for the robot in the subsequent phase, and the treatment phase, during which the laser surgery is performed. During the survey phase, the vision system uses a camera to acquire points on the surface of the patient's body by capturing the contour traced by a plane of light generated by a low-power laser, distinct from the treatment laser. The camera's image is then processed. Selected points on the camera's two-dimensional image frame are used as input to a process that generates 3-D body surface points as the intersection of the plane of light and the line of sight between the camera's image point and the body surface point. The acquired body surface points are then used to generate a computational model of the treatment area using the non-uniform rational B-splines (NURBS) surface modeling technique. The constructed NURBS surface model is used to generate a treatment plan for the execution of the treatment phase. The robot plan for treatment is discussed in another paper. The prototype of the entire burn treatment system is at an advanced stage of development, and tests of the engineering principles on inanimate objects, discussed herein, are being conducted.
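The intersection of the laser light plane with a camera line of sight described above can be computed directly; the following is a minimal sketch under assumed notation (camera center, back-projected ray direction, and a point/normal parameterization of the laser plane), not the paper's actual implementation:

```python
import numpy as np

def ray_plane_intersection(camera_center, ray_dir, plane_point, plane_normal):
    """Intersect a camera line of sight with the laser light plane.

    camera_center : 3-vector, camera optical center in world coordinates
    ray_dir       : 3-vector, back-projected direction of an image point
    plane_point   : 3-vector, any known point on the laser plane
    plane_normal  : 3-vector, unit normal of the laser plane
    Returns the 3-D body surface point where the ray meets the plane.
    """
    denom = float(plane_normal @ ray_dir)
    if abs(denom) < 1e-9:
        raise ValueError("Line of sight is parallel to the laser plane")
    t = float(plane_normal @ (plane_point - camera_center)) / denom
    return camera_center + t * ray_dir
```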

3.
A vision-based scheme for object recognition and transport with a mobile robot is proposed in this paper. First, camera calibration is performed experimentally with Zhengyou Zhang's method, and a distance measurement method using the monocular camera is presented and tested. Second, a Kalman filtering algorithm is used to predict the movement of a target, with the HSI color model as the input and the seed-filling algorithm as the image segmentation approach. Finally, the motion control of the pan-tilt camera and the mobile robot is designed to fulfill the tracking and transport task. The experimental results demonstrate the robust object recognition and fast tracking capabilities of the proposed scheme.
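As a hedged illustration of the prediction step, a minimal constant-velocity Kalman filter for a 2-D image target is sketched below; the state model, frame interval, and noise covariances are assumptions, since the abstract does not specify them:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2-D target tracking (illustrative values).
dt = 0.1                                   # assumed frame interval [s]
F = np.array([[1, 0, dt, 0],               # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # only the image position is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                       # assumed process noise covariance
R = np.eye(2) * 1e-2                       # assumed measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle given state x, covariance P, and measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```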

4.
Visual servo optimization with adaptive depth estimation    (Cited: 1; self-citations: 0; by others: 1)
In eye-in-hand robot visual servoing, there is still no satisfactory way to determine the velocity of the camera mounted on the robot end-effector or to estimate the depth of the object effectively. This paper adopts a general-model approach and designs the camera velocity by solving an optimal control problem; at the same time, using depth estimates at the object's initial and desired positions, an adaptive estimation algorithm is proposed to estimate the object depth and give its variation trend, realizing image-based positioning control. The method allows the robot to reach the desired position from any initial position within the workspace and achieves global asymptotic stability of the system, requiring neither the geometric model of the object nor an accurate depth value. A simulation example demonstrates the effectiveness of the method.
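The paper's optimization-based velocity design is not reproduced here; as a simplified stand-in, the classic image-based visual servoing law for point features, in which the unknown depth is replaced by an estimate Z_hat, can be sketched as follows (feature coordinates are assumed to be normalized):

```python
import numpy as np

def point_interaction_matrix(x, y, Z_hat):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y)
    for the camera twist [vx, vy, vz, wx, wy, wz], using an estimated depth Z_hat."""
    return np.array([
        [-1.0 / Z_hat, 0.0, x / Z_hat, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z_hat, y / Z_hat, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, Z_hats, gain=0.5):
    """Stack per-point interaction matrices and compute a camera twist
    that drives the feature error toward zero (classic IBVS law)."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, Z_hats)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```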

5.
Human-simulated control of a two-dimensional single inverted pendulum    (Cited: 25; self-citations: 2; by others: 25)
Taking a two-dimensional single inverted pendulum as the controlled plant, a nonlinear control law is formed using the idea of human-simulated control, and the relative relationships among the feedback coefficients are determined. Through online tuning, the qualitative control law is quantified, and the two-dimensional single inverted pendulum is successfully stabilized. Compared with traditional control system design, the method does not depend on a mathematical model and is not subject to linearity constraints; compared with fuzzy control, it does not require experience gained from humans directly controlling the plant. The human-simulated control method is simple to design, and the resulting control system is fairly robust.

6.
杨芳, 王朝立. 自动化学报 (Acta Automatica Sinica), 2011, 37(7): 857-864
This paper studies the stabilization problem of a nonholonomic dynamic mobile robot observed by a ceiling-mounted camera system. First, a camera-object visual servoing kinematic model is introduced using the pinhole camera model, and a kinematic stabilizing controller is given for this model. Then, under uncertain camera parameters, an adaptive sliding-mode controller is designed to stabilize the uncertain dynamic mobile robot. The proposed controller is robust not only to structured uncertainties such as mass variation but also to unstructured uncertainties such as external disturbances. The stability of the proposed control system and the boundedness of the estimated parameters are rigorously proved via the Lyapunov method. Simulation results confirm the effectiveness of the control law.

7.
In an intelligent space, the main tasks of a home service robot are to assist people in searching for, locating, and delivering objects, and visual servoing is an effective means of accomplishing these tasks. A home-service-robot visual servoing system consisting of a mobile robot, a manipulator, and a camera was built; the kinematic model of the system was established, and the intrinsic and extrinsic parameters of the vision system mounted on the manipulator's end-effector were calibrated. The pose parameters of the target object are obtained by decomposing the homography of the world plane, and a position-based visual servo control law is designed from these pose parameters. Experimental results show that designing the control law via planar homography decomposition accomplishes the household-object visual servoing task simply and effectively.
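A minimal sketch of extracting a pose from the world-plane homography, in the spirit of the approach described above but not taken from the paper (the point sets, intrinsic matrix, and sign handling are assumptions), could look like:

```python
import numpy as np
import cv2

def pose_from_plane_homography(plane_pts, img_pts, K):
    """Recover the camera pose of a planar target from its homography.

    plane_pts : Nx2 metric coordinates of feature points on the world plane (Z = 0)
    img_pts   : Nx2 pixel coordinates of the same points in the image
    K         : 3x3 intrinsic matrix from calibration
    Returns (R, t) such that a plane point P = (X, Y, 0) projects to K (R P + t).
    """
    H, _ = cv2.findHomography(plane_pts.astype(np.float32),
                              img_pts.astype(np.float32), cv2.RANSAC)
    B = np.linalg.inv(K) @ H                       # H ~ K [r1 r2 t] for the plane Z = 0
    lam = np.sign(B[2, 2]) / np.linalg.norm(B[:, 0])   # sign chosen so the plane is in front
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)                    # re-orthogonalize the rotation
    return U @ Vt, t
```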

8.
吴健康, 高枫. 机器人 (Robot), 1990, 12(5): 35-39
The representation and recognition of 3-D objects is a core problem in image understanding and scene analysis, and 3-D models play a very important role in 3-D object recognition and scene analysis. A 3-D model should be object-centered and able to provide all the useful information about the scene: the size, shape, and orientation of an object should all be extractable from the model. This paper proposes a new 3-D object model, the generalized object-centered run-length code (GORC). It comprises the object's GORC physical data structure, a detailed shape description, and an abstract description. Higher-level representations of an object can be extracted directly from the GORC-encoded physical data. The 3-D GORC is an extension of 2-D object-centered run-length coding to three dimensions, combining the advantages of volumetric and surface representations. A GORC model of a 3-D object can easily be constructed from its depth information, and projection, image-algebra operations, and feature extraction based on GORC can all be implemented very efficiently.
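The GORC data structure itself is not detailed in the abstract; the sketch below only illustrates the underlying idea it generalizes, namely run-length coding a voxel volume along one axis, with all names and the binary-occupancy assumption being illustrative:

```python
import numpy as np

def run_length_encode_voxels(volume):
    """Run-length encode a binary voxel grid along its last (z) axis.

    volume : boolean array of shape (X, Y, Z)
    Returns a dict mapping (x, y) to a list of (z_start, length) runs of
    occupied voxels, a simple object-centered volumetric representation.
    """
    runs = {}
    X, Y, Z = volume.shape
    for x in range(X):
        for y in range(Y):
            column = volume[x, y]
            col_runs, start = [], None
            for z in range(Z):
                if column[z] and start is None:
                    start = z                      # a run of occupied voxels begins
                elif not column[z] and start is not None:
                    col_runs.append((start, z - start))
                    start = None
            if start is not None:                  # run reaching the end of the column
                col_runs.append((start, Z - start))
            if col_runs:
                runs[(x, y)] = col_runs
    return runs
```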

9.
Three-dimensional (3-D) models of outdoor scenes are widely used for object recognition, navigation, mixed reality, and so on. Because such models are often built manually at high cost, automatic 3-D reconstruction has been widely investigated. In related work, a dense 3-D model is generated by using a stereo method. However, such approaches cannot use several hundred images together for dense depth estimation, because it is difficult to calibrate a large number of cameras accurately. In this paper, we propose a dense 3-D reconstruction method that first estimates the extrinsic camera parameters of a hand-held video camera and then reconstructs a dense 3-D model of a scene. In the first process, the extrinsic camera parameters are estimated by automatically tracking a small number of predefined markers of known 3-D position together with natural features. Then, several hundred dense depth maps obtained by multi-baseline stereo are combined in a voxel space. In this way, a dense 3-D model of the outdoor scene can be acquired accurately from several hundred input images captured by a hand-held video camera.
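As a rough illustration of combining depth maps in a voxel space (not the paper's multi-baseline pipeline), one depth map can be back-projected with known intrinsics and pose and accumulated as votes in a voxel grid; all parameter names below are assumptions:

```python
import numpy as np

def fuse_depth_map(votes, depth, K, R, t, origin, voxel_size):
    """Accumulate one depth map into a voxel vote grid.

    votes      : integer array (X, Y, Z); voxels supported by many views accumulate votes
    depth      : HxW depth map in metres (0 where invalid)
    K          : 3x3 camera intrinsic matrix
    R, t       : camera-to-world rotation (3x3) and translation (3,)
    origin     : world coordinates of voxel (0, 0, 0); voxel_size in metres
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.nonzero(depth > 0)
    z = depth[v, u]
    # Back-project valid pixels to camera coordinates, then transform to world coordinates.
    pts_cam = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
    pts_world = pts_cam @ R.T + t
    idx = np.floor((pts_world - origin) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(votes.shape)), axis=1)
    np.add.at(votes, tuple(idx[inside].T), 1)
```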

10.
Robust camera pose and scene structure analysis for service robotics    (Cited: 1; self-citations: 0; by others: 1)
Successful path planning and object manipulation in service robotics applications rely both on a good estimation of the robot’s position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper, a robust real-time camera pose and scene structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of so-called tracks. The tracks include key features from the imaged scene and geometric constraints which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e., of the robot, the scene is analyzed via a robust depth segmentation and object classification approach. In order to segment the object’s depth reliably, a feedback control technique at the image processing level has been used with the purpose of improving the robustness of the robotic vision system with respect to external influences, such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system.

11.
This paper presents a 3D contour reconstruction approach employing a wheeled mobile robot equipped with an active laser-vision system. With observation from an onboard CCD camera, a laser line projector fixed below the camera is used for detecting the bottom shape of an object, while an actively controlled upper laser line projector is utilized for 3D contour reconstruction. The mobile robot is driven to move around the object by a visual servoing and localization technique while the 3D contour of the object is reconstructed from the 2D image of the projected laser line. Asymptotic convergence of the closed-loop system has been established. The proposed algorithm has also been tested experimentally on a Dr Robot X80sv mobile robot upgraded with the low-cost active laser-vision system, demonstrating effective real-time performance. This laser-vision robotic system can further be applied in unknown environments for obstacle avoidance and guidance control tasks. Copyright © 2011 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society

12.
王昱欣, 王贺升, 陈卫东. 机器人 (Robot), 2018, 40(5): 619-625
When a continuum soft robot with a camera mounted at its tip performs a task, both visual servoing of the tip camera-robot system and control of the robot's overall shape are required, owing to obstacle avoidance, safety, and other considerations. To address this problem, this paper proposes a hybrid eye-in-hand visual/shape control method for soft robots. The method does not require the 3-D coordinates of the spatial feature points; it only needs the desired pixel coordinates of the feature points on the tip camera's image plane and the desired shape of the soft robot. A kinematic model of the soft robot is established, and by combining this model with depth-independent interaction-matrix adaptive eye-in-hand visual control and soft-robot shape control, a hybrid control law is proposed and proved using Lyapunov stability theory. Both simulation and experimental results show that the pixel coordinates of the feature points in the tip camera and the robot shape converge to their desired values.

13.
This paper concerns the exploration of a natural environment by a mobile robot equipped with both a video color camera and a stereo-vision system. We focus on the interest of such a multi-sensory system for the navigation of a robot in an a priori unknown environment, including (1) the incremental construction of a landmark-based model, and the use of these landmarks for (2) the 3-D localization of the mobile robot and (3) a sensor-based navigation mode. For robot localization, a slow process and a fast one are executed simultaneously during the robot motions. In the modeling process (currently 0.1 Hz), the global landmark-based model is built incrementally, and the robot situation can be estimated from discriminant landmarks selected among the objects detected in the range data. In the tracking process (currently 4 Hz), selected landmarks are tracked in the visual data; the tracking results are used to simplify the matching between landmarks in the modeling process. Finally, a sensor-based visual navigation mode, based on the same landmark selection and tracking, is also presented; in order to navigate during a long robot motion, different landmarks (targets) can be selected as a sequence of sub-goals that the robot must successively reach.

14.
Vision for Robotics: a tool for model-based object tracking    (Cited: 1; self-citations: 0; by others: 1)
Vision for Robotics (V4R) is a software package for tracking rigid objects in unknown surroundings. Its output is the 3-D pose of the target object, which can be used further as an input to control, e.g., the end effector of a robot. The major goals are tracking at camera frame rate and robustness. The latter is achieved by performing cue integration in order to compensate for the weaknesses of individual cues. Therefore, features such as lines and ellipses are not only extracted from 2-D images; the 3-D model and the pose of the object are also exploited.

15.
In this paper, a method for posture-maintenance control of a 2-link object by a nonprehensile two-cooperative-arm robot, without friction compensation, is proposed. In detail, a mathematical model of the 2-link object is first built. Based on the model, stable regions for the holding motion of the nonprehensile two-cooperative-arm robot are obtained while the 2-link object is kept stable on the robot arms by static friction. Among the obtained stable regions, robust pairs of orientation angles of the 2-link object are found. Under the robust orientation angles, a feedback control system is designed to control the arms so as to maintain the 2-link object's posture while it is being held or lifted up. Finally, experimental results are shown to verify the effectiveness of the proposed method.

16.
Moving-target tracking is an important research direction for mobile robots operating in unknown environments. This paper presents a design method for moving-target tracking by a mobile robot based on active vision and ultrasonic information. An active vision system was built from a SONY EV-D31 color camera, a self-developed camera control module, and an image acquisition and processing unit. The mobile robot adopts a behavior-based distributed control architecture, locks onto the moving target with active vision, and perceives the external environment through the ultrasonic system, so that it can reliably track a moving target in unknown, dynamic, unstructured, and complex environments. Experiments show that the robot is fairly robust and that the moving-target tracking system runs reliably.

17.
To address the limited field of view of a single camera and the difficulty of real-time moving-object detection while continuously tracking a specific target, a method that discretizes the motion of an active camera is proposed. The camera motion is first discretized, and a camera preset-position table and a background index table are built. Then, the target's position and motion information are used to decide how the camera should rotate, and camera scheduling is realized with the corresponding control principle. Finally, the correspondence of the specific target across the different discrete spaces is determined by computer vision methods, achieving wide-range active tracking of the specific target and real-time detection of moving objects. Experimental results show good robustness and real-time performance for wide-range active tracking of specific moving targets in complex scenes, and moving regions in the scene can be extracted in real time.

18.
This paper focuses on how to achieve accurate visual servoing tasks when the shape of the object being observed and the desired image are both unknown. More precisely, we want to control the camera orientation with respect to the tangent plane at a certain object point corresponding to the center of a region of interest. We also want to observe this point at the principal point in order to fulfil a fixation task. A 3-D reconstruction phase must, therefore, be performed during the camera motion. Our approach is thus close to the structure-from-motion problem. The reconstruction phase is based on the measurement of the 2-D motion in a region of interest and on the measurement of the camera velocity. Since the 2-D motion depends on the shape of the objects being observed, we introduce a unified motion model to cope with both planar and nonplanar objects. However, since this model is only an approximation, we propose two approaches to enlarge its domain of validity. The first is based on active vision, coupled with a 3-D reconstruction based on a continuous approach; the second is based on statistical techniques of robust estimation, coupled with a 3-D reconstruction based on a discrete approach. Theoretical and experimental results compare both approaches.
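The paper's unified (planar/nonplanar) motion model is not reproduced here; as a simplified sketch, a 2-D affine motion model can be fitted to point correspondences by least squares (a robust variant, e.g. RANSAC or iterative reweighting, would replace the plain solve):

```python
import numpy as np

def fit_affine_motion(pts_prev, pts_curr):
    """Least-squares fit of a 2-D affine motion model to point correspondences.

    pts_prev, pts_curr : Nx2 arrays of matched image points in two frames.
    Returns A (2x2) and b (2,) such that pts_curr ≈ pts_prev @ A.T + b.
    """
    N = pts_prev.shape[0]
    M = np.zeros((2 * N, 6))
    M[0::2, 0:2] = pts_prev       # rows for the x-equations
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = pts_prev       # rows for the y-equations
    M[1::2, 5] = 1.0
    rhs = pts_curr.reshape(-1)
    params, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    A = np.array([[params[0], params[1]],
                  [params[2], params[3]]])
    b = params[4:6]
    return A, b
```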

19.
A visually servoed table-tennis robot system, as a typical hand-eye system, is an ideal platform for studying high-speed visual perception and fast servo motion; the key technologies involved, such as high-speed object recognition and tracking, fast and accurate trajectory prediction, and accurate ball return by a servoed manipulator, have broad application prospects in industry, the military, and other fields. This paper presents an implementation of a high-speed visual servoing system for a table-tennis robot, including a target recognition algorithm based on feature-histogram statistics and fast contour search, a motion-state estimation and trajectory prediction algorithm based on model parameter learning and adaptive model adjustment, and a dexterous-arm ball-return planning algorithm based on trajectory prediction. Experiments verify the real-time performance and efficiency of each algorithm, and rally tasks between two robots and between a robot and a human player were successfully achieved on the 165 cm tall humanoid robots "悟" (Wu) and "空" (Kong).
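The learned and adaptively adjusted trajectory model is not given in the abstract; a much simpler forward simulation under gravity and quadratic drag, with an assumed drag coefficient and table plane, is sketched below purely for illustration:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity [m/s^2]

def predict_trajectory(p0, v0, k_drag=0.1, dt=0.002, t_max=1.0):
    """Forward-simulate a ball trajectory under gravity and quadratic air drag.

    p0, v0 : initial position and velocity estimates from the vision system
    k_drag : assumed drag coefficient divided by ball mass [1/m]
    Returns an array of predicted positions until the ball reaches the assumed table plane.
    """
    p, v = np.array(p0, float), np.array(v0, float)
    path = [p.copy()]
    for _ in range(int(t_max / dt)):
        a = G - k_drag * np.linalg.norm(v) * v   # quadratic drag opposes the motion
        v = v + a * dt
        p = p + v * dt
        path.append(p.copy())
        if p[2] <= 0.0:                          # stop at the assumed table plane z = 0
            break
    return np.array(path)
```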

20.
In this article we present a new appearance-based approach for the classification and localization of 3-D objects in complex scenes. A main problem for object recognition is that the size and appearance of objects in the image vary under 3-D transformations. For this reason, we model the region of the object in the image, as well as the object features themselves, as functions of these transformations. We integrate the model into a statistical framework, so that we can deal with noise and illumination changes. To handle heterogeneous background and occlusions, we introduce a background model and an assignment function. Thus, the object recognition system becomes robust, and a reliable distinction between features that belong to the object and features that belong to the background is possible. Experiments on three large data sets that contain rotations orthogonal to the image plane and scaling, comprising together more than 100,000 images, show that the approach is well suited for this task.
