Similar Articles
20 similar articles found.
1.
刘亚欣  王斯瑶  姚玉峰  杨熹  钟鸣 《控制与决策》2020,35(12):2817-2828
As one of the most common basic actions of robots in factory, home, and similar environments, autonomous robotic grasping has broad application prospects and has received considerable attention from researchers over the past decade. However, accurately grasping arbitrary objects in arbitrary poses in unstructured environments remains a challenging and complex research problem. Robotic grasping involves three main aspects: detection, planning, and control. As the first step, detecting the object and generating a grasp pose is the prerequisite for a successful grasp, supporting the subsequent planning of the grasp path and the execution of the complete grasping action. Accordingly, this survey focuses on detection, introducing grasp-detection techniques from the two broad perspectives of analytical and empirical methods. Based on whether prior knowledge of the target object is available, empirical methods are divided into grasping of known objects and of unknown objects, and the typical grasp-detection methods in each category of unknown-object grasping are described in detail along with their characteristics. Finally, future directions of robotic grasp-detection technology are discussed to provide a reference for related research.

2.
丁祥峰  孙怡宁  卢朝洪  骆敏舟 《控制工程》2005,12(4):302-304,309
For an underactuated space robotic gripper integrating vision, slip, angular-displacement, and other sensors, adaptive grasp control of objects of different shapes and textures is studied. Sensor feedback is used to control the gripper's motion and grasping force, improving the gripper's autonomy. For grasp-mode selection, expert-system-based grasp planning chooses different grasp modes according to the shape and size of the object. For grasp-force control, feedback from a slip sensor made of PVDF drives a fuzzy control method based on the slip signal, with different control parameters chosen for objects of different textures. Experiments verify that this multi-sensor control method can grasp a variety of objects reliably.

3.
With the development of science and technology, artificial-intelligence techniques have advanced rapidly, and collaborative robots and manipulators are playing an increasingly important role in many fields. A traditional manipulator can only grasp objects along a pre-planned path; when the object is absent or has been moved, the grasp fails and the path must be re-planned, which seriously reduces efficiency. To enable a manipulator to acquire external information autonomously and perceive its surroundings, vision-based collaborative robots have developed rapidly. This paper first briefly introduces vision-based collaborative-robot technology, then presents the design of an intelligent vision system, describes its target detection and recognition modules, and finally uses the intelligent vision system to let a collaborative robot recognize, locate, and grasp a specific target when the surroundings and external information are unknown. Experimental results show that the system can be deployed in practical engineering applications and has good practical value.

4.
Hierarchical control of multifingered robot manipulation
A hierarchical control system is built for coordinated multifingered robot manipulation. Given a manipulation task, a task planner first generates a sequence of object velocities. A coordinated-motion planner then derives the desired fingertip velocities and the desired changes in grasp configuration from the desired object motion, while a grasp-force planner generates, from the current grasp configuration, the grasp force each finger needs in order to balance the external forces acting on the object. Finally, the system merges each finger's desired velocity with the compliance velocity generated to realize the desired grasp force, converts the result into joint velocities through the finger's inverse Jacobian, and executes it with joint-level motion controllers, thereby controlling both finger motion and grasp force. The method has been applied in the development of the dexterous-hand control system at the Hong Kong University of Science and Technology (HKUST). Experiments show that it can not only track object trajectories but also control the force the object exerts on the environment, as well as perform hybrid force/velocity control.
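The velocity-merging step described in this abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a 2-DOF planar finger is assumed, the desired fingertip velocity and the compliance velocity from the force planner are summed, and the result is mapped to joint rates through the inverse Jacobian.

```python
# Toy sketch of merging a desired fingertip velocity with a compliance
# velocity and mapping the sum to joint rates via the inverse Jacobian.
# The 2-DOF planar finger model is an illustrative assumption.

def inv2(m):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def joint_rates(jacobian, v_desired, v_compliance):
    """Joint velocities realizing the merged fingertip velocity."""
    v = [v_desired[0] + v_compliance[0], v_desired[1] + v_compliance[1]]
    ji = inv2(jacobian)
    return [ji[0][0] * v[0] + ji[0][1] * v[1],
            ji[1][0] * v[0] + ji[1][1] * v[1]]
```

With the identity Jacobian the joint rates simply equal the merged fingertip velocity, which makes the merging step easy to check in isolation.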

5.
崔涛  李凤鸣  宋锐  李贻斌 《控制与决策》2022,37(6):1445-1452
To address robot grasp decision-making for multiple object classes under different tasks, a grasp-strategy learning method based on multiple constraints is proposed. Taking the features of the grasped object and the attributes of the grasping task as constraints on the robot's grasp strategy, the method plans grasp modes by mapping human grasping habits, builds grasp rules with object oriented bounding boxes (OBB), and establishes a multi-constraint grasp model. A deep radial-basis-function (DRBF) network combined with a subtractive clustering method (SCM) is used to learn the grasp strategy; combining the two algorithms is intended to improve the robustness and accuracy of learning. On an experimental platform consisting of a Reflex 1 dexterous hand and an AUBO six-degree-of-freedom manipulator, grasping experiments on multiple object classes show that the proposed method lets the robot effectively learn optimal grasp strategies for different tasks on multiple objects and gives it good grasp decision-making ability.

6.
To raise the efficiency and lower the cost of sorting work, a color-sorting robot based on visual recognition was designed. The robot uses OpenMV as its main machine-vision module and a manipulator as its motion module, with a KPZ51 core system board handling communication between the modules. Based on the Lab color space and the CamShift tracking algorithm, it implements color recognition and tracking, object grasping, serial communication, and other functions. Test results show that the improved robot's recognition accuracy rose from 56% to 92%, and that the color-sorting robot can sort objects by color.

7.
周思跃  龚振邦  袁俊 《计算机工程》2006,32(23):183-185
Controlling the grasp mode of a robotic dexterous hand is a very important part of dexterous-hand manipulation planning. This paper introduces three typical grasp modes: parallel grasping, centering grasping, and tweezer-style (pinch) grasping. Taking the size of the grasped object as the input and the grasp mode as the output, a fuzzy-logic-based grasp-mode control algorithm for the dexterous hand is proposed and derived. Application in a real teleoperated dexterous-hand system shows that this fuzzy-control-based grasp-mode selection method is correct, effective, and of practical value.
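The size-in, mode-out fuzzy selection described above can be sketched with triangular membership functions. This is a hedged illustration only: the breakpoints (in mm) and the winner-take-all defuzzification are assumptions, not values or rules from the paper.

```python
# Illustrative sketch of fuzzy grasp-mode selection from object size.
# Membership breakpoints (mm) are invented for the example.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_grasp_mode(size_mm):
    """Return the grasp mode with the highest fuzzy membership."""
    memberships = {
        "tweezer":   tri(size_mm, -1, 0, 30),     # small objects
        "parallel":  tri(size_mm, 20, 60, 100),   # medium objects
        "centering": tri(size_mm, 80, 150, 220),  # large objects
    }
    return max(memberships, key=memberships.get)
```

The overlapping membership ranges mean borderline sizes are resolved by whichever mode fits best, which is the practical appeal of the fuzzy formulation over hard thresholds.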

8.
To meet the need to evaluate grasp quality in real time during grasping so that the grasp configuration can be adjusted dynamically, a stable robotic grasping method based on tactile prior knowledge is proposed. First, a tactile-information-based grasp-quality evaluation method is defined from the object's ability to resist external disturbances during grasping. On this basis, a joint visual-tactile dataset is built and tactile prior knowledge is learned. Second, a stable-grasp-configuration generation method that fuses visual images with the tactile priors is proposed. Finally, experiments on ten target objects on a purpose-built robotic grasping system show that, compared with a conventional vision-only method, grasp stability improves by 55%, and the stable-grasp success rates on known and unknown objects are 86% and 79% respectively, demonstrating good generalization.

9.
李玮  章逸丰  王鹏  熊蓉 《机器人》2019,41(2):165-174,184
An iterative-optimization mobile grasp-planning method that requires no prior model of the target object is proposed. The method measures and models the target object online with a point-cloud camera, and uses a deep convolutional neural network to evaluate the grasp success rate of multiple candidate grasp poses generated from the target point cloud. The position and orientation of the robot base and gripper are then optimized iteratively until the robot reaches an optimal configuration for grasping the target object. An A* algorithm plans a path from the robot's current position to the target position, and, on top of this path, a heuristic randomized path-approximation algorithm plans the arm motion so that the robot grasps while moving. The deep-learning grasp-success evaluator achieves 83.3% accuracy on the Cornell grasping dataset, and the proposed motion-planning algorithm yields smoother, shorter paths that are better suited to subsequent motion.

10.
To address the fixed workpiece positions and low grasping efficiency caused by teach-pendant robot programming, the application of neural networks to robotic visual recognition and grasp planning is studied and a vision-guidance scheme is established. A visual recognition system developed with the YOLOv5 neural-network model identifies the object class and obtains the coordinates of the grasp point of the object to be picked. A six-point robot hand-eye calibration principle is presented and verified by calibration experiments, and a localization method is proposed for objects whose top view is circular or rectangular. Finally, 180 grasping trials on three kinds of objects achieved an overall average grasp success rate of about 92.8%, verifying that the visual recognition and grasping robot system is practically applicable and effectively improves grasping efficiency.
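The core of any planar hand-eye calibration like the one described above is a fitted map from image pixels to robot base coordinates. The following is a minimal sketch under stated assumptions: a planar affine model fitted from exactly three non-collinear calibration points, not the paper's six-point procedure.

```python
# Hypothetical sketch: fit x_r = a*u + b*v + c and y_r = d*u + e*v + f
# from three pixel/robot point pairs, then map new pixels to the robot
# base frame. The affine model and 3-point fit are assumptions.

def _solve3(m, rhs):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det3(a):
        return (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
              - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
              + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d = det3(m)
    sol = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = rhs[r]
        sol.append(det3(mi) / d)
    return sol

def fit_pixel_to_robot(pixels, robot_xy):
    """pixels: three (u, v); robot_xy: three (x, y) in the base frame."""
    m = [[u, v, 1.0] for (u, v) in pixels]
    abc = _solve3(m, [x for (x, _) in robot_xy])
    def_ = _solve3(m, [y for (_, y) in robot_xy])
    return abc, def_

def pixel_to_robot(coeffs, u, v):
    (a, b, c), (d, e, f) = coeffs
    return (a * u + b * v + c, d * u + e * v + f)
```

A real calibration would use more points and a least-squares fit to average out detection noise; three points is the minimum that determines the affine map.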

11.
In this paper, we present visibility-based spatial reasoning techniques for real-time object manipulation in cluttered environments. When a robot is requested to manipulate an object, a collision-free path should be determined to access, grasp, and move the target object. This often requires processing of time-consuming motion planning routines, making real-time object manipulation difficult or infeasible, especially in a robot with a high DOF and/or in a highly cluttered environment. This paper places special emphasis on developing real-time motion planning, in particular, for accessing and removing an object in a cluttered workspace, as a local planner that can be integrated with a general motion planner for improved overall efficiency. In the proposed approach, the access direction of the object to grasp is determined through visibility query, and the removal direction to retrieve the object grasped by the gripper is computed using an environment map. The experimental results demonstrate that the proposed approach, when implemented by graphics hardware, is fast and robust enough to manipulate 3D objects in real-time applications.

12.
Vision-based grasping holds great promise for grasping in dynamic environments where the object and/or robot are moving. The paper introduces a grasp planning approach for visual servo controlled robots with a single camera mounted at the end effector. Sensory control, mechanical, and geometrical issues in the design of such an automatic grasp planner are discussed and the corresponding constraints are highlighted. In particular, the integration of visual feature selection constraints is emphasized. Some quality measures are introduced to rate the candidate grasps. The grasp planning strategy and implementation issues in the development of an automatic grasp planner (AGP) are described. Simulation and experimental results are presented to verify the correctness of the approach and its effectiveness in dealing with dynamic situations.

13.
Neuro-psychological findings have shown that human perception of objects is based on part decomposition. Most objects are made of multiple parts which are likely to be the entities actually involved in grasp affordances. Therefore, automatic object recognition and robot grasping should take advantage of 3D shape segmentation. This paper presents an approach toward planning robot grasps across similar objects by part correspondence. The novelty of the method lies in the topological decomposition of objects that enables high-level semantic grasp planning. In particular, given a 3D model of an object, the representation is initially segmented by computing its Reeb graph. Then, automatic object recognition and part annotation are performed by applying a shape retrieval algorithm. After the recognition phase, queries are accepted for planning grasps on individual parts of the object. Finally, a robot grasp planner is invoked for finding stable grasps on the selected part of the object. Grasps are evaluated according to a widely used quality measure. Experiments performed in a simulated environment on a reasonably large dataset show the potential of topological segmentation to highlight candidate parts suitable for grasping.

14.
Neural-network based force planning for multifingered grasp
The real-time control of multifingered grasp involves the problem of the force distribution which is usually underdetermined. It is known that the results of the force distribution are used to provide force or torque setpoints to the actuators, so they must be obtained in real-time. The objective of this paper is to develop a fast and efficient force planning method to obtain the desired joint torques which will allow multifingered hands to firmly grasp an object with arbitrary shape. In this paper, the force distribution problem in a multifingered hand is treated as a nonlinear mapping from the object size to joint torques. We represent the nonlinear mapping using artificial neural networks (ANNs), which allow us to deal with the complicated force planning strategy. A nonlinear programming method, which optimizes the contact forces of fingertips under the friction constraints, unisense force constraints and joint torque constraints, is presented to train the ANNs. The ANNs used for this research are based on the functional link (FL) network and the popular back-propagation (BP) network. A comparison of the training processes of the two networks shows that the FL network converges more quickly to a smaller error. The results obtained by simulation show that the FL-network is able to learn the complex nonlinear mapping to an acceptable level of accuracy and can be used as a real-time grasp planner.
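A functional-link network of the kind mentioned above is simply a linear layer trained on a fixed nonlinear expansion of the input, which is why it can be fast enough for real-time planning. The sketch below is a toy under stated assumptions: a one-dimensional size-to-torque mapping, an invented polynomial expansion, and LMS training; it is not the paper's network or data.

```python
# Toy functional-link (FL) network: expand the scalar input with fixed
# nonlinear terms, then train a single linear layer by LMS. The
# expansion terms and the size->torque training data are illustrative.

def expand(x):
    """Fixed functional expansion of the input (assumed polynomial)."""
    return [1.0, x, x * x]

def train(data, lr=0.01, epochs=1000):
    """Cyclic LMS training of the linear output layer."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, target in data:
            feats = expand(x)
            err = target - sum(wi * f for wi, f in zip(w, feats))
            w = [wi + lr * err * f for wi, f in zip(w, feats)]
    return w

def predict(w, x):
    return sum(wi * f for wi, f in zip(w, expand(x)))
```

Because the expansion is fixed, training only adjusts a linear layer, so there is no back-propagation through hidden units; this is the usual explanation for the faster convergence reported for FL networks versus BP networks.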

15.
This article describes a manipulator assembly task planner that processes the knowledge of the working environment and generates a sequence of general, manipulator-independent commands. The planner takes a very high-level command, such as "insert PEG into HOLE" without further specifications, reasons about the involved object features using the information from the CAD system, and generates a process plan for the manipulator to automatically perform the task. In this planner, the grasp planning and the path planning are developed and implemented for a static world.

16.
Robotic grasping is very sensitive to the accuracy of the pose estimate of the object to grasp. Even a small error in the estimated pose may cause the planned grasp to fail. Several methods for robust grasp planning exploit the object geometry or tactile sensor feedback. However, object pose range estimation introduces specific uncertainties that can also be exploited to choose more robust grasps. We present a grasp planning method that explicitly considers the uncertainties on the visually-estimated object pose. We assume a known shape (e.g. primitive shape or triangle mesh), observed as a possibly sparse point cloud. The measured points are usually not uniformly distributed over the surface as the object is seen from a particular viewpoint; additionally this non-uniformity can be the result of heterogeneous textures over the object surface, when using stereo-vision algorithms based on robust feature-point matching. Consequently the pose estimation may be more accurate in some directions and contain unavoidable ambiguities. The proposed grasp planner is based on a particle filter to estimate the object probability distribution as a discrete set. We show that, for grasping, some ambiguities are less unfavorable so the distribution can be used to select robust grasps. Some experiments are presented with the humanoid robot iCub and its stereo cameras.
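The idea of rating grasps against a particle set can be shown in miniature. The following is an illustrative toy, not the paper's planner: pose uncertainty is reduced to one dimension, each particle is a pose hypothesis, and a candidate grasp is scored by the fraction of particles for which it would still succeed given a gripper tolerance.

```python
# Toy sketch of robustness-aware grasp selection under pose
# uncertainty. The 1-D pose model and the tolerance-based success
# criterion are illustrative assumptions.

def grasp_robustness(grasp_x, particles, tolerance):
    """Fraction of pose particles within the gripper's tolerance."""
    hits = sum(1 for p in particles if abs(p - grasp_x) <= tolerance)
    return hits / len(particles)

def select_robust_grasp(candidates, particles, tolerance):
    """Pick the candidate grasp that succeeds for the most particles."""
    return max(candidates,
               key=lambda g: grasp_robustness(g, particles, tolerance))
```

The point of scoring against the whole particle set rather than the single most likely pose is exactly the paper's observation: a grasp aligned with a less unfavorable ambiguity direction survives more pose hypotheses.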

17.
A pick-and-place operation in a 3-dimensional environment is a basic operation for humans and multi-purpose manipulators. However, it can pose a difficult problem for such manipulators. In particular, if the object cannot be moved with a single grasp, regrasping, which can be a time-consuming process, should be carried out. Regrasping, given initial and final poses of the target object, is a construction of a sequential transition of object poses that are compatible with the two poses in terms of grasp configuration. This paper presents a novel approach for solving the regrasp problem. The approach consists of a preprocessing and a planning stage. Preprocessing, which is done only once for a given robot, generates a look-up table which has information on the kinematically feasible task space of the end-effector throughout the entire workspace. Then, using this table, the planning stage automatically determines a possible intermediate location, pose, and regrasp sequence leading from the pick-up to the put-down grasp. With a redundant robot, it is shown experimentally that the presented method is complete in the entire workspace and can be implemented in real-time applications due to rapid regrasp planning time. The regrasp planner was combined with an existing path planner.

18.
Presents a novel approach to the problem of illumination planning for robust object recognition in structured environments. Given a set of objects, the goal is to determine the illumination for which the objects are most distinguishable in appearance from each other. Correlation is used as a measure of similarity between objects. For each object, a large number of images is automatically obtained by varying the pose and the illumination direction. Images of all objects together constitute the planning image set. The planning set is compressed using the Karhunen-Loeve transform to obtain a low-dimensional subspace, called the eigenspace. For each illumination direction, objects are represented as parametrized manifolds in the eigenspace. The minimum distance between the manifolds of two objects represents the similarity between the objects in the correlation sense. The optimal source direction is therefore the one that maximizes the shortest distance between the object manifolds. Several experiments have been conducted using real objects. The results produced by the illumination planner have been used to enhance the performance of an object recognition system.
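The selection criterion in this abstract reduces to a max-min computation once the eigenspace projection is done. The sketch below assumes that step has already happened: each object's manifold is given as a list of low-dimensional points per illumination direction, and the chosen direction is the one maximizing the smallest inter-object manifold distance. The data layout and point sampling are illustrative, not from the paper.

```python
import math

# Sketch of the illumination-selection step: given point-sampled
# eigenspace manifolds for each object under each candidate light,
# pick the light maximizing the minimum inter-object distance.

def manifold_distance(pts_a, pts_b):
    """Shortest Euclidean distance between two point-sampled manifolds."""
    return min(math.dist(p, q) for p in pts_a for q in pts_b)

def best_illumination(manifolds_by_light):
    """manifolds_by_light: {light_id: [object1_points, object2_points, ...]}."""
    def worst_pair(objs):
        return min(manifold_distance(objs[i], objs[j])
                   for i in range(len(objs))
                   for j in range(i + 1, len(objs)))
    return max(manifolds_by_light,
               key=lambda light: worst_pair(manifolds_by_light[light]))
```

Brute-force pairwise distances are quadratic in the number of sampled points; the paper's parametrized manifolds would allow a finer search, but the max-min objective is the same.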

19.
We present an example-based planning framework to generate semantic grasps, stable grasps that are functionally suitable for specific object manipulation tasks. We propose to use partial object geometry, tactile contacts, and hand kinematic data as proxies to encode task-related constraints, which we call semantic constraints. We introduce a semantic affordance map, which relates local geometry to a set of predefined semantic grasps that are appropriate to different tasks. Using this map, the pose of a robot hand with respect to the object can be estimated so that the hand is adjusted to achieve the ideal approach direction required by a particular task. A grasp planner is then used to search along this approach direction and generate a set of final grasps which have appropriate stability, tactile contacts, and hand kinematics. We show experiments planning semantic grasps on everyday objects and applying these grasps with a physical robot.

20.
Machining process planning involves the formation of a set of directions describing the machining operations required to transform raw stock into a finished part. Conventional process planning, performed manually, relies on the knowledge and competence of an experienced process planner and tends to be time consuming and error prone. In the past two decades, much effort has been spent on improving process planning by utilizing the power of a computer to emulate the capabilities of an experienced planner. During the same period, computer-aided design (CAD) and computer-aided manufacturing (CAM) software has been developed to enhance design productivity and to assist the NC code generation facets of the machining process. The entire planning process may be automated by integrating CAD and CAM using computer-aided process planning (CAPP). The research described in this paper outlines the design and development of an intelligent CAPP system integrating two commercial CAD and CAM software packages, Autocad and Mastercam.

