Similar Literature
18 similar records found.
1.
蔡子豪  杨亮  黄之峰 《控制与决策》2023,38(10):2859-2866
To address the difficulty of generating grasp poses for unknown objects and the poor grasp stability faced by manipulators in unstructured environments, a grasp-pose generation method based on point-cloud sampling weight estimation is proposed. First, a relatively complete object point cloud is stitched together by moving a depth camera, and the object's geometric properties are analyzed so that grasp-pose samples are generated while avoiding regions unsuitable for grasping. Next, geometric constraints are combined to search for grasp poses, and the force-closure condition is used to evaluate sample stability. Finally, to rate the candidate grasp poses, a grasp-feasibility index is defined from stability, gripping depth, gripping angle, and related factors, from which the best grasp pose in the workspace is output and the specified grasp task is executed. Experimental results show that the proposed method efficiently generates a large number of stable grasp poses and, in a simulation environment, enables a manipulator to grasp single or multiple randomly placed unknown objects.
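The force-closure screening step described above can be illustrated for the two-contact (parallel-jaw) case, where force closure reduces to an antipodal condition: the line joining the contacts must lie inside both friction cones. This is a minimal sketch under that assumption; the function name, sign conventions (outward surface normals), and friction coefficient are illustrative, not taken from the paper.

```python
import math

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Two-finger force-closure (antipodal) test: the line joining the
    contacts must lie inside both friction cones of half-angle atan(mu).
    n1, n2 are outward surface normals at contacts p1, p2."""
    axis = [b - a for a, b in zip(p1, p2)]
    norm = math.sqrt(sum(c * c for c in axis))
    axis = [c / norm for c in axis]
    cone = math.atan(mu)
    # n1 should oppose the contact line, n2 should align with it
    a1 = math.acos(max(-1.0, min(1.0, -sum(a * b for a, b in zip(n1, axis)))))
    a2 = math.acos(max(-1.0, min(1.0,  sum(a * b for a, b in zip(n2, axis)))))
    return a1 <= cone and a2 <= cone

# Opposing faces of a box with friction coefficient 0.5
print(antipodal_force_closure((0, 0, 0), (-1, 0, 0), (1, 0, 0), (1, 0, 0), 0.5))  # True
```

A full force-closure test for more than two contacts would check that the contact wrenches positively span the wrench space; the antipodal form above is the common fast screen for parallel-jaw grippers.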

2.
苏杰  张云洲  房立金  李奇  王帅 《机器人》2020,42(2):129-138
To address the difficulty robots face in quickly and stably grasping unknown objects in unstructured environments, a grasp-pose estimation method for unknown objects based on multiple geometric constraints is proposed. Geometric point-cloud information of the scene is acquired with a depth camera and preprocessed to obtain the target object, and grasp-pose samples are generated using a simplified geometric shape constraint of the gripper. The samples are then rapidly coarse-filtered with a simplified force-closure constraint, a force-balance analysis is performed on the grasp contour of each remaining pose, and stable poses are sent to the robot for execution. An experimental platform consisting of a depth camera and a 6-DOF manipulator was used to grasp objects of different poses and shapes. Experimental results show that the method copes well with a wide variety of objects lacking 3D models and is applicable in both single-target and multi-target scenes.

3.
刘汉伟  曹雏清  王永娟 《机器人》2019,41(5):583-590
For the unstructured grasping environments of everyday life, an autonomous grasping method based on the basic shape composition of irregular objects is proposed. The key to autonomous robotic grasping lies not only in recognizing the object's type but, to a larger extent, in achieving a good grasp after judging the object's shape (e.g., its composition of shape primitives). A complex irregular object is simplified into a combination of simple bodies: the object's 3D data points are segmented into a main body and branch parts using a mesh segmentation algorithm based on feature points and core extraction (MFC); each part is fitted to one of a sphere, ellipsoid, cylinder, or parallelepiped by a best-fit algorithm; the grasp pose is constrained according to the simplification; and grasp training on the simplified object yields the optimal grasp box, enabling autonomous grasping of unknown objects. The method achieved a grasp accuracy of 93.3% on a Baxter robot. Experimental results show that it applies to unknown irregular objects of different shapes and poses, with strong robustness.
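The primitive-fitting step above can be sketched for the sphere case with an algebraic least-squares fit solved by plain Gaussian elimination; the paper's actual best-fit algorithm is not specified at this level of detail, so the function below is an assumption-labeled illustration.

```python
import math

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve
    2*cx*x + 2*cy*y + 2*cz*z + d = x^2 + y^2 + z^2
    for (cx, cy, cz, d), then r = sqrt(d + |c|^2)."""
    A, b = [], []
    for x, y, z in points:
        A.append([2 * x, 2 * y, 2 * z, 1.0])
        b.append(x * x + y * y + z * z)
    n = 4
    # normal equations (A^T A) v = A^T b, augmented matrix form
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(len(A)))] for i in range(n)]
    for i in range(n):                      # elimination with partial pivoting
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    v = [0.0] * n
    for i in range(n - 1, -1, -1):          # back substitution
        v[i] = (M[i][n] - sum(M[i][j] * v[j] for j in range(i + 1, n))) / M[i][i]
    cx, cy, cz, d = v
    return (cx, cy, cz), math.sqrt(d + cx * cx + cy * cy + cz * cz)
```

Fitting the other primitives (ellipsoid, cylinder, parallelepiped) follows the same pattern with different parameterizations; the part with the lowest residual would determine its primitive label.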

4.
张云洲  李奇  曹赫  王帅  陈昕 《控制与决策》2021,36(8):1815-1824
For manipulator grasping of unknown objects of varying size, diverse shape, and arbitrary pose, a single-stage grasp-pose detection algorithm based on multi-level features is proposed, which treats grasp-pose detection as grasp-angle classification plus grasp-position regression and predicts both in a single pass. First, the depth data replaces the B channel of the RGB image to produce an RGD image, and the lightweight feature extractor VGG16 serves as the backbone. Second, to remedy VGG16's relatively weak feature extraction, a stronger network model is designed using Inception modules. Third, grasp positions are sampled on feature maps at different levels using prior (anchor) boxes, and mixing shallow with deep features improves adaptability to objects of varying size. Finally, the detection with the highest confidence is output as the optimal grasp pose. The proposed algorithm scores 95.71% on the image-wise split and 94.01% on the object-wise split at a detection speed of 58.8 FPS, a clear improvement in both accuracy and speed over existing methods.
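The RGD construction described above (depth replacing the blue channel) is simple enough to sketch directly; the normalization to 0-255 is an assumption, since the abstract does not state how depth is scaled.

```python
def to_rgd(rgb, depth):
    """Replace the blue channel of an RGB image with depth normalized
    to 0-255, producing the RGD input described above.
    rgb: rows of (r, g, b) tuples; depth: rows of floats."""
    lo = min(min(row) for row in depth)
    hi = max(max(row) for row in depth)
    scale = 255.0 / (hi - lo) if hi > lo else 0.0
    return [[(r, g, int(round((d - lo) * scale)))
             for (r, g, _), d in zip(rgb_row, d_row)]
            for rgb_row, d_row in zip(rgb, depth)]
```

In practice this keeps the network's three-channel input layout (and thus pretrained weights) while injecting geometric information.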

5.
To deal with scene complexity and occlusion during manipulator grasping, an object pose estimation method based on a depth camera is proposed. A Kinect camera acquires point-cloud images, and FPFH features of the point cloud are extracted. Singular value decomposition and random sample consensus are then used for pose estimation, and the resulting pose is converted into a grasp pose through the hand-eye transformation. Comparative experiments against the ICP and NDT algorithms verify the stability and accuracy of the method.
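The SVD step in the pipeline above computes the least-squares rigid transform between matched point sets (the Kabsch/Umeyama method). As a compact, dependency-free sketch, the 2D case is shown here, where the SVD reduces to a closed-form angle; the 3D version is identical in structure with a 3x3 SVD. Function name and conventions are illustrative.

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rigid transform (rotation angle th, translation t)
    mapping src -> dst for matched 2D point pairs."""
    cs = [sum(p[i] for p in src) / len(src) for i in (0, 1)]   # centroids
    cd = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]
    sdot = scross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        sdot += x * u + y * v      # aligned component
        scross += x * v - y * u    # rotated component
    th = math.atan2(scross, sdot)  # closed-form optimal rotation in 2D
    c, s = math.cos(th), math.sin(th)
    t = (cd[0] - c * cs[0] + s * cs[1], cd[1] - s * cs[0] - c * cs[1])
    return th, t
```

In the RANSAC loop of such a pipeline, this solver would be run on random minimal correspondence sets and scored by inlier count before a final refit on all inliers.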

6.
张森彦  田国会  张营  刘小龙 《机器人》2020,42(5):513-524
For grasping unknown irregular objects in stacked (cluttered) scenes, an autonomous grasping method based on a two-stage progressive network (TSPN) is proposed. An end-to-end strategy first obtains a global graspability distribution, and a sampling-evaluation strategy then determines the optimal grasp configuration. Fusing the two strategies makes the TSPN structure leaner and markedly reduces the number of samples to be evaluated, improving grasping efficiency while preserving generalization. To speed up learning of the grasp model, a self-supervised learning strategy guided by prior knowledge is introduced, and 220 kinds of irregular objects are used for grasp learning. Experiments in simulated and real environments show that the grasp model suits a range of scenarios, including multiple objects, stacked objects, unknown irregular objects, and random object poses, with clearly better grasp accuracy and detection speed than the baselines. The full learning process took 10 days, and the results show that the prior-knowledge-guided strategy significantly accelerates learning.

7.
In many automation applications, such as sorting and machine loading/unloading, manipulator grasping is a critical step. Reliably, quickly, and accurately computing the pose of an object that is occluded or lies in clutter is one of the hard problems of manipulator grasping. This paper presents a 3D-vision-guided grasping system for irregular targets. The system first performs high-precision 3D reconstruction of the target with a surface structured-light system and builds an offline library of 3D point-cloud templates; the standard template is then matched against the preprocessed scene point cloud, and once the matching parameters are obtained, the robot grasp pose is computed from the transformation matrices between coordinate frames; finally, the robot hand is guided to grasp the target object. Experimental results show that the developed grasping system can grasp irregular targets reliably, quickly, and accurately.
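The "transformation matrices between coordinate frames" step above is a composition of homogeneous transforms: the object pose matched in the camera frame is mapped into the robot base frame through the hand-eye calibration. A minimal sketch with illustrative numeric values (not from the paper):

```python
def matmul4(A, B):
    """4x4 homogeneous transform composition."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Grasp pose in the robot base frame = hand-eye calibration (base <- camera)
# composed with the matched object pose (camera <- object):
#   T_base_obj = T_base_cam * T_cam_obj
T_base_cam = [[1, 0, 0, 0.5],
              [0, 1, 0, 0.0],
              [0, 0, 1, 1.0],
              [0, 0, 0, 1.0]]
T_cam_obj  = [[0, -1, 0, 0.1],
              [1,  0, 0, 0.2],
              [0,  0, 1, 0.3],
              [0,  0, 0, 1.0]]
T_base_obj = matmul4(T_base_cam, T_cam_obj)
print(T_base_obj[0])  # translation x = 0.5 + 0.1 = 0.6
```

The gripper approach offset would typically be applied as one more right-multiplied transform in the object frame.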

8.
For the detection and pose estimation of weakly textured, randomly stacked objects common in industry, a workpiece recognition and grasping system based on an instance segmentation network and iterative refinement is proposed. The system comprises three modules: image acquisition, object detection, and pose estimation. In the image acquisition module, a dual RGB-D camera arrangement is designed, fusing three depth images to obtain higher-quality depth data. The object detection module improves the instance segmentation network Mask R-CNN (region-based convolutional neural network) by taking both the color image and the HHA (horizontal disparity, height above ground, angle with gravity) features carrying 3D information as input and adding an STN (spatial transformer network) module internally, improving segmentation of weakly textured objects; the target point cloud is then segmented using the point-cloud information. Building on the detection module, the pose estimation module matches the segmented point cloud against the target model's point cloud and refines the pose using improved 4PCS (4-points congruent set) and ICP (iterative closest point) algorithms, and the robot grasps according to the final pose estimate. Experiments on a self-collected workpiece dataset and on an actual sorting system show that the system achieves fast object recognition and pose estimation for differently shaped, weakly textured, randomly stacked objects, with position error within 1 mm and angular error within 1°, meeting the requirements of practical applications.
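The multi-depth-image fusion step above can be sketched with a per-pixel robust combination; the abstract does not specify the fusion rule, so the median-of-valid-readings choice below is an assumption.

```python
def fuse_depths(maps):
    """Per-pixel fusion of several aligned depth images: keep the median
    of the valid (non-zero) readings, 0 where no sensor saw the pixel.
    maps: list of equally sized 2D depth arrays (nested lists)."""
    h, w = len(maps[0]), len(maps[0][0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = sorted(m[i][j] for m in maps if m[i][j] > 0)
            if vals:
                out[i][j] = vals[len(vals) // 2]
    return out
```

A median both fills holes (pixels seen by only one camera) and rejects single-sensor outliers, which is why a dual-camera setup with three depth maps is attractive for weakly textured parts.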

9.
To address the low accuracy and long runtime of grasp-pose detection for unknown objects, a multimodal-feature grasp-pose detection network incorporating an attention mechanism is proposed. First, a multimodal feature fusion module is designed that weights the multimodal features while fusing them. Then, since shallow residual networks are weak at extracting salient features, a convolutional attention module is introduced to further strengthen feature extraction. Finally, fully connected layers regress the extracted features directly to the optimal grasp-detection pose. Experimental results show that on the public Cornell grasp dataset the proposed algorithm achieves 98.9% accuracy on the image-wise split and 98.7% on the object-wise split at 51 FPS, and it succeeded in 95% of 100 real grasps across 10 object categories.
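The attention-based reweighting idea above can be illustrated with a minimal channel-attention sketch (in the spirit of a convolutional attention module's channel branch): globally pool each channel, pass it through a sigmoid gate, and rescale the channel. This is a toy illustration, not the paper's module, which would also include learned weights and a spatial branch.

```python
import math

def channel_attention(features):
    """Minimal channel-attention sketch: global average pool per channel,
    sigmoid gate, then rescale each channel by its gate value.
    features: list of 2D feature maps (one per channel)."""
    gated = []
    for ch in features:
        avg = sum(map(sum, ch)) / (len(ch) * len(ch[0]))  # squeeze
        g = 1.0 / (1.0 + math.exp(-avg))                  # sigmoid gate
        gated.append([[v * g for v in row] for row in ch])  # excite
    return gated
```

In a trained module, a small learned MLP sits between the pooling and the gate so the network can decide which channels to emphasize.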

10.
刘亚欣  王斯瑶  姚玉峰  杨熹  钟鸣 《控制与决策》2020,35(12):2817-2828
作为机器人在工厂、家居等环境中最常用的基础动作,机器人自主抓取有着广泛的应用前景,近十年来研究人员对其给予了较高的关注,然而,在非结构环境下任意物体任意姿态的准确抓取仍然是一项具有挑战性和复杂性的研究.机器人抓取涉及3个主要方面:检测、规划和控制.作为第1步,检测物体并生成抓取位姿是成功抓取的前提,有助于后续抓取路径的规划和整个抓取动作的实现.鉴于此,以检测为主进行文献综述,从分析法和经验法两大方面介绍抓取检测技术,从是否具有抓取物体先验知识的角度出发,将经验法分成已知物体和未知物体的抓取,并详细描述未知物体抓取中每种分类所包含的典型抓取检测方法及其相关特点.最后展望机器人抓取检测技术的发展方向,为相关研究提供一定的参考.  相似文献   

11.
Humans can instinctively predict whether a given grasp will be successful through visual and rich haptic feedback. Towards the next generation of smart robotic manufacturing, robots must be equipped with similar capabilities to cope with grasping unknown objects in unstructured environments. However, most existing data-driven methods take global visual images and tactile readings from the real-world system as input, making them incapable of predicting the grasp outcomes for cluttered objects or generating large-scale datasets. First, this paper proposes a visual-tactile fusion method to predict the results of grasping cluttered objects, which is the most common scenario for grasping applications. Concretely, the multimodal fusion network (MMFN) uses the local point cloud within the gripper as the visual signal input, while the tactile signal input is the images provided by two high-resolution tactile sensors. Second, collecting data in the real world is high-cost and time-consuming. Therefore, this paper proposes a digital twin-enabled robotic grasping system to collect large-scale multimodal datasets and investigates how to apply domain randomization and domain adaptation to bridge the sim-to-real transfer gap. Finally, extensive validation experiments are conducted in physical and virtual environments. The experimental results demonstrate the effectiveness of the proposed method in assessing grasp stability for cluttered objects and performing zero-shot sim-to-real policy transfer on the real robot with the aid of the proposed migration strategy.

12.
夏晶  钱堃  马旭东  刘环 《机器人》2018,40(6):794-802
For unknown irregular objects in arbitrary poses, a fast planar grasp-pose detection method for robots based on cascaded convolutional neural networks is proposed. A coarse-to-fine, position-then-orientation two-stage cascaded CNN model is built and trained on a small dataset via transfer learning. Grasp-position candidate boxes are extracted and filtered, with coarse angle estimation, on top of an R-FCN (region-based fully convolutional network) model, and, to remedy the limited orientation accuracy of earlier methods, an Angle-Net model is proposed to finely estimate the grasp angle. Tests on the Cornell dataset and online robot grasping experiments show that the method quickly computes the optimal grasp point and orientation for irregular objects of arbitrary pose and different shapes, improves on the accuracy and speed of previous methods, is robust and stable, and generalizes to novel objects not seen in training.

13.
In this paper, we present a strategy for fast grasping of unknown objects based on the partial shape information from range sensors for a mobile robot with a parallel-jaw gripper. The proposed method can realize fast grasping of an unknown object without needing complete information of the object or learning from grasping experience. Information regarding the shape of the object is acquired by a 2D range sensor installed on the robot at an inclined angle to the ground. Features for determining the maximal contact area are extracted directly from the partial shape information of the unknown object to determine the candidate grasping points. Note that since the shape and mass are unknown before grasping, a successful and stable grasp cannot in fact be guaranteed. Thus, after performing a grasping trial, the mobile robot uses the 2D range sensor to judge whether the object can be lifted. If a grasping trial fails, the mobile robot will quickly find other candidate grasping points for another trial until a successful and stable grasp is realized. The proposed approach has been tested in experiments, which found that a mobile robot with a parallel-jaw gripper can successfully grasp a wide variety of objects using the proposed algorithm. The results illustrate the validity of the proposed algorithm in terms of grasping time.
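The candidate-selection idea above (prefer contact regions with large, flat contact area that fit the gripper) can be sketched on a 1D width profile of the scanned object. The scoring rule here, length of the run of near-constant width as a proxy for contact area, is an assumption for illustration, not the paper's exact feature.

```python
def candidate_grasps(widths, max_open, tol=0.005):
    """From a scanned width profile (one width per slice of the object),
    return slice indices graspable by a parallel-jaw gripper, flattest
    (largest contact run) first. A slice qualifies if its width fits the
    gripper opening; its score is the run of neighbouring slices whose
    width stays within `tol`."""
    cands = []
    for i, w in enumerate(widths):
        if w > max_open:
            continue
        run, j = 1, i - 1
        while j >= 0 and abs(widths[j] - w) <= tol:   # extend left
            run += 1
            j -= 1
        j = i + 1
        while j < len(widths) and abs(widths[j] - w) <= tol:  # extend right
            run += 1
            j += 1
        cands.append((run, i))
    return [i for run, i in sorted(cands, reverse=True)]
```

Returning the whole ranked list matches the retry loop in the abstract: if a trial fails the lift test, the robot moves on to the next candidate.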

14.
Objective: Grasp-pose detection in cluttered scenes is a basic skill for intelligent robots. Despite progress in six-DOF grasp learning, previous methods ignored differences in object size during sampling and learning, leading to poor grasping performance on small objects. Method: An object-mask-assisted sampling method is proposed that samples the same number of points on every object to balance the grasp distribution, solving the uneven distribution of sampled points. In addition, a multi-scale learning strategy applies multi-scale cylinder grouping on partial object point clouds to strengthen local geometric representation, easing the difficulty of learning grasp-operation parameters caused by object scale differences. An end-to-end grasping network embedding the proposed sampling and learning methods effectively improves grasp detection performance. Result: Evaluated on the large benchmark GraspNet-1Billion, the method achieves the best performance among the compared methods, improving the grasp metric on small objects by 7% on average; extensive real-robot experiments also show good generalization to unknown objects. Conclusion: Focusing on small-object grasping, a mask-assisted sampling method is embedded in the proposed end-to-end learning network, together with a multi-scale grouping strategy that improves local geometric representation, effectively raising grasp quality on small objects while surpassing the compared methods on all objects.
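The mask-assisted balanced sampling described above can be sketched directly: group scene points by their instance mask label and draw the same number from each group, so small objects contribute as many samples as large ones. The function name and fixed per-object budget are illustrative.

```python
import random

def mask_balanced_sample(points, labels, per_object, seed=0):
    """Sample the same number of points from every object mask so small
    objects are not drowned out by large ones.
    points: list of point tuples; labels: per-point instance labels."""
    rng = random.Random(seed)
    by_obj = {}
    for p, l in zip(points, labels):
        by_obj.setdefault(l, []).append(p)
    sample = []
    for l, pts in sorted(by_obj.items()):
        k = min(per_object, len(pts))       # small objects give all they have
        sample += rng.sample(pts, k)
    return sample
```

Without this balancing, uniform sampling over the scene allocates samples proportionally to surface area, which is exactly the small-object bias the method targets.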

15.
The gentle grasping and manipulation of objects in dense unstructured environments, such as agricultural, food processing, or home environments, constitutes a formidable challenge for robotic systems. Knowledge regarding wrist poses (wrist positions and orientations) that may lead to successful grasps is especially important in such environments for both gripper design and online grasp planning. Graspability maps store grasp quality grades at different wrist poses in object-centered coordinates. Previously, graspability maps were derived from object models in a lengthy, offline process and thus had limited usability. We have developed geometry-based grasp quality measures, related to classical grasp quality measures, which can be determined directly from a 3D point cloud. This facilitates embedding agent perception capabilities within the grasp quality determination. Additionally, by scanning the object's surface for finger contact points rather than scanning the volume of the bounding box about the object, and by using parallel computation, graspability map computation time is considerably reduced, facilitating online computation of multiple measures. We validate the developed measures in a physical environment, show that computation time can be reduced by more than 90% with very little reduction in map quality, and show the applicability of the developed methods for both simple and complex objects.

16.
Objective: Semantic segmentation of LiDAR point clouds is a key step in 3D environment perception, and accurately segmenting point-cloud objects matters for applications such as self-driving cars and autonomous mobile robots. Because LiDAR point clouds are unstructured, the irregular points are usually projected into structured 2D images to extract semantic information, which loses geometric information and prevents high-accuracy segmentation. Real datasets also suffer from uneven data distribution, so objects with few samples segment poorly. To solve these problems, this paper proposes a LiDAR point-cloud segmentation method based on sparse attention and instance augmentation, effectively improving semantic segmentation accuracy. Method: To counter the imbalanced data distribution, the point-cloud data are augmented by instance injection: instance data extracted from the dataset are injected into every frame during training. Since sparse convolutional networks cannot obtain a large receptive field, a Transformer module is proposed to enlarge it. To extract the key information of the feature maps, a spatial attention mechanism based on sparse convolution is used, markedly improving network performance. In addition, a new TVloss on the edges between point-cloud classes strengthens the network's supervision. Result: The proposed model is tested on the SemanticKITTI and nuScenes datasets. On SemanticKITTI, the method's online single-frame…

17.
Grasp detection based on deep learning is an important method for robots to accurately perceive unstructured environments. However, the deep learning methods widely used in general object detection are not suitable for robotic grasp detection. Multi-stage networks are often designed to meet the requirements of grasp posture, but they increase computational complexity. This paper proposes a single-stage robotic grasp detection method using region proposal networks. The proposed method first generates multiple oriented reference anchors; grasp rectangles are then regressed and classified based on these anchors. A new matching strategy for oriented anchors is proposed based on the rotation angles and center positions of the anchors. The well-known Cornell grasp dataset and Jacquard dataset are used to test the performance of the proposed method. Experimental results show that the proposed method achieves higher grasp detection accuracy than other methods in the literature.
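The oriented-anchor matching strategy above (assigning ground-truth grasp rectangles to anchors by rotation angle and center position) can be sketched as follows; the thresholds, cost weighting, and function name are assumptions for illustration, not the paper's exact rule.

```python
import math

def match_anchor(gt, anchors, max_dist=10.0, max_angle=math.pi / 8):
    """Assign a ground-truth oriented box (cx, cy, angle) to the best
    oriented anchor by center distance and angle difference; returns the
    anchor index, or -1 if no anchor is close enough."""
    best, best_cost = -1, float("inf")
    for i, (ax, ay, aang) in enumerate(anchors):
        d = math.hypot(gt[0] - ax, gt[1] - ay)
        # grasp orientation is pi-periodic: wrap the difference to [0, pi/2]
        da = abs((gt[2] - aang + math.pi / 2) % math.pi - math.pi / 2)
        if d <= max_dist and da <= max_angle:
            cost = d + 10.0 * da   # illustrative weighting of the two terms
            if cost < best_cost:
                best, best_cost = i, cost
    return best
```

The pi-periodic angle wrap matters because a parallel-jaw grasp rotated by 180 degrees is the same grasp, a detail axis-aligned anchor matching (plain IoU) ignores.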

18.
We suggest a method to directly deep-learn light transport, i.e., the mapping from a 3D geometry-illumination-material configuration to a shaded 2D image. While many previous learning methods have employed 2D convolutional neural networks applied to images, we show for the first time that light transport can be learned directly in 3D. The benefit of 3D over 2D is that the former can also correctly capture illumination effects related to occluded and/or semi-transparent geometry. To learn 3D light transport, we represent the 3D scene as an unstructured 3D point cloud, which is later, during rendering, projected to the 2D output image. Thus, we suggest a two-stage operator comprising a 3D network that first transforms the point cloud into a latent representation, which is then projected to the 2D output image using a dedicated 3D-2D network in a second step. We show that our approach results in improved quality in terms of temporal coherence while retaining most of the computational efficiency of common 2D methods. As a consequence, the proposed two-stage operator serves as a valuable extension to modern deferred shading approaches.


Copyright © 北京勤云科技发展有限公司 京ICP备09084417号