Similar Articles
1.
Simultaneous localization and mapping (SLAM) based on 3D point clouds is one of the key technologies in robot navigation and localization. However, 3D point cloud SLAM systems with loop closure detection remain rare in the literature. This paper first proposes a new framework for an outdoor SLAM system based on 3D point clouds, consisting of three parts: odometry, loop closure detection, and pose optimization. Second, for loop closure detection, a method based on point cloud segment matching constraints is proposed to improve detection efficiency. Finally, for pose optimization, two trajectory drift optimization algorithms are proposed: a globally consistent loop adjustment algorithm and a pose prediction and compensation algorithm. Extensive experiments validate the proposed methods; the results show that the proposed SLAM system achieves stable and accurate pose estimation.
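The drift-compensation idea in the abstract can be illustrated with a minimal sketch (this is not the authors' algorithm; the linear distribution scheme and all names are illustrative): when a loop closure reveals an accumulated trajectory error, a simple correction spreads that error linearly over the poses so the trajectory ends where it started.

```python
import numpy as np

def distribute_loop_drift(poses, drift):
    """Spread an accumulated loop-closure drift linearly over a trajectory.

    poses: (N, 2) array of 2D positions; drift: (2,) translation error
    observed when the loop closes. Returns corrected positions.
    """
    n = len(poses)
    weights = np.linspace(0.0, 1.0, n).reshape(-1, 1)  # 0 at start, 1 at loop frame
    return poses - weights * drift

# Toy trajectory that should end where it started (a loop).
traj = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.2, 0.1]])
drift = traj[-1] - traj[0]               # closure error: (0.2, 0.1)
corrected = distribute_loop_drift(traj, drift)
```

Real loop adjustment operates on full 6-DoF poses in a graph optimizer; this sketch only conveys how a closure residual is amortized over the path.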

2.
To accomplish robotic peg-in-hole assembly tasks in unstructured environments, a variable-parameter admittance control algorithm based on the deep deterministic policy gradient (DDPG) with a fuzzy reward mechanism is proposed to improve assembly efficiency in unknown environments. A mechanical model of the contact states in peg-in-hole assembly is established and the assembly mechanism is studied, which then guides the formulation of the robot's assembly strategy. Compliant peg-in-hole assembly is achieved with an admittance controller; the DDPG algorithm identifies the controller's optimal parameters online, and fuzzy rules are introduced into the reward function to avoid locally optimal assembly strategies and to improve assembly quality. Assembly experiments were conducted on holes of five different diameters and compared with a fixed-parameter admittance model. The results show that the proposed algorithm clearly outperforms the fixed-parameter model and can complete assembly within 10 steps after convergence, showing promise for autonomous operation in unstructured environments.
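The admittance controller underlying this approach follows the standard second-order law M·a + B·v + K·x = F_ext. Below is a minimal one-axis sketch stepped with semi-implicit Euler; in the paper the DDPG agent tunes the parameters online, whereas the gains here are fixed and purely illustrative.

```python
def admittance_step(x, v, f_ext, M=1.0, B=20.0, K=100.0, dt=0.01):
    """One semi-implicit Euler step of M*a + B*v + K*x = f_ext.

    Returns the new (position, velocity) of the compliant axis.
    """
    a = (f_ext - B * v - K * x) / M
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# With zero external force the compliant axis settles back to x = 0.
x, v = 0.05, 0.0
for _ in range(2000):
    x, v = admittance_step(x, v, f_ext=0.0)
```

A variable-parameter scheme would replace the constants M, B, K with values chosen by the learned policy at each contact state.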

3.
A UAV's autonomous navigation capability in post-disaster mines is a prerequisite for rescue tasks, and autonomous pose estimation in unknown 3D space is one of the key technologies for UAV autonomous navigation. Current vision-based pose estimation algorithms suffer from scale ambiguity and poor localization performance, because a monocular camera cannot directly obtain depth information in 3D space and is easily affected by dim underground lighting; laser-based pose estimation algorithms, in turn, produce erroneous estimates because LiDAR has a small field of view and uneven scan patterns and is limited by the structural features of mine scenes. To address these problems, an autonomous pose estimation algorithm for post-disaster mine rescue UAVs based on visual-laser fusion is proposed. First, the monocular camera and LiDAR mounted on the UAV acquire underground image data and laser point cloud data, respectively; ORB feature points are uniformly extracted from each frame of mine image data, their depth is recovered using the depth information of the laser point cloud, and vision-based UAV pose estimation is achieved through inter-frame feature matching. Second, edge features and planar features are extracted from each frame of underground laser point cloud data, and laser-based UAV pose estimation is achieved through inter-frame feature matching. Then, the visual matching error function and the laser matching error function are placed under the same pose optimization function to estimate the UAV pose based on visual-laser fusion. Finally, historical frame data are introduced through a visual sliding window and a laser local map, an error function between the historical frame data and the latest estimated pose is constructed, and nonlinear optimization of the error function...
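The depth-recovery step described above, projecting LiDAR points into the image and borrowing their depth for nearby 2D features, can be sketched as follows (a hedged simplification: real pipelines interpolate among several neighbors and handle occlusion; the intrinsics and thresholds here are illustrative).

```python
import numpy as np

def recover_feature_depth(features_uv, lidar_xyz, K, max_px=3.0):
    """Assign each 2D feature the depth of the nearest projected LiDAR point.

    features_uv: (F, 2) pixel coordinates; lidar_xyz: (N, 3) points already in
    the camera frame (z forward); K: 3x3 intrinsics. Returns (F,) depths,
    NaN where no projected point falls within max_px pixels.
    """
    pts = lidar_xyz[lidar_xyz[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_px ** 2:
            depths[i] = pts[j, 2]
    return depths

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
cloud = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 10.0]])  # projects to (320,240), (370,240)
feats = np.array([[321.0, 240.0], [600.0, 100.0]])
d = recover_feature_depth(feats, cloud, K)
```

Features that obtain a depth this way become 3D anchors for the inter-frame matching step.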

4.
Object pose estimation is a key technology for robotic picking of 3D objects in cluttered environments; however, most current deep learning methods for object pose estimation rely heavily on the RGB information of the scene, which limits their range of application. A deep-learning-based 6D pose estimation method is proposed: a dataset of industrial parts is generated in a physics simulation environment, the 3D point cloud is mapped onto a 2D plane to generate depth feature maps and normal feature maps, and a feature fusion network estimates the 6D poses of industrial parts in cluttered scenes. Experimental results on simulated and real datasets show that, compared with traditional point cloud pose estimation methods, this method achieves higher accuracy and shorter computation time, and is more robust both to point clouds of varying density and to noise.
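The point-cloud-to-image mapping mentioned above can be illustrated with a tiny orthographic z-buffer projection (a sketch only; the paper's feature maps, resolution, and normal channel are not reproduced here, and the grid parameters are assumptions).

```python
import numpy as np

def cloud_to_depth_map(points, grid=(4, 4), bounds=(0.0, 1.0)):
    """Orthographic z-projection of a point cloud onto a 2D depth image.

    points: (N, 3); x/y within [bounds) are binned into grid cells, and each
    cell keeps the smallest z (the nearest surface). Empty cells stay at inf.
    """
    h, w = grid
    lo, hi = bounds
    depth = np.full((h, w), np.inf)
    for x, y, z in points:
        if lo <= x < hi and lo <= y < hi:
            c = int((x - lo) / (hi - lo) * w)
            r = int((y - lo) / (hi - lo) * h)
            depth[r, c] = min(depth[r, c], z)
    return depth

pts = np.array([[0.1, 0.1, 0.5], [0.1, 0.1, 0.3], [0.9, 0.9, 0.7]])
dm = cloud_to_depth_map(pts)
```

A normal feature map would be produced analogously, storing per-cell surface normals instead of depths.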

5.
沈策  王水明  沈宇昊  叶帅 《自动化应用》2023,(22):163-166+169
For real-time detection of randomly stacked bulk materials, traditional machine vision is limited by object occlusion, which biases recognition results. To solve this problem, this paper applies a 3D point cloud algorithm and designs a pose estimation algorithm based on point pair features; the normals in the scene point cloud are reoriented toward the viewpoint of the point cloud imaging device to improve their consistency, and a practical object grasping system is built on this algorithm. The results show that the designed pose detection algorithm achieves an average overlap rate of 98%, an average inlier root-mean-square error of 0.0003 mm, a point cloud matching success rate of 100%, and a grasping success rate of 97.1%. These results indicate that the point-pair-feature-based 3D point cloud algorithm achieves high detection accuracy in bulk material scenes and also performs well in grasping scenarios, giving it practical application value.
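The two core ingredients named in the abstract, the point pair feature and viewpoint-consistent normals, are compact enough to sketch directly. The standard point pair feature is F = (‖d‖, ∠(n1, d), ∠(n2, d), ∠(n1, n2)); the normal reorientation simply flips any normal facing away from the sensor (a generic sketch, not the paper's exact implementation).

```python
import numpy as np

def ppf(p1, n1, p2, n2):
    """Point pair feature F = (||d||, ang(n1,d), ang(n2,d), ang(n1,n2))."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    def ang(a, b):
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return np.arccos(np.clip(a @ b, -1.0, 1.0))
    return dist, ang(n1, d), ang(n2, d), ang(n1, n2)

def orient_to_viewpoint(p, n, viewpoint):
    """Flip a normal so it points toward the sensor viewpoint."""
    return n if n @ (viewpoint - p) > 0 else -n

f = ppf(np.array([0.0, 0, 0]), np.array([0.0, 0, 1]),
        np.array([1.0, 0, 0]), np.array([0.0, 0, 1]))
n_fixed = orient_to_viewpoint(np.array([0.0, 0, 0]),
                              np.array([0.0, 0, -1]),
                              np.array([0.0, 0, 2.0]))
```

In a full pipeline, quantized PPFs index a hash table of model pairs, and votes over candidate poses yield the final estimate.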

6.
To address the performance degradation of mobile robot 3D visual simultaneous localization and mapping (V-SLAM) under large viewpoint changes, an affine-invariant feature matching algorithm, AORB (affine oriented FAST and rotated BRIEF), is proposed, and a Kinect-based large-viewpoint 3D V-SLAM system for mobile robots is built on it. First, the AORB algorithm is applied to the color RGB data captured by the Kinect to achieve fast and effective matching between adjacent frames with large viewpoint changes and establish inter-frame correspondences. Then, 2D image points are converted into 3D colored point cloud data using the intrinsic and extrinsic parameters obtained from Kinect calibration and the aligned, rectified pixel depth values. Next, the random sample consensus (RANSAC) algorithm removes outliers from the 3D point cloud, and the RANSAC inliers are used for least-squares estimation of the robot's relative pose between adjacent frames. Finally, the g2o (general graph optimization) method optimizes the robot poses, building the 3D V-SLAM model and achieving large-viewpoint 3D visual SLAM for mobile robots. Offline experiments on standard datasets and online robot experiments in real environments show that the proposed matching algorithm and the constructed 3D V-SLAM system can accurately update local models under large viewpoint changes, successfully reconstruct the environment model, and effectively estimate the robot's trajectory.
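The least-squares pose step on RANSAC inliers described above has a closed-form solution for matched 3D point sets, the Kabsch/SVD rigid alignment. A self-contained sketch (generic textbook method, not the authors' exact code):

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares R, t with dst ≈ R @ src + t (Kabsch via SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)         # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known rotation/translation from exact correspondences.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_transform_3d(src, dst)
```

In the full system this estimate seeds the pose graph, which g2o then refines globally.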

7.
To improve the accuracy of stereo visual odometry in complex environments, a stereo visual odometry method with multiple pose estimation constraints is proposed. First, mathematical models are established for matched points with known depth and with unknown depth, and the depth-unknown points are introduced into the 2D-2D pose estimation model to make full use of the image information. Then, the 3D-2D pose estimation model is improved based on keyframe map points, and the keyframe map points are updated with the current-frame map points, increasing the number of matched point pairs and improving pose estimation accuracy. Finally, based on the improved 2D-2D and 3D-2D pose estimation models, a multi-constraint pose estimation model is established and locally optimized with local bundle adjustment, achieving high localization accuracy with small accumulated error. Dataset experiments and online experiments in real scenes show that the proposed method meets real-time localization requirements and effectively improves autonomous localization accuracy.

8.
The accuracy of map-matching-based localization for unmanned vehicles depends on the accuracy of the pre-built map; it is little affected by external conditions and suits unmanned vehicle localization in complex scenes. However, the LiDAR point cloud matching algorithms currently used match around a single type of feature and have low accuracy on large-scale point clouds, so the 3D point cloud map deviates considerably from the real environment and map-matching-based localization becomes inaccurate. To address this, a fused localization method for unmanned vehicles based on a 3D point cloud map and an error-state Kalman filter (ESKF) is proposed. The method consists of two parts: 3D point cloud map construction and ESKF fusion localization. In the map construction part, inter-frame point cloud matching uses the normal distributions transform (NDT) algorithm to improve matching accuracy on large-scale point clouds; loop closure constraints are added to the pose graph vertices and constraint edges built from the laser odometry data to form a graph optimization problem, which is solved with the Levenberg-Marquardt (LM) algorithm to reduce accumulated pose drift and improve map accuracy. In the ESKF fusion localization part, the ESKF fuses inertial measurement unit (IMU) data with the 3D point cloud map data to correct the vehicle's prior pose (position, attitude, and velocity) and output the posterior pose. Experimental results show that, compared with map-matching localization, the method reduces the maximum relative pose error of the localization trajectory by 0.1769 m, the mean error by 0.0271 m, and the root-mean-square error by...
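The ESKF idea, propagate a nominal state with IMU data and correct an error state with map-based fixes, can be reduced to a deliberately tiny 1D sketch (the paper's filter is a full 3D pose filter; the noise values, fix rate, and bias here are all assumptions for illustration).

```python
import numpy as np

def eskf_1d(accels, meas, dt=0.1, sigma_a=0.5, sigma_m=0.05):
    """Minimal 1D error-state KF: the nominal state integrates IMU
    acceleration; position fixes (standing in for map matching) correct the
    error state, which is immediately folded back into the nominal state."""
    p, v = 0.0, 0.0                       # nominal state
    P = np.eye(2) * 1e-4                  # error-state covariance [dp, dv]
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([0.25 * dt**4, dt**2]) * sigma_a**2
    H = np.array([[1.0, 0.0]])
    Rm = np.array([[sigma_m**2]])
    for a, z in zip(accels, meas):
        p += v * dt + 0.5 * a * dt**2     # propagate nominal state
        v += a * dt
        P = F @ P @ F.T + Q               # propagate error covariance
        if z is not None:                 # map-matching position fix
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
            dx = K @ np.array([z - p])
            p += dx[0]; v += dx[1]        # inject error state, then reset it
            P = (np.eye(2) - K @ H) @ P
    return p, v

# Truth: constant 1 m/s^2; a biased IMU (+0.2) is reined in by sparse fixes.
n, dt = 50, 0.1
t = np.arange(1, n + 1) * dt
truth = 0.5 * 1.0 * t**2
accels = [1.2] * n                                    # biased accelerometer
meas = [truth[i] if i % 5 == 0 else None for i in range(n)]
p_fix, v_fix = eskf_1d(accels, meas)
p_dead, v_dead = eskf_1d(accels, [None] * n)          # dead reckoning only
```

Even this toy version shows the point of fusion: the dead-reckoned estimate drifts with the bias while the fused one stays near the truth.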

9.
Multi-sensor-based automatic alignment strategy for large-aperture devices
卢金燕  徐德  覃政科  王鹏  任超 《自动化学报》2015,41(10):1711-1722
For the assembly of large-aperture devices, a staged automatic alignment strategy with multi-sensor feedback is proposed on a purpose-built experimental platform, achieving six-degree-of-freedom pose alignment of large-aperture devices. During alignment, when the robot end-effector is far from the assembly position, vision measures the relative pose of the mounting frame for coarse alignment. When the end-effector is close to the assembly position, the large size of the mounting frame prevents vision from obtaining the frame's complete pose relative to the device, so vision captures local images of the frame: image-based control eliminates the rotation error about the Z axis and the translation errors along the X and Y axes, while multiple laser distance sensors measure relative distances and position-based control eliminates the translation error along the Z axis and the rotation errors about the X and Y axes, achieving fine alignment between the device and the mounting frame. An incremental PI control algorithm drives the alignment motion. Experimental results verify the effectiveness of the proposed method.
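The incremental (velocity-form) PI law used for the motion control has the standard form u_k = u_{k-1} + Kp·(e_k − e_{k-1}) + Ki·e_k. A minimal sketch driving a simple integrating axis (the plant model and gains are illustrative, not the platform's values):

```python
def make_incremental_pi(kp, ki):
    """Incremental PI: u_k = u_{k-1} + kp*(e_k - e_{k-1}) + ki*e_k."""
    state = {"u": 0.0, "e_prev": 0.0}
    def step(e):
        state["u"] += kp * (e - state["e_prev"]) + ki * e
        state["e_prev"] = e
        return state["u"]
    return step

# Drive a simple integrating axis (position += u * dt) toward a target.
ctrl = make_incremental_pi(kp=0.8, ki=0.2)
pos, target, dt = 0.0, 10.0, 1.0
for _ in range(200):
    pos += ctrl(target - pos) * dt
```

The incremental form outputs control changes rather than absolute commands, which makes saturation handling and bumpless mode switching (coarse to fine alignment) straightforward.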

10.
For the robotic grasping problem in complex automobile manufacturing environments with multi-class, multi-target scattered and stacked scenes, this paper builds an intelligent recognition and random bin-picking system for four kinds of parts, including oil-gas separators. A Photoneo 3D Scanner-M structured-light sensor acquires 3D point cloud data of the targets, and a Han's Robot Elfin-E10 serves as the manipulator. A deep learning network model for 6D pose estimation is designed: an instance segmentation network decouples the foreground and background points of the point cloud, the instances are then clustered to generate unified point cloud slices, and these are fed into the subsequent network for pose estimation. In the application to a real precision grasping and assembly project, software platforms based on the traditional point cloud pose estimation method and on the deep network pose estimation method were built in MFC and ROS-QT software architectures, respectively; the full grasping process for workpieces such as oil-gas separators was completed in real scenes, and comparative experiments verified the effectiveness of the 6D pose estimation deep network. The results show that the proposed pose estimation and recognition method achieves a high success rate in grasping the target workpieces.

11.
宋薇  仇楠楠  沈林勇  章亚男 《机器人》2018,40(6):950-957
To achieve general-purpose, fast, and accurate 6-DOF part grasping by industrial robots, a monocular-vision-guided 3D part grasping method is proposed. First, a Chamfer distance matching algorithm layered by tilt angle builds a similarity function between the image and the templates to be matched, and a genetic algorithm with hill-climbing local optimization searches for the best match. Then, an offline 3D template library is built from CAD (computer-aided design) models, extending the matching algorithm to 6-DOF pose detection of parts with complex structures. Finally, the robot's grasping information is obtained from the transformations between coordinate frames and system calibration, enabling 3D grasping of parts. Experimental results show that the optimized pose detection algorithm improves both matching speed and accuracy, and that robot 3D grasping experiments based on it achieve position errors within 2 mm and rotation errors within 2°, making the method usable for part grasping by industrial intelligent robots.
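The Chamfer similarity at the core of the matching step scores a template by the mean distance from its edge points to the nearest image edge points. A brute-force sketch (real implementations precompute a distance transform over the edge image; the point sets here are illustrative):

```python
import numpy as np

def chamfer_score(template_pts, image_pts):
    """Mean nearest-neighbour distance from template edge points to image
    edge points; lower means a better match."""
    d = np.linalg.norm(template_pts[:, None, :] - image_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

square = np.array([[0.0, 0], [1, 0], [1, 1], [0, 1]])
good = square + 0.05            # nearly aligned template placement
bad = square + 2.0              # far-off template placement
s_good = chamfer_score(good, square)
s_bad = chamfer_score(bad, square)
```

The genetic search in the paper explores template pose parameters and uses exactly this kind of score as its fitness.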

12.
To address the insufficient assembly precision of automatic assembly technology in industrial production, an automatic assembly control method based on machine vision and a six-axis force/torque sensor is proposed. Two monocular cameras localize the target in two stages: coarse localization with an improved NCC matching algorithm, followed by fine localization with edge-feature-based template matching. The six-axis force sensor measures the forces and torques during assembly, and two assembly trajectory planning strategies based on the force/torque feedback are proposed: linear motion and spiral motion. Comparative experiments on a purpose-built robot platform show that linear motion assembles more efficiently when the peg-hole clearance is large, but when the clearance drops below 0.1 mm its assembly efficiency and success rate both fall sharply; the assembly time of spiral motion depends mainly on the hole diameter, performs stably across different peg-hole clearances, and achieves 0.05 mm precision peg-in-hole assembly with a high success rate.
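The spiral search strategy is typically realized as an Archimedean spiral whose ring spacing is smaller than the hole clearance, guaranteeing the peg tip sweeps over the hole. A minimal waypoint generator (parameters are illustrative, not the paper's values):

```python
import numpy as np

def spiral_waypoints(pitch=0.05, step_deg=30.0, max_radius=0.5):
    """Archimedean spiral r = pitch * theta / (2*pi): successive rings are
    `pitch` apart, so the hole is crossed once pitch < hole clearance."""
    pts, theta = [], 0.0
    while True:
        r = pitch * theta / (2 * np.pi)
        if r > max_radius:
            break
        pts.append((r * np.cos(theta), r * np.sin(theta)))
        theta += np.radians(step_deg)
    return np.array(pts)

wp = spiral_waypoints()
radii = np.linalg.norm(wp, axis=1)
```

During execution, the robot follows these in-plane waypoints under light axial preload and stops when the force sensor detects the drop-in event.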

13.
Registration of 3D data is a key problem in many applications in computer vision, computer graphics and robotics. This paper provides a family of minimal solutions for the 3D-to-3D registration problem in which the 3D data are represented as points and planes. Such scenarios occur frequently when a 3D sensor provides 3D points and our goal is to register them to a 3D object represented by a set of planes. In order to compute the 6 degrees-of-freedom transformation between the sensor and the object, we need at least six points on three or more planes. We systematically investigate and develop pose estimation algorithms for several configurations, including all minimal configurations, that arise from the distribution of points on planes. We also identify the degenerate configurations in such registrations. The underlying algebraic equations used in many registration problems are the same and we show that many 2D-to-3D and 3D-to-3D pose estimation/registration algorithms involving points, lines, and planes can be mapped to the proposed framework. We validate our theory in simulations as well as in three real-world applications: registration of a robotic arm with an object using a contact sensor, registration of planar city models with 3D point clouds obtained using multi-view reconstruction, and registration between depth maps generated by a Kinect sensor.
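The underlying constraint in this problem is point-to-plane: each sensor point p on a plane (n, d) must satisfy n·(R·p + t) = d. The paper derives algebraic minimal solvers; as a rough illustration of the same constraint, here is a hedged sketch that instead recovers the pose by iterated linearized least squares, assuming at least six points spread over three planes and a moderate displacement (all data and parameters below are synthetic).

```python
import numpy as np

def so3_exp(w):
    """Rodrigues' formula: rotation matrix for an axis-angle vector w."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def register_points_to_planes(pts, normals, ds, iters=20):
    """Find R, t so each transformed point lies on its plane n·x = d,
    via repeated small-angle (linearized) least squares."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        p = pts @ R.T + t
        r = np.einsum('ij,ij->i', normals, p) - ds      # signed plane distances
        J = np.hstack([np.cross(p, normals), normals])  # rows: [p×n, n]
        x, *_ = np.linalg.lstsq(J, -r, rcond=None)
        dR = so3_exp(x[:3])
        R, t = dR @ R, dR @ t + x[3:]                   # left-compose update
    return R, t

# Synthetic case: 9 points on the planes x=0, y=0, z=0, moved by a known pose.
normals = np.repeat(np.eye(3), 3, axis=0)
ds = np.zeros(9)
q = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 5],
              [1, 0, 2], [4, 0, 1], [2, 0, 3],
              [1, 2, 0], [3, 1, 0], [2, 4, 0]], float)
R_true = so3_exp(np.array([0.05, -0.02, 0.1]))
t_true = np.array([0.3, -0.1, 0.2])
pts = (q - t_true) @ R_true          # sensor-frame points: R_true.T @ (q - t_true)
R, t = register_points_to_planes(pts, normals, ds)
```

Unlike the paper's minimal solvers, this iterative scheme needs a reasonable initialization and more than the minimal six points to be well-behaved.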

14.
To address the problems that existing iterative closest point (ICP) head pose estimation algorithms require many iterations and easily fall into local optima, while random forest (RF) head pose estimation algorithms lack accuracy and stability, a new improved head pose estimation method is proposed, and a real-time interactive control interface for a robotic wheelchair is built on it. First, the accuracy, real-time performance, and stability problems of the existing ICP and RF head pose algorithms are analyzed, and an improved head pose estimation method fusing the random forest and ICP algorithms is proposed. Second, to seamlessly connect head pose estimation to robotic wheelchair interactive control, a mapping from the head pose motion space to the traditional robotic wheelchair joystick is established. Finally, after analyzing the performance of the improved method on a standard head pose database, a robotic wheelchair experimental platform is built and motion trajectories are planned to further verify the effectiveness of the human-machine interface based on the improved method for real-time wheelchair control. Experimental results show that, compared with traditional ICP, the improved method reduces the number of iterations and avoids local optima; with only a small increase in computation time, its accuracy and stability both exceed the traditional RF algorithm, and the interface can steer the robotic wheelchair smoothly along planned trajectories in real time.
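Vanilla ICP, the baseline this paper improves on, alternates nearest-neighbour matching with a closed-form rigid alignment. A compact 2D sketch (textbook ICP only, not the RF-initialized fusion method; the point set is illustrative):

```python
import numpy as np

def best_rigid(src, dst):
    """Closed-form least-squares rotation/translation (Kabsch via SVD)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Vanilla ICP: alternate nearest-neighbour matching and Kabsch."""
    dim = src.shape[1]
    R_acc, t_acc = np.eye(dim), np.zeros(dim)
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        match = dst[d.argmin(axis=1)]     # current nearest-neighbour pairing
        R, t = best_rigid(cur, match)
        cur = cur @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc

# A 2D point set rotated by 10 degrees and shifted; ICP recovers the motion.
pts = np.array([[0.0, 0], [1, 0], [2, 0.5], [1, 1], [0, 1], [0.5, 2]])
th = np.radians(10)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t_true = np.array([0.2, -0.1])
moved = pts @ R_true.T + t_true
R, t = icp(pts, moved)
```

The paper's improvement uses the RF prediction as a coarse initialization so this refinement loop starts near the basin of the global optimum, which is exactly what cuts the iteration count.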

15.
Earthwork operations are crucial parts of most construction projects. Heavy construction equipment and workers are often required to work in limited workspaces simultaneously. Struck-by accidents resulting from poor worker and equipment interactions account for a large proportion of accidents and fatalities on construction sites. The emerging technologies based on computer vision and artificial intelligence offer an opportunity to enhance construction safety through advanced monitoring utilizing site cameras. A crucial pre-requisite to the development of safety monitoring applications is the ability to identify accurately and localize the position of the equipment and its critical components in 3D space. This study proposes a workflow for excavator 3D pose estimation based on deep learning using RGB images. In the proposed workflow, an articulated 3D digital twin of an excavator is used to generate the necessary data for training a 3D pose estimation model. In addition, a method for generating hybrid datasets (simulation and laboratory) for adapting the 3D pose estimation model for various scenarios with different camera parameters is proposed. Evaluations prove the capability of the workflow in estimating the 3D pose of excavators. The study concludes by discussing the limitations and future research opportunities.

16.
In industrial fields, precise pose of a 3D workpiece can guide operations like grasping and assembly tasks, thus precise estimation of pose of a 3D workpiece has received intensive attention over the last decades. When utilizing vision system as the source of pose estimation, it is difficult to get the pose of a 3D workpiece from the 2D image data provided by the vision system. Conventional methods face the complexity of model construction and time consumption on geometric matching. To overcome these difficulties, this paper proposes a search-based method to determine appropriate model and pose of a 3D workpiece that match the 2D image data. Concretely, we formulate the above problem as an optimization problem aiming at finding appropriate model parameters and pose parameters which minimizes the error between the notional 2D image (given by the model/pose parameters being optimized) and the real 2D image (provided by the vision system). Due to the coupling of model and pose parameters and discontinuity of the objective function, the above optimization problem cannot be tackled by conventional optimization techniques. Hence, we employ an evolutionary algorithm to cope with the optimization problem, where the evolutionary algorithm utilizes our problem-specific knowledge and adopts a hierarchical coarse-to-fine style to meet the requirement of online estimation.  Experimental results demonstrate that our method is effective and efficient.
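The coarse-to-fine evolutionary search can be caricatured with a tiny elitist evolution strategy whose mutation scale shrinks each generation, wide exploration first, then refinement. This is not the paper's hierarchical algorithm; the toy quadratic objective merely stands in for the notional-vs-real image discrepancy, and all parameters are illustrative.

```python
import random

def evolve(objective, dim, bounds, pop=30, gens=80, seed=1):
    """Elitist (1+pop) evolutionary search with a mutation scale that
    shrinks every generation (coarse-to-fine)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=objective)
    scale = (hi - lo) / 2
    for _ in range(gens):
        kids = [[min(hi, max(lo, x + rng.gauss(0, scale))) for x in best]
                for _ in range(pop)]
        best = min(kids + [best], key=objective)   # elitism: never get worse
        scale *= 0.9                               # refine the search radius
    return best

# Toy objective standing in for the image-discrepancy error in the paper.
target = [0.7, -1.3, 2.1]
def err(x):
    return sum((a - b) ** 2 for a, b in zip(x, target))

sol = evolve(err, dim=3, bounds=(-5.0, 5.0))
```

Because only function evaluations are needed, the same loop tolerates the discontinuous, coupled model/pose objective that defeats gradient-based optimizers.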

17.
3D object pose estimation for robotic grasping and manipulation is a crucial task in the manufacturing industry. In cluttered and occluded scenes, the 6D pose estimation of the low-textured or textureless industrial object is a challenging problem due to the lack of color information. Thus, point cloud that is hardly affected by the lighting conditions is gaining popularity as an alternative solution for pose estimation. This article proposes a deep learning-based pose estimation using point cloud as input, which consists of instance segmentation and instance point cloud pose estimation. The instance segmentation divides the scene point cloud into multiple instance point clouds, and each instance point cloud pose is accurately predicted by fusing the depth and normal feature maps. In order to reduce the time consumption of the dataset acquisition and annotation, a physically-simulated engine is constructed to generate the synthetic dataset. Finally, several experiments are conducted on the public, synthetic and real datasets to verify the effectiveness of the pose estimation network. The experimental results show that the point cloud based pose estimation network can effectively and robustly predict the poses of objects in cluttered and occluded scenes.

18.
In this study, we proposed a high-density three-dimensional (3D) tunnel measurement method, which estimates the pose changes of cameras based on a point set registration algorithm regarding 2D and 3D point clouds. To detect small deformations and defects, high-density 3D measurements are necessary for tunnel construction sites. The line-structured light method uses an omnidirectional laser to measure a high-density cross-section point cloud from camera images. To estimate the pose changes of cameras in tunnels, which have few textures and distinctive shapes, cooperative robots are useful because they estimate the pose by aggregating relative poses from the other robots. However, previous studies mounted several sensors for both the 3D measurement and pose estimation, increasing the size of the measurement system. Furthermore, the lack of 3D features makes it difficult to match point clouds obtained from different robots. The proposed measurement system consists of a cross-section measurement unit and a pose estimation unit; one camera was mounted for each unit. To estimate the relative poses of the two cameras, we designed a 2D–3D registration algorithm for the omnidirectional laser light, and implemented hand-truck and unmanned aerial vehicle systems. In the measurement of a tunnel with a width of 8.8 m and a height of 6.4 m, the error of the point cloud measured by the proposed method was 162.8 and 575.3 mm along 27 m, respectively. In a hallway measurement, the proposed method generated less errors in straight line shapes with few distinctive shapes compared with that of the 3D point set registration algorithm with Light Detection and Ranging.

19.
Currently, the robotic welding of medium-thickness plate structural parts has become a common welding application. With the rapid development of automation technology and robotics, the traditional teaching-playback mode and the off-line programming mode cannot meet the automation demand of welding robots. To realize automatic seam extraction and path planning for robotic welding of medium-thickness plate structural parts without programming and teaching, we use three models of medium-thickness plate structural parts as the research objects to propose a novel seam extraction and path planning method for robotic welding of medium-thickness plate structural parts based on 3D vision. Firstly, a set of improved RANSAC multiplanes fitting algorithms is proposed to accurately obtain the position of the intersection lines between the intersecting planes of the point cloud model. On this basis, we combine the geometric features of three models to propose the specific welding seam extraction methods respectively. Then, according to the spatial structure of the welding seams and the welding process, we carry out the welding path planning. Finally, a welding pose planning method based on the dihedral structure is proposed. Experiment results show that the proposed method can well realize the welding seam extraction, welding path and posture planning of medium-thickness plate structural parts without programming and teaching.
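The building block behind the seam extraction above is RANSAC plane fitting; intersecting two fitted planes then yields a weld seam line. Here is textbook single-plane RANSAC only (the paper proposes an improved multi-plane variant; the data and thresholds below are synthetic).

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Fit one plane (unit normal n, offset d with n·x = d) to noisy points
    by RANSAC: sample 3 points, count inliers, keep the best model."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-9:
            continue                       # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = n @ a
        inliers = np.sum(np.abs(points @ n - d) < tol)
        if inliers > best[2]:
            best = (n, d, inliers)
    return best

rng = np.random.default_rng(1)
plane_pts = np.c_[rng.random((100, 2)), np.zeros(100)]   # the plane z = 0
outliers = rng.random((20, 3)) + np.array([0, 0, 1.0])   # floating clutter
n, d, inl = ransac_plane(np.vstack([plane_pts, outliers]))
```

A multi-plane version repeats this loop, removing each plane's inliers before fitting the next, and the seam is the intersection line of two adjacent fitted planes.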


Copyright©北京勤云科技发展有限公司  京ICP备09084417号