18 similar documents found (search time: 406 ms)
1.
A Novel Stereo Vision System for Parallel Robot Pose Detection
A framework for a stereo-vision pose measurement system for parallel robots is established, comprising five parts: image acquisition and transmission, camera calibration, scale-invariant feature transform (SIFT) matching, spatial point reconstruction, and pose measurement. Built on SIFT, the system handles feature-point matching well under large viewpoint changes with occlusion, translation, rotation, and variations in brightness and scale, achieving high matching accuracy; it is particularly suited to detecting the multi-degree-of-freedom, spatially complex motion of parallel robots. Simulation experiments on parallel robot pose detection validate the method.
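The SIFT matching stage described above typically keeps only unambiguous correspondences via Lowe's ratio test. A minimal numpy sketch of that test (the descriptors and the 0.75 threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: keep a correspondence only if the nearest
    descriptor in desc_b is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]      # nearest and second-nearest
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches
```

In a full pipeline the surviving matches feed the spatial point reconstruction step; ambiguous features (repetitive texture) are discarded by the ratio check.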
2.
To perform grasping accurately, stably, and repeatably, a grasp control method for warehouse transfer robots in low-light environments is studied, based on a stable lightweight network. First, for low-light images of the transfer environment, a stable lightweight encoder/decoder network extracts low-light features of the grasp region and fuses them to obtain normal-light grasp-region features. Second, a depthwise-separable fusion extraction layer processes these features and reconstructs deep features, recovering detail lost during feature extraction. Third, the reconstructed features are fed to the network's output layer, which produces the grasp pose parameters for the robot's gripper. Finally, using target images obtained through hand-eye calibration, the coordinate transformation between the camera frame and the robot frame is acquired, and the grasp pose parameters are converted into grasp control commands for the warehouse transfer robot. Experiments show that the method effectively extracts grasp-region features of the transfer target, predicts the robot's grasp pose, and completes grasp control with high grasping accuracy.
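The final step above, converting a camera-frame grasp point into the robot frame via the hand-eye calibration result, is a homogeneous-transform multiplication. A minimal sketch (the transform values in the usage are invented for illustration):

```python
import numpy as np

def to_robot_frame(T_robot_cam, p_cam):
    """Map a grasp point from camera coordinates to robot coordinates
    using the 4x4 homogeneous transform from hand-eye calibration."""
    p = np.append(np.asarray(p_cam, float), 1.0)   # homogeneous point
    return (T_robot_cam @ p)[:3]

# Example: camera frame offset from robot base by a pure translation.
T = np.eye(4)
T[:3, 3] = [1.0, 2.0, 3.0]
p_robot = to_robot_frame(T, [0.0, 0.0, 0.5])
```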
6.
The application of machine vision to object pose detection is studied. Industrial cameras are mounted at two positions to capture images of the target, and each camera is calibrated with Zhang's method based on a 2D planar target. Coarse localization is performed by subtracting the background image from the target image and applying morphological operations to obtain an initial target region. The Hough transform detects the target's straight edges, and edge lines and their endpoints are matched using the epipolar constraint, gray-level similarity, and a line constraint. The 3D coordinates of the line endpoints and their unit direction vectors are then computed. With a known target template, the target's geometric parameters are known, so its pose is obtained from simple geometric relations.
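Once endpoints are matched across the two calibrated views, their 3D coordinates follow from linear (DLT) triangulation. A minimal sketch, assuming ideal pinhole projection matrices (the specific matrices below are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a point seen in two calibrated
    views. P1, P2: 3x4 projection matrices; x1, x2: image coords."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # null space of A gives the point
    X = Vt[-1]
    return X[:3] / X[3]               # dehomogenize

# Two views: second camera translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, np.array([0.0, 0.0]), np.array([-0.2, 0.0]))
```

A unit direction vector for each detected edge then follows by normalizing the difference of its two triangulated endpoints.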
7.
A robot-assisted ultrasound scanning system using a Kinect sensor for visual servoing is proposed to plan and guide the robot's scan path. The system consists of the Kinect sensor, a robot, and an ultrasound probe. The Kinect acquires RGB and depth images of the probe in real time and computes its current pose; combined with the coordinate-frame registration result, the robot's pose is obtained, and the pre-operative trajectory plan then guides the robot's scan path. Leg-phantom experiments verified the system's feasibility: camera calibration of the Kinect yielded the intrinsic and extrinsic parameters of the RGB and depth cameras; the probe's pose was computed by localizing markers on the probe; registration of the Kinect and robot frames yielded their transformation matrix; and position commands guided the arm to carry the probe to the specified scan location. During scanning, the probe-to-leg distance is computed in real time to ensure ultrasound image quality and operational safety. The results show that, guided by the Kinect vision system, the robot can hold the probe and perform autonomous ultrasound scanning, reducing sonographers' scanning time and workload.
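The real-time probe-to-leg distance check suggests a safety policy that modulates scan speed with clearance. A hedged sketch of one such policy (the thresholds, the linear ramp, and the function name are assumptions, not details from the paper):

```python
def scan_speed(clearance_mm, d_min=5.0, d_max=20.0, v_max=10.0):
    """Scale scan speed with probe-to-skin clearance: stop below d_min,
    full speed above d_max, linear ramp in between (assumed policy)."""
    if clearance_mm <= d_min:
        return 0.0
    if clearance_mm >= d_max:
        return v_max
    return v_max * (clearance_mm - d_min) / (d_max - d_min)
```

Clamping to zero at small clearance protects the patient; the ramp avoids abrupt velocity changes that would blur the acquired ultrasound frames.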
10.
To detect weakly textured, randomly stacked objects common in industry and estimate their poses, a workpiece recognition and grasping system based on an instance segmentation network and iterative refinement is proposed. The system comprises three modules: image acquisition, object detection, and pose estimation. The image acquisition module uses a dual RGB-D camera arrangement that fuses three depth images to obtain higher-quality depth data. The object detection module improves the instance segmentation network Mask R-CNN (region-based convolutional neural network) by taking both the color image and the HHA (horizontal disparity, height above ground, angle with gravity) encoding of the 3D information as input and inserting an STN (spatial transformer network) module, improving segmentation of weakly textured objects; the target point cloud is then segmented using the point-cloud data. Building on the detection results, the pose estimation module matches the segmented point cloud against the target model's point cloud with an improved 4PCS (4-points congruent set) algorithm and refines the pose with ICP (iterative closest point), and the robot grasps according to the final pose estimate. Experiments on a self-collected workpiece dataset and on a physical sorting system show that the system achieves fast recognition and pose estimation of objects of different shapes that are weakly textured and randomly stacked, with position error down to 1 mm and angular error down to 1°, meeting practical requirements.
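At the core of the ICP refinement step is the closed-form rigid alignment (Kabsch/SVD) that fits a rotation and translation to corresponded point sets. A minimal sketch of that inner step, with correspondences assumed known (full ICP re-estimates them each iteration):

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (the SVD step used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```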
11.
A new visual measurement method is proposed to estimate the three-dimensional (3D) position of an object on the floor using a single camera. The camera, fixed on a robot, is inclined with respect to the floor. A measurement model involving the camera's extrinsic parameters, namely its height and pitch angle, is described. Once the camera's intrinsic parameters are calibrated, a single image of a chessboard pattern placed on the floor is enough to calibrate the extrinsic parameters. The position of an object on the floor can then be computed with the measurement model. Furthermore, the height of an object can be calculated from paired points on a vertical line that share the same floor position. Compared with conventional methods that estimate positions only on the plane, this method obtains full 3D positions. Indoor experiments verify the accuracy and validity of the proposed method.
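The measurement model amounts to back-projecting a pixel ray from a camera at height h, pitched down by a known angle, and intersecting it with the floor plane. A numpy sketch under assumed axis conventions (camera x right, y down, z forward; world X forward, Z up; the specific intrinsics below are invented):

```python
import numpy as np

def pixel_to_floor(u, v, fx, fy, cx, cy, h, pitch):
    """Project pixel (u, v) onto the floor plane for a camera at height
    h pitched down by `pitch` radians from horizontal (sketch of the
    measurement model; axis conventions are an assumption)."""
    xn, yn = (u - cx) / fx, (v - cy) / fy        # normalized image coords
    denom = np.sin(pitch) + yn * np.cos(pitch)   # ray must hit the floor
    s = h / denom                                # ray scale at the floor
    X = s * (np.cos(pitch) - yn * np.sin(pitch)) # forward distance
    Y = s * xn                                   # lateral offset
    return X, Y
```

At the principal point the formula reduces to X = h / tan(pitch), the distance where the optical axis meets the floor, which is a handy sanity check.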
13.
This paper considers the camera-space position and orientation regulation problem for the camera-in-hand configuration via visual servoing in the presence of parametric uncertainty associated with the robot dynamics and the camera system. Specifically, an adaptive robot controller is developed that forces the end-effector of a robot manipulator to move such that the position and orientation of an object are regulated to a desired position and orientation in camera space, despite parametric uncertainty throughout the entire robot-camera system. An extension illustrates how slight modifications to the camera-in-hand control law achieve adaptive position and orientation tracking of the end-effector in camera space for a fixed-camera configuration. Simulation results illustrate the performance of the adaptive camera-in-hand controller. © 2005 Wiley Periodicals, Inc.
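Stripped of the adaptive terms that handle parameter uncertainty, the regulation objective reduces to driving the camera-space feature error to zero. A deliberately simplified discrete-time sketch (proportional law only; the paper's actual controller additionally estimates the unknown robot and camera parameters online):

```python
import numpy as np

def regulate(e0, k=0.5, steps=50):
    """Discrete-time camera-space regulation sketch: drive the feature
    error e toward zero with a proportional law u = -k*e (adaptive
    parameter-estimation terms omitted for brevity)."""
    e = np.asarray(e0, float)
    for _ in range(steps):
        e = e - k * e            # apply u for one unit time step
    return e
```

With 0 < k < 2 the error contracts geometrically each step; the adaptive machinery in the paper exists precisely because the true error dynamics depend on unknown gains.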
14.
Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters
The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new uncertain chained model of the nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. A novel time-varying feedback controller is then proposed that exponentially stabilizes the robot's position and orientation using visual feedback and a switching strategy when the camera parameters are unknown. The exponential stability of the closed-loop system is rigorously proven. Simulation results demonstrate the effectiveness of the proposed method.
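For readers unfamiliar with chained forms, the simplest case is the unicycle, whose pose maps to a three-state chained system by a standard change of coordinates (the paper works with the richer type (1,2) model and adds unknown camera scale factors; this is only the textbook building block):

```python
import numpy as np

def unicycle_to_chained(x, y, theta):
    """One common chained-form change of coordinates for the unicycle:
    z1 = x, z2 = tan(theta), z3 = y, giving z1' = v1, z2' = v2,
    z3' = z2 * v1 under the input transform v1 = v*cos(theta),
    v2 = omega / cos(theta)**2 (valid away from theta = pi/2)."""
    return x, np.tan(theta), y
```

Time-varying and switching controllers are then designed directly on the chained states, since smooth static feedback cannot stabilize them (Brockett's condition).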
15.
《Advanced Robotics》2013,27(6):737-762
The latest advances in hardware technology and the state of the art in mobile robotics and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. A mobile service robot requires perception of its present position to co-exist with humans and support them effectively in populated environments; to realize this, the robot needs to keep track of relevant changes in the environment. This paper proposes localization of a mobile robot using images recognized by distributed intelligent networked devices in intelligent space (ISpace). The scheme combines the observed position from dead-reckoning sensors with the position estimated from images of moving objects, such as a walking human captured by a camera system, to determine the location of the robot. The moving object is modeled as a point object projected onto the image plane, yielding a geometrical constraint equation that provides the object's position based on the kinematics of ISpace. Using the a priori known path of the moving object and a perspective camera model, geometric constraint equations are derived that relate the object's image-frame coordinates to the estimated robot position. The proposed method uses the error between the observed and estimated image coordinates to localize the mobile robot, with a Kalman filtering scheme estimating the robot's location. The approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in localization, and its performance is verified by computer simulation and experiment. 相似文献
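The fusion described above, dead-reckoning driving the prediction and a vision-derived measurement correcting it, is the classic Kalman predict/update cycle. A scalar sketch with invented noise covariances (the paper's filter operates on the full robot pose):

```python
import numpy as np

def kalman_1d(x, P, u, z, Q=0.01, R=0.1):
    """One predict/update cycle: odometry increment u drives the
    prediction; camera-derived position z corrects it."""
    # predict with dead reckoning
    x_pred = x + u
    P_pred = P + Q
    # update with the vision measurement
    K = P_pred / (P_pred + R)            # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

After each update the posterior variance P shrinks, which is exactly the "reduction of uncertainty" the abstract reports.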
16.
This paper presents a 3D noncontact sensor system designed to measure the position and orientation of a robot end effector. The measurement system has two parts: a three-dimensional target consisting of four spheres placed along the axes of a tetrahedron, and a set of three orthogonally pointed cameras. The goal is a measurement system with relationships simple enough to satisfy real-time constraints. The system has been used in two experiments: first to calibrate a parallel robot and validate its geometric control performance, then as an exteroceptive sensor in an assembly task. The system computes the position and orientation of the tetrahedron in 100 ms. The position and orientation accuracies are 0.6 mm and 0.2 deg, respectively, over a cubic workspace with 0.3 m sides.
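With three orthogonally pointed cameras, each view constrains two of a sphere center's three coordinates. An idealized sketch of the fusion under an orthographic-view assumption (the real system uses perspective cameras and calibrated geometry; this only illustrates why three orthogonal views fully determine a 3D point):

```python
import numpy as np

def fuse_orthogonal_views(xy_cam, yz_cam, xz_cam):
    """Fuse three idealized orthogonal views: each observes two of the
    three world coordinates; overlapping observations are averaged."""
    x = (xy_cam[0] + xz_cam[0]) / 2
    y = (xy_cam[1] + yz_cam[0]) / 2
    z = (yz_cam[1] + xz_cam[1]) / 2
    return np.array([x, y, z])
```

Repeating this for all four sphere centers gives the point set from which the tetrahedron's position and orientation follow by rigid fitting.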
17.
A robotic-arm grasping and sorting system with 3D stereo vision was developed using an industrial camera, an industrial projector, an ordinary webcam, a computer, and a robotic arm. Custom software controls and synchronizes the industrial camera and projector. Object height information is obtained with a dual-wavelength fringe-projection 3D profilometry method proposed in earlier work, and is combined with 2D information about the object's parallel face captured by the webcam using OpenCV, enabling automatic object recognition and classification. The processed data are sent to the robotic arm over a serial protocol; the system solves the geometric grasp pose and performs intelligent grasping, adaptively adjusting the gripper opening from pressure feedback on the fingers. Experiments show that, with its built-in fast 3D profilometry device, the system can accurately and quickly grasp objects of arbitrary shape within the working range and sort them intelligently.
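Dual-wavelength fringe projection commonly uses heterodyne decoding: the beat of two wrapped phases resolves height unambiguously over the larger synthetic wavelength. A sketch under the assumption of a linear phase-to-height scale (the paper's actual calibration and wavelengths are not given):

```python
import numpy as np

def two_wavelength_height(phi1, phi2, lam1, lam2):
    """Heterodyne two-wavelength decoding: the beat phase of two fringe
    patterns measures height over the synthetic wavelength
    lam_eq = lam1*lam2/|lam1 - lam2| (linear scale assumed)."""
    lam_eq = lam1 * lam2 / abs(lam1 - lam2)
    beat = np.mod(phi1 - phi2, 2 * np.pi)   # unambiguous beat phase
    return beat / (2 * np.pi) * lam_eq
```

Choosing two close wavelengths makes lam_eq large, extending the unambiguous height range far beyond either single-wavelength pattern.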
18.
The problem addressed is feedback from noncontact sensing for guiding robots during docking and gripping. The sensor is a "range camera" onboard a mobile robot (MRb). To specify the docking task completely, both the posture (position/orientation) and the required tolerances must be given; these tolerances are then used in the feedback control loop during docking. The algorithms are divided into three parts: extraction of posture parameters from the range camera, dynamic filtering for finding association gates and protecting the system against spurious measurements, and a feedback controller. The feedback controller is separated into geometric control and tolerance control. The geometric control uses a range-varying LQG-designed feedback control law to generate trajectories toward the object. The tolerance control adjusts the approach velocity so that the robot has a sufficient number of observations and control cycles to meet the required tolerances; thus, during the approach there is conditional re-planning of the trajectory. For simplicity, only three kinematic state variables (x, y, θ) are used for the MRb. Gripping with an industrial robot (IRb) is an equivalent problem. Successful experiments were made with range resolution varying by more than a factor of 50, so the resolution volume in (x, y, θ)-space varied by several orders of magnitude during the tests. The final errors in range and orientation are essentially limited by the resolution of the range camera. A persistent conclusion from the experiments is the importance of correct association between the range measurements and the corresponding parts of the object. © 1997 John Wiley & Sons, Inc.
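The tolerance-control idea, slowing the approach so enough observations remain to meet the required tolerance, can be sketched with a simple i.i.d. noise model in which averaging n range samples shrinks the error std by 1/sqrt(n) (the noise model, names, and numbers are assumptions; the paper's controller is LQG-based):

```python
import numpy as np

def approach_speed(distance, tol, sigma_meas, cycle_time, v_max):
    """Tolerance-control sketch: choose a speed low enough that the
    remaining approach contains the measurement cycles needed to
    average range noise below the required tolerance."""
    n_needed = int(np.ceil((sigma_meas / tol) ** 2))  # samples to average
    v_tol = distance / (n_needed * cycle_time)        # speed allowing them
    return min(v_max, v_tol)
```

Far from the object the cap v_max dominates; close in, the tolerance term takes over, which matches the abstract's conditional re-planning of the trajectory during approach.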