Similar Documents
 20 similar documents retrieved (search time: 156 ms)
1.
于涵  李一染  毕书博  刘迎圆  安康 《计算机工程》2021,47(1):298-304,311
In conventional explosive-ordnance-disposal (EOD) robot grasping systems based on fixed vision, the camera view is easily occluded and image clarity cannot be guaranteed. Based on follow-up vision, an autonomous grasping system for EOD robots is proposed in which a depth camera is mounted at the end of the manipulator and moves with it. The depth camera computes the three-dimensional coordinates of the target object, a coordinate transformation converts the target's position into the robot's global coordinate frame in real time, and the dynamic mapping among the camera frame, the robot global frame, and the gripper tool frame of the end effector is studied to achieve autonomous grasping. Experimental results show that, compared with the conventional fixed-vision approach, the follow-up vision method enables the gripper to reach the target position within an error of 2 cm, and grasping performs best when the robot is 100 cm to 150 cm from the target.
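As a rough illustration of the real-time coordinate conversion described above (a generic eye-in-hand sketch, not the paper's code), the snippet below chains two homogeneous transforms to map a point measured in the moving camera frame into the robot's global frame; all names and numeric values are placeholders.

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_point_to_base(p_cam, T_base_ee, T_ee_cam):
    """Map a 3D point from the camera frame into the robot base (global) frame.

    T_base_ee : current end-effector pose in the base frame (from forward kinematics).
    T_ee_cam  : fixed camera pose relative to the end effector (from hand-eye calibration).
    """
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous coordinates
    return (T_base_ee @ T_ee_cam @ p)[:3]

# Placeholder poses: identity rotations, camera offset 5 cm along the tool z-axis.
T_ee_cam = to_homogeneous(np.eye(3), [0.0, 0.0, 0.05])
T_base_ee = to_homogeneous(np.eye(3), [0.4, 0.0, 0.3])
print(camera_point_to_base([0.0, 0.0, 0.5], T_base_ee, T_ee_cam))
```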

2.
To overcome the limitation of traditional manipulators, which can only mechanically grasp specific objects in fixed poses following preset procedures, an intelligent machine-vision-based grasping system for non-specific objects is designed. The system uses a dedicated convolutional neural network to locate the target in images captured by a depth camera and to predict a reliable grasp position on the image; the grasp position is then fed back to the manipulator, which completes the grasping operation accordingly. The system is built on the Robot Operating System (ROS), with hardware components exchanging the necessary information through ROS topics. Repeated experiments show that, with an improved rapidly-exploring random tree (RRT) motion-planning algorithm, the desktop manipulator can grasp non-specific objects in different poses in real time according to the positions labeled by the neural network model, which improves the manipulator's autonomy to some extent and compensates for the shortcomings of traditional manipulators.
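Since the abstract mentions passing information between components through ROS topics, here is a minimal ROS 1 (rospy) sketch of that mechanism; the topic name /grasp_pose and the message type are assumptions for illustration, not details from the paper.

```python
# Minimal ROS 1 (rospy) sketch of the topic mechanism mentioned above: a vision node
# publishes a predicted grasp pose and a manipulator node subscribes to it.
# The topic name /grasp_pose is a placeholder, not taken from the paper.
import rospy
from geometry_msgs.msg import PoseStamped

def on_grasp_pose(msg):
    # In a real system this callback would trigger motion planning (e.g. an RRT variant).
    rospy.loginfo("grasp target: x=%.3f y=%.3f z=%.3f",
                  msg.pose.position.x, msg.pose.position.y, msg.pose.position.z)

def main():
    rospy.init_node("grasp_listener")
    rospy.Subscriber("/grasp_pose", PoseStamped, on_grasp_pose)
    rospy.spin()

if __name__ == "__main__":
    main()
```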

3.
We report an autonomous surveillance system with multiple pan-tilt-zoom (PTZ) cameras assisted by a fixed wide-angle camera. The wide-angle camera provides large but low-resolution coverage and detects and tracks all moving objects in the scene. Based on the output of the wide-angle camera, the system generates spatiotemporal observation requests for each moving object, which are candidates for close-up views using PTZ cameras. Because there are usually many more objects than PTZ cameras, the system first assigns a subset of the requests/objects to each PTZ camera. The PTZ cameras then select the parameter settings that best satisfy the assigned competing requests to provide high-resolution views of the moving objects. We propose an approximation algorithm to solve the request assignment and the camera parameter selection problems in real time. The effectiveness of the proposed system is validated in both simulation and physical experiments. A simulation-based comparison with an existing approach shows that, in heavy-traffic scenarios, our algorithm increases the number of observed objects by over 210%.
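The paper's real-time approximation algorithm is not reproduced here; as a naive baseline only, the sketch below assigns observation requests to PTZ cameras greedily, serving the most constrained requests first and balancing per-camera load. Request and camera identifiers are placeholders.

```python
def greedy_assign(requests, capacity):
    """Naive greedy baseline for assigning observation requests to PTZ cameras.

    requests : list of (request_id, set_of_feasible_camera_ids)
    capacity : dict camera_id -> max number of requests it can serve in this round
    Returns a dict camera_id -> list of assigned request ids.
    """
    remaining = dict(capacity)
    assignment = {cam: [] for cam in capacity}
    # Serve the most constrained requests (fewest feasible cameras) first.
    for req_id, feasible in sorted(requests, key=lambda r: len(r[1])):
        candidates = [cam for cam in feasible if remaining.get(cam, 0) > 0]
        if candidates:
            cam = max(candidates, key=lambda c: remaining[c])   # balance the load
            assignment[cam].append(req_id)
            remaining[cam] -= 1
    return assignment

# Example: three requests, two PTZ cameras, each camera able to serve two requests per round.
print(greedy_assign([("r1", {"ptz1"}), ("r2", {"ptz1", "ptz2"}), ("r3", {"ptz2"})],
                    {"ptz1": 2, "ptz2": 2}))
```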

4.
A tracked object must present a proper field of view (FOV) in a multiple-active-camera surveillance system; its clarity facilitates smooth processing by the surveillance system before further steps such as face recognition. When pan–tilt–zoom (PTZ) cameras are used, the tracked object can be brought into the FOV by adjusting the camera's intrinsic parameters; consequently, selection of the best-performing camera is critical. Performance is determined by the relative positions of the camera and the tracked objects, image quality, lighting, and how much of the front side of the object faces the camera. In a multi-camera surveillance system, both camera hand-off and camera assignment play an important role in automated and persistent tracking, which are typical surveillance requirements. This study investigates automatic methods for tracking an object across cameras in a surveillance network of PTZ cameras, and an automatic, efficient continuous tracking scheme is developed. The goal is to determine the decision criteria for hand-off using the Sight Quality Indication (SQI), which includes information on the position of the object and the proportion of the object's front that faces the camera, and to perform the camera hand-off task in a manner that optimizes the vision effect associated with monitoring. Experimental results reveal that the proposed algorithm can be executed efficiently and that the hand-off method is feasible for continuously tracking active objects under real-time surveillance.
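The paper's Sight Quality Indication (SQI) is not defined in this abstract, so the sketch below does not compute it; it only illustrates a generic hand-off rule over already-computed per-camera quality scores, with a hysteresis margin to avoid rapid back-and-forth hand-offs. All names and values are illustrative.

```python
def choose_camera(current_cam, scores, margin=0.1):
    """Generic hand-off rule over per-camera quality scores (e.g. SQI-like values in [0, 1]):
    keep the current camera unless another camera's score exceeds it by `margin`,
    which adds hysteresis and avoids oscillating hand-offs."""
    best_cam = max(scores, key=scores.get)
    if current_cam not in scores:
        return best_cam
    if scores[best_cam] > scores[current_cam] + margin:
        return best_cam
    return current_cam

# Camera 'B' is only slightly better, so no hand-off; a clearly better 'C' triggers one.
print(choose_camera("A", {"A": 0.62, "B": 0.66}))             # -> 'A'
print(choose_camera("A", {"A": 0.62, "B": 0.66, "C": 0.80}))  # -> 'C'
```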

5.
The purpose of this study is to control the position of an underactuated underwater vehicle manipulator system (U-UVMS). It is possible to control the end-effector using a regular 6-DOF manipulator despite the undesired displacements of the underactuated vehicle within a certain range. However, in this study an 8-DOF redundant manipulator is used in order to increase the positioning accuracy of the end-effector. The redundancy is resolved according to the criterion of minimal vehicle and joint motions. The underactuated underwater vehicle redundant manipulator system is modeled including the hydrodynamic forces for the manipulator in addition to those for the autonomous underwater vehicle (AUV). The shadowing effects of the bodies on each other are also taken into account when computing the hydrodynamic forces. The Newton-Euler formulation is used to derive the system equations of motion including the thruster dynamics. In order to establish the end-effector trajectory tracking control of the system, an inverse dynamics control law is formulated. The effectiveness of the control law even in the presence of parameter uncertainties and disturbing ocean currents is illustrated by simulations.
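For orientation, a textbook computed-torque form of an inverse dynamics control law is sketched below; the paper's actual law additionally accounts for hydrodynamic forces, thruster dynamics, and the underactuated vehicle, so this is only a simplified illustration with the model terms supplied as callables.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
    """Textbook inverse dynamics (computed-torque) law:
    tau = M(q) (qdd_des + Kd (qd_des - qd) + Kp (q_des - q)) + C(q, qd) qd + g(q).
    M, C, g are model callables; Kp, Kd are positive-definite gain matrices."""
    e, edot = q_des - q, qd_des - qd
    v = qdd_des + Kd @ edot + Kp @ e            # stabilized reference acceleration
    return M(q) @ v + C(q, qd) @ qd + g(q)
```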

6.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive the dynamic workspace in an unknown environment and locate targets, enabling the manipulator to autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides 3D point clouds of any target in its field of view, helping the collaborative robot perceive its surroundings. To obtain the transformation between the coordinate frames of the grasping robot and the RGB-D camera, a robot hand-eye calibration method based on the yolov3 object-detection neural network is proposed. A 3D-printed sphere is clamped at the end of the manipulator as the calibration target, an improved yolov3 network locates the sphere center in real time to compute the 3D position of the manipulator end-point in the camera frame, and singular value decomposition (SVD) is used to obtain the least-squares solution of the transformation matrix between the robot and camera frames. Experiments on a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the calibration method requires no auxiliary equipment, that the position error of transformed spatial points is within 2 mm, and that it satisfies the grasping requirements of typical visual-servoing intelligent robots.
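A common way to obtain such an SVD-based least-squares rigid transform from corresponding 3D point pairs (e.g., sphere-center positions expressed in the camera and robot frames) is the Kabsch-style procedure sketched below; it is a generic implementation under that assumption, not the paper's code.

```python
import numpy as np

def rigid_transform_svd(P_cam, P_robot):
    """Least-squares rigid transform (R, t) such that R @ p_cam + t ~ p_robot.

    P_cam, P_robot : (N, 3) arrays of corresponding 3D points, N >= 3 and not collinear.
    """
    P_cam, P_robot = np.asarray(P_cam, float), np.asarray(P_robot, float)
    c_cam, c_rob = P_cam.mean(axis=0), P_robot.mean(axis=0)
    H = (P_cam - c_cam).T @ (P_robot - c_rob)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_rob - R @ c_cam
    return R, t
```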

7.
An intelligent robotic-arm grasping and sorting system with 3D stereo vision was developed using an industrial camera, an industrial projector, an ordinary webcam, a computer, and a robotic arm. Self-written software automatically controls and synchronizes the industrial camera and projector; the dual-wavelength fringe-projection 3D shape measurement method proposed in earlier work provides the object's height information, which is combined with OpenCV processing of the 2D in-plane information captured by the ordinary webcam to achieve automatic object recognition and classification. The processed data are transmitted to the robotic arm over a serial communication protocol, the system solves the geometric pose, and intelligent grasping is performed; the gripper opening is adjusted automatically according to pressure feedback from the gripper, achieving adaptive grasping. Experiments show that, with its built-in fast 3D shape acquisition device, the system can accurately and quickly grasp objects of arbitrary shape within the working range and sort them intelligently.
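The serial protocol used by the system is not described in the abstract, so the sketch below only illustrates sending a solved grasp pose to the arm over a serial link with pyserial; the port name, baud rate, and message format are invented placeholders.

```python
# Illustrative sketch of sending a solved grasp pose to the arm over a serial link
# (pyserial). The port name, baud rate, and message format are placeholders, not
# the protocol used in the paper.
import serial

def send_grasp_pose(port, x_mm, y_mm, z_mm, yaw_deg):
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link:
        msg = "G,{:.1f},{:.1f},{:.1f},{:.1f}\n".format(x_mm, y_mm, z_mm, yaw_deg)
        link.write(msg.encode("ascii"))
        return link.readline().decode("ascii", errors="replace").strip()  # e.g. an ACK line

# send_grasp_pose("/dev/ttyUSB0", 120.0, 45.5, 30.0, 90.0)
```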

8.
This paper discusses an extended camera model for ray tracing. As an alternative to standard camera modules, an abstract camera machine is presented. It represents a framework of extended cameras which is based on standard mapping functions. They are integrated within the abstract camera machine to complete the camera function, which generates rays out of image locations (pixels). Modelling the camera function as an abstract camera machine in combination with standard mapping functions opens a wide field of applications, and the specification of extended cameras is greatly simplified. By using extended cameras it is easily possible to produce special and artistic effects, e.g. a local non-linear zoom of especially interesting regions while still retaining an overview of the whole scene. Overviews of given scenes can be modelled and several views of the same object can be integrated into one picture. Several examples of extended cameras designed with the abstract camera machine are discussed and colour plates made with these cameras are presented.

9.
To improve the accurate-sorting capability of robotic-arm equipment, a multi-degree-of-freedom sorting control system based on the RBF-BP algorithm is designed. An RBF-BP feed-forward neural network plans the motion trajectory of the arm, providing an RBF-BP-based model of the arm's motion trajectory. The connection position of the industrial camera within the overall functional framework is determined and, according to the selected arm model, the regulation capabilities of the programmable logic controller and the variable-frequency controller over the chosen arm components are established, completing the development of the system's functional modules. Sorting and ToolControl instructions are fed back to the core control host over the transmission channel to form a complete instruction-execution loop, which, together with the related hardware structure, realizes the RBF-BP-based multi-degree-of-freedom sorting control system. Experimental results show that, with the same grasping capability, the arm controlled by the RBF-BP-based system successfully sorted 19 workpieces with a grasping success rate of 95%, indicating that the designed system fulfills its goal of improving the arm's accurate-sorting capability.

10.
苏杰  张云洲  房立金  李奇  王帅 《机器人》2020,42(2):129-138
To address the difficulty robots face in quickly and stably grasping unknown objects in unstructured environments, a grasp pose estimation method for unknown objects based on multiple geometric constraints is proposed. A depth camera acquires the geometric point cloud of the scene, the point cloud is preprocessed to obtain the target object, and grasp-pose samples are generated using a simplified geometric constraint of the gripper shape. The samples are then quickly and coarsely filtered using a simplified force-closure constraint, a force-balance constraint analysis of the grasp contour of each pose is performed, and the stable poses are sent to the robot for execution. An experimental platform consisting of a depth camera and a 6-DOF manipulator was used to grasp objects of different poses and shapes. The results show that the method copes effectively with a wide variety of objects lacking 3D models and performs well in both single-target and multi-target scenes.

11.
We present a surveillance system, comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras, which automatically captures high-resolution videos of pedestrians as they move through a designated area. A wide-FOV static camera can track multiple pedestrians, while any PTZ active camera can capture high-quality videos of one pedestrian at a time. We formulate the multi-camera control strategy as an online scheduling problem and propose a solution that combines the information gathered by the wide-FOV cameras with weighted round-robin scheduling to guide the available PTZ cameras, such that each pedestrian is observed by at least one PTZ camera while in the designated area. A centerpiece of our work is the development and testing of experimental surveillance systems within a visually and behaviorally realistic virtual environment simulator. The simulator is valuable as our research would be more or less infeasible in the real world given the impediments to deploying and experimenting with appropriately complex camera sensor networks in large public spaces. In particular, we demonstrate our surveillance system in a virtual train station environment populated by autonomous, lifelike virtual pedestrians, wherein easily reconfigurable virtual cameras generate synthetic video feeds. The video streams emulate those generated by real surveillance cameras monitoring richly populated public spaces. A preliminary version of this paper appeared as [1].
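The exact weighted round-robin policy is not given in the abstract; the sketch below shows a generic smooth weighted round-robin selector, where the weights could, for example, reflect how urgently each pedestrian needs a close-up view. Names and weights are illustrative only.

```python
class SmoothWeightedRoundRobin:
    """Generic smooth weighted round-robin selector (illustrative, not the paper's scheduler)."""

    def __init__(self, weights):
        self.weights = dict(weights)               # item -> positive integer weight
        self.current = {k: 0 for k in self.weights}

    def next(self):
        total = sum(self.weights.values())
        for k, w in self.weights.items():
            self.current[k] += w                   # accumulate priority
        pick = max(self.current, key=self.current.get)
        self.current[pick] -= total                # penalize the selected item
        return pick

# Example: pedestrian 'A' is selected twice as often as 'B' or 'C'.
sched = SmoothWeightedRoundRobin({"A": 2, "B": 1, "C": 1})
print([sched.next() for _ in range(8)])
```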

12.
Pan–tilt–zoom (PTZ) cameras are well suited for object identification and recognition in far-field scenes. However, the effective use of PTZ cameras is complicated by the fact that continuous online camera calibration is needed and the absolute pan, tilt and zoom values provided by the camera actuators cannot be used because they are not synchronized with the video stream. Accurate calibration must therefore be extracted directly from the visual content of the frames. Moreover, the large and abrupt scale changes, the scene background changes due to the camera operation and the need for camera motion compensation make target tracking with these cameras extremely challenging. In this paper, we present a solution that provides continuous online calibration of PTZ cameras which is robust to rapid camera motion and to changes of the environment due to varying illumination or moving objects. The approach also scales beyond thousands of scene landmarks extracted with the SURF keypoint detector. The method directly derives the relationship between the position of a target in the ground plane and the corresponding scale and position in the image, and allows real-time tracking of multiple targets with a high and stable degree of accuracy even at far distances and any zoom level.
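The relationship between a target's ground-plane position and its image position is commonly modeled with a planar homography; the sketch below applies such a 3x3 homography with placeholder values, purely as background and not as the paper's calibration method.

```python
import numpy as np

def ground_to_image(H, xy_ground):
    """Map a ground-plane point (x, y) to pixel coordinates via a 3x3 homography H."""
    p = H @ np.array([xy_ground[0], xy_ground[1], 1.0])
    return p[:2] / p[2]                       # perspective division

# Placeholder homography (in practice it would come from calibration against scene landmarks).
H = np.array([[100.0,   0.0, 640.0],
              [  0.0, 100.0, 360.0],
              [  0.0,   0.05,   1.0]])
print(ground_to_image(H, (2.0, 5.0)))         # pixel coordinates of a point 2 m x 5 m away
```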

13.
This article proposes multiple self-organizing maps (SOMs) for control of a visuo-motor system that consists of a redundant manipulator and multiple cameras in an unstructured environment. The maps control the manipulator so that its end-effector reaches targets given in the camera images. The maps also make the manipulator take obstacle-free poses. Multiple cameras are introduced to avoid occlusions, and multiple SOMs are introduced to deal with the multiple camera images. Some simulation results are shown.

14.
Grasp planning and control are difficult problems in robotic-arm grasping systems. To address them effectively, this paper proposes a grasping system that combines machine vision with a microcontroller. The target is first located using earlier vision-measurement results; a software interface is then designed to visualize the 3D information of the target surface, and a good grasp point is selected manually based on human experience. Combined with inverse kinematics and a trajectory-planning algorithm, the microcontroller drives the servos so that the arm's end effector...

15.
王挺  王越超 《机器人》2008,30(1):1-12
A method is introduced that uses human-machine cooperation to guide a manipulator to grasp static targets in unstructured environments. Methods are described for obtaining the target's grasp position by combining a laser-CCD camera system with the operator's experience, and for obtaining the target's grasp orientation by combining virtual reality technology with the operator's experience. Model-based visual guidance is then used to guide the arm through the grasping operation.

16.
To reduce the heavy reliance on manual labor for loading and unloading yarn bobbins, a bionic gripper for bobbin grasping oriented toward intelligent manufacturing is developed on the basis of research into bionic fingers. First, following a modular design approach, a bionic gripper structure suitable for bobbin grasping is designed, with rope (tendon) transmission chosen as the drive mechanism. Second, the composition of the bionic gripper and its grasping principle are analyzed in detail; the D-H (Denavit-Hartenberg) coordinate method is used to transform between the finger coordinate frame and the finger-base coordinate frame, the position equation of the gripper tip is derived, and the optimal grasping posture is obtained. Finally, finite-element software is used to build a 3D model of the underactuated bionic gripper and to perform virtual assembly and motion-simulation analysis, verifying the feasibility and stability of grasping bobbins and establishing the key technology of a bionic gripper for intelligent robotic grasping.
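For context on the D-H method mentioned above, the sketch below chains one standard Denavit-Hartenberg transform per joint to compute a fingertip position; the link parameters are made up for illustration and are not the gripper's actual values.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Single standard D-H transform from frame i-1 to frame i."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def fingertip_position(joint_angles, dh_rows):
    """Chain the per-joint transforms and return the fingertip position in the base frame."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Made-up 3-link planar finger: link lengths 40, 30, 20 mm, all joints revolute about z.
dh_rows = [(0.0, 0.040, 0.0), (0.0, 0.030, 0.0), (0.0, 0.020, 0.0)]
print(fingertip_position(np.radians([30, 20, 10]), dh_rows))
```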

17.
For the deep-sea pendulous installation of equipment used in ocean engineering, computing underwater 3D motion trajectories by multi-camera video motion analysis can guide structural installation and help analyze the underwater motion characteristics of the equipment. Underwater video and image acquisition and processing face many challenges: first, because the underwater environment contains many suspended particles, light is scattered and underwater images are degraded; second, a major obstacle for underwater video motion analysis is the image error caused by refraction. Since light refracts at the interfaces between water, glass, and air and the optical path bends, the camera imaging model used on land no longer applies underwater, and a new underwater camera imaging model is required. This paper introduces an underwater camera imaging model that accounts for light refraction, studies methods for calibrating the intrinsic and extrinsic parameters of underwater cameras, and computes the trajectory of an underwater target from videos of its motion captured by three fixed underwater cameras. The method is suitable for large-range motion of underwater objects in a water-tank environment, yields fairly accurate trajectories, and has been verified experimentally.
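As a small illustration of the refraction the imaging model must account for (generic Snell's law in vector form, not the paper's camera model), the sketch below bends a ray direction at a flat interface between two media.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a flat interface with unit normal n (pointing toward
    the incoming ray), going from refractive index n1 into n2 (vector form of Snell's law).
    Returns the refracted unit direction, or None on total internal reflection."""
    d, n = np.asarray(d, float), np.asarray(n, float)
    cos_i = -np.dot(d, n)
    r = n1 / n2
    sin2_t = r * r * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                        # total internal reflection
    t = r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n
    return t / np.linalg.norm(t)

# A ray 30 degrees off the normal passing from air (1.0) into water (1.33) bends toward the normal.
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
print(refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.33))
```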

18.
A face detection system with multi-sensor cooperation for visual surveillance applications   Cited by: 2 (self-citations: 0, citations by others: 2)
A novel multi-sensor visual surveillance system composed of two controllable cameras is proposed, aiming at real-time tracking and characterization of moving targets in outdoor environments. In particular, the system uses a movable camera operating at multiple zoom levels to automatically acquire and track faces across consecutive video frames, supported by a fixed wide-area camera that performs automatic target tracking and classification.

19.
This paper describes camera position control with an aerial manipulator for the visual test in bridge inspection. Our unmanned aerial vehicle (UAV) carries a three-degree-of-freedom (3-DoF) manipulator on its top to execute the visual or hammering test of the inspection; this paper focuses on the visual test. A camera is mounted at the end of the manipulator to acquire images of narrow spaces of the bridge, such as bearings, which a conventional UAV without a camera-equipped manipulator on its top cannot inspect in fine detail. For the visual test, it is desirable that the camera be above the body with sufficient distance between the camera and the body. The camera position in the inertial coordinate system is, of course, affected by the movement of the body, so we implement camera position control on the UAV that compensates for the body movement. Experimental results show that the proposed control reduces the position error of the camera compared with that of the body: the mean position error of the camera is 0.039 m, which is 51.4% of that of the body. Our world-first study makes it possible to acquire images of bridge bearings with a camera mounted at the end effector of an aerial manipulator fixed on a UAV.
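As a simplified kinematic illustration of why body motion must be compensated (not the paper's controller), the camera's inertial position equals the body position plus the attitude-rotated manipulator offset; the sketch below computes, for a yaw-only attitude, the body-frame offset that keeps the camera at a desired inertial point. All values are placeholders.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the body z-axis (yaw only, for brevity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_offset_command(p_body, yaw, p_cam_desired):
    """Manipulator tip offset (in the body frame) that places the camera at the
    desired inertial position despite the current body pose."""
    return rot_z(yaw).T @ (np.asarray(p_cam_desired, float) - np.asarray(p_body, float))

# If the body drifts 0.05 m in x, the commanded offset changes to cancel the drift.
print(camera_offset_command([0.05, 0.0, 1.0], np.radians(10), [0.0, 0.0, 1.3]))
```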

20.
Automatic 3D animation generation techniques are becoming increasingly popular in different areas related to computer graphics such as video games and animated movies. They help automate the filmmaking process even for non-professionals, with little or no intervention by animators and computer graphics programmers. Based on specified cinematographic principles and filming rules, they plan the sequence of virtual cameras that best renders a 3D scene. In this paper, we present an approach for automatic movie generation using linear temporal logic to express these filming and cinematography rules. We consider the filming of a 3D scene as a sequence of shots satisfying given filming rules, conveying constraints on the desirable configuration (position, orientation, and zoom) of virtual cameras. The selection of camera configurations at different points in time is understood as a camera plan, which is computed using a temporal-logic based planning system (TLPlan) to obtain a 3D movie. The camera planner is used within an automated planning application for generating 3D task demonstrations involving a teleoperated robot arm on the International Space Station (ISS). A typical task demonstration involves moving the robot arm from one configuration to another. The main challenge is to automatically plan the configurations of virtual cameras to film the arm in a manner that conveys the best awareness of the robot trajectory to the user. The robot trajectory is generated using a path-planner. The camera planner is then invoked to find a sequence of configurations of virtual cameras to film the trajectory.

