Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
杜姗姗  周祥 《计算机应用》2015,35(9):2678-2681
Tool calibration determines the transformation matrix of the tool frame relative to the robot end-effector frame. Since the traditional solution relies on manual teach-in to impose a point constraint, an automatic tool calibration method based on camera-space vision is proposed. A feature such as a circular ring marker is attached to the end-effector tool, and the camera is used to establish the relationship between the robot's 3D space and the camera's 2D image space. Automatic 3D visual positioning then imposes a point constraint on the center of the ring marker, and this visual positioning requires no tedious procedures such as camera calibration. The tool center point (TCP) is solved from the robot's forward kinematics and the camera-space point constraint. In repeated experiments the calibration error was less than 0.05 mm and the absolute positioning error was less than 0.1 mm, verifying that camera-space tool calibration is highly repeatable and reliable.
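A minimal numerical sketch of the point-constraint idea described above: several flange poses all bring the (unknown) tool tip to the same fixed point, so the tool offset and the fixed point can be recovered jointly by linear least squares. The pose data are placeholders, and this is not the paper's full camera-space pipeline.

```python
import numpy as np

def solve_tcp(flange_poses):
    """Solve the tool center point (TCP) offset from N flange poses that all
    touch the same fixed point: R_i @ x + t_i = p for every pose i.
    Unknowns are the tool offset x (flange frame) and the fixed point p
    (robot base frame); both are found by linear least squares."""
    A_rows, b_rows = [], []
    for R, t in flange_poses:
        # R_i x - p = -t_i   ->   [R_i  -I] [x; p] = -t_i
        A_rows.append(np.hstack([R, -np.eye(3)]))
        b_rows.append(-t)
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]          # tool offset x, fixed point p

# Example call with synthetic poses (placeholder data):
# tcp, point = solve_tcp([(R1, t1), (R2, t2), (R3, t3), (R4, t4)])
```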

2.
We describe configuration space visualization methods for mechanical design. The research challenge is to relate the configuration space geometry to the mechanical function of the parts. Our research addresses the fundamental design task of contact analysis. Contacts are the physical primitives that make mechanical systems out of collections of parts. Systems perform functions by transforming motions via part contacts. The shapes of the interacting parts impose constraints on their motions that largely determine the system function. Contact analysis involves deriving and analyzing these constraints. Designers use contact analysis to ensure correct function and to optimize performance. We illustrate contact analysis on the film advance of a movie camera.

3.
Many visualization applications benefit from displaying content on real-world objects rather than on a traditional display (e.g., a monitor). This type of visualization display is achieved by projecting precisely controlled illumination from multiple projectors onto the real-world colored objects. For such a task, the placement of the projectors is critical in assuring that the desired visualization is possible. Using ad hoc projector placement may cause some appearances to suffer from color shifting due to insufficient projector light radiance being exposed onto the physical surface. This leads to an incorrect appearance and ultimately to a false and potentially misleading visualization. In this paper, we present a framework to discover the optimal position and orientation of the projectors for such projection-based visualization displays. An optimal projector placement should be able to achieve the desired visualization with minimal projector light radiance. When determining optimal projector placement, object visibility, surface reflectance properties, and projector-surface distance and orientation need to be considered. We first formalize a theory for appearance editing image formation and construct a constrained linear system of equations that express when a desired novel appearance or visualization is possible given a geometric and surface reflectance model of the physical surface. Then, we show how to apply this constrained system in an adaptive search to efficiently discover the optimal projector placement which achieves the desired appearance. Constraints can be imposed on the maximum radiance allowed by the projectors and the projectors' placement to support specific goals of various visualization applications. We perform several real-world and simulated appearance edits and visualizations to demonstrate the improvement obtained by our discovered projector placement over ad hoc projector placement.
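A toy sketch of the kind of bounded linear system the abstract refers to: surface patches receive light from candidate projector pixels, and the radiances needed to reach a target appearance must stay within the projectors' limits. The mixing matrix, target values, and feasibility threshold are made-up placeholders, not the paper's model.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy bounded linear system: A @ r ≈ target, 0 <= r <= r_max, where A[i, j]
# is the contribution of projector pixel j to surface patch i (folding in
# surface reflectance and distance/orientation falloff).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(50, 20))       # placeholder contribution matrix
target = rng.uniform(0.2, 0.8, size=50)        # placeholder desired appearance
r_max = 1.0                                    # projector radiance limit

res = lsq_linear(A, target, bounds=(0.0, r_max))
feasible = res.cost < 1e-3                     # small residual -> appearance achievable
print("required radiances:", res.x.round(3), "feasible:", feasible)
```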

4.
To address the large positioning error of traditional multi-target visual positioning techniques, a new multi-target visual positioning technique is proposed for a VI-SLAM-based quadrotor UAV, and its principle is described. During positioning, the navigation and positioning system, the attitude and heading measurement system, and the airborne opto-electronic measurement platform work together: multi-target camera calibration and background subtraction on the locked target determine the target's position in the camera coordinate frame, which is then transformed into the aircraft body frame and finally into the earth frame, completing the localization. The BeiDou satellite navigation system and a recursive least-squares algorithm are introduced to reduce the positioning error. Comparative experiments show that, compared with traditional positioning techniques, the multi-target visual positioning technique of the VI-SLAM-based quadrotor UAV has smaller positioning error and wider applicability.
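A minimal sketch of the coordinate-chain step described above: a target position measured in the camera frame is mapped into the aircraft body frame and then into the earth frame by composing rigid transforms. The extrinsics and attitude values below are placeholder assumptions.

```python
import numpy as np

def rigid(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Placeholder extrinsics: camera -> body, and body -> earth (from attitude/position).
T_body_cam   = rigid(np.eye(3), np.array([0.10, 0.0, -0.05]))   # camera mounted on the body
T_earth_body = rigid(rot_z(np.deg2rad(30.0)), np.array([100.0, 50.0, -20.0]))

p_cam = np.array([2.0, 0.5, 10.0, 1.0])          # target in the camera frame (homogeneous)
p_earth = T_earth_body @ T_body_cam @ p_cam      # chained transform: camera -> body -> earth
print(p_earth[:3])
```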

5.
A positioning method is proposed in which monocular vision of artificial landmarks aids an Inertial Navigation System (INS). First, artificial landmarks are designed; each pre-placed landmark is photographed with the camera, the camera position and attitude at each shot are recorded, and a visual landmark database is built. During INS-based positioning, landmarks are extracted from the monocular camera images and matched against the corresponding entries in the database to estimate the current camera position and attitude, and a Kalman filter then effectively fuses the position estimated from visual matching with the INS. Experimental results show that the average error of traditional dead reckoning is 0.715 m, while the average error of the proposed integrated navigation method is 0.154 m; the method effectively improves the accuracy of inertial navigation positioning.
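A minimal Kalman-filter sketch of the fusion step: dead-reckoning position increments from the INS drive the prediction, and intermittent landmark-based visual fixes drive the update. The noise covariances are placeholder assumptions, and this 2-D linear filter is a simplification of the paper's system.

```python
import numpy as np

# Minimal 2-D position Kalman filter: predict with the INS increment,
# update with the (intermittent) landmark-based visual fix.
x = np.zeros(2)              # estimated position
P = np.eye(2) * 1.0          # estimate covariance
Q = np.eye(2) * 0.05         # INS (process) noise, placeholder
R = np.eye(2) * 0.10         # visual-fix (measurement) noise, placeholder

def predict(ins_delta):
    global x, P
    x = x + ins_delta        # dead-reckoning step from the INS
    P = P + Q

def update(visual_fix):
    global x, P
    K = P @ np.linalg.inv(P + R)     # Kalman gain (H = I: the fix measures position)
    x = x + K @ (visual_fix - x)
    P = (np.eye(2) - K) @ P

# predict(np.array([0.12, 0.01])); update(np.array([1.03, 0.20]))
```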

6.
This paper presents a novel approach for image-based visual servoing (IBVS) of a robotic system that considers constraints in the case when the camera intrinsic and extrinsic parameters are uncalibrated and the 3-D positions of the features are unknown. Based on model predictive control, the robotic system's input and output constraints, such as visibility constraints and actuator limitations, can be explicitly taken into account. Whereas most constrained IBVS controllers use the traditional image Jacobian matrix, the proposed IBVS scheme is developed using the depth-independent interaction matrix. The unknown parameters appear linearly in the prediction model and can be estimated effectively by the identification algorithm. In addition, the model predictive controller determines the optimal control input and updates the estimated parameters together with the prediction model. The proposed approach can simultaneously handle system constraints, unknown camera parameters, and unknown depth parameters, and both visual positioning and tracking tasks achieve the desired performance. Simulation results based on a 2-DOF planar robot manipulator for both the eye-in-hand and eye-to-hand camera configurations demonstrate the effectiveness of the proposed method.
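For context, a sketch of the classic unconstrained IBVS baseline that the abstract contrasts itself with: stack the point-feature interaction matrices (which require known depths) and command the camera twist through the pseudo-inverse. This is the textbook law, not the paper's depth-independent MPC scheme; gains and feature values are placeholders.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic point-feature interaction matrix for normalized image
    coordinates (x, y) and depth Z, mapping camera twist to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, targets, depths, gain=0.5):
    """Stack per-feature interaction matrices and compute the camera twist
    v = -gain * L^+ * e that drives the image error e toward zero."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e     # 6-vector: (vx, vy, vz, wx, wy, wz)

# v = ibvs_velocity([(0.1, 0.0), (-0.1, 0.05)], [(0.0, 0.0), (0.0, 0.0)], [1.0, 1.2])
```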

7.
Objective: Visual localization aims to locate moving objects and estimate their pose from easily acquired RGB images. Object occlusion and weakly textured regions, which are common in indoor scenes, easily lead to wrong estimates of target keypoints and severely degrade localization accuracy. To address this, an indoor localization system fusing active and passive sensing is proposed, combining the advantages of fixed-view and moving-view schemes to achieve accurate localization of moving targets in indoor scenes. Method: An object pose estimation method based on a planar prior is proposed: on top of a keypoint-detection monocular localization framework, a plane constraint is used for 3-DoF pose optimization, improving the localization stability of targets moving on indoor planes under a fixed viewpoint. A data-fusion localization system based on the unscented Kalman filter is designed to fuse the passive localization results from the fixed view with the active localization results from the moving view, improving the reliability of the target pose estimates. Results: The proposed active-passive fusion indoor visual localization system achieves an average localization accuracy of 2-3 cm on the iGibson simulation dataset, with 99% of errors within 10 cm; in real scenes the average accuracy is 3-4 cm, with more than 90% of errors within 10 cm, achieving centimeter-level accuracy. Conclusion: The proposed indoor visual localization system combines the advantages of passive and active localization methods, achieves high-accuracy target localization in indoor scenes at low equipment cost, and under occlusion, target...
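A minimal illustration of how a planar prior can reduce a 6-DoF estimate to 3 DoF: keep the in-plane translation and yaw, discard roll and pitch, and pin the height to the plane. This is only a projection sketch under that assumption, not the paper's optimization.

```python
import numpy as np

def project_to_plane(T, plane_height=0.0):
    """Reduce a 6-DoF pose estimate (4x4 matrix T) to the 3 DoF (x, y, yaw) of
    an object known to move on a horizontal plane: keep in-plane translation
    and yaw, discard roll/pitch, and pin the height to the plane."""
    yaw = np.arctan2(T[1, 0], T[0, 0])          # yaw extracted from the rotation
    c, s = np.cos(yaw), np.sin(yaw)
    T_out = np.eye(4)
    T_out[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T_out[:3, 3] = [T[0, 3], T[1, 3], plane_height]
    return T_out
```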

8.
Tight position tolerance is required for fastener holes in wing manufacturing, and an automated drilling system with high positioning accuracy is the key to achieving it. This paper seeks to determine allowable values of the variation sources that guarantee the hole position tolerance. The process of reference-hole positioning and the compensation of drilling positions are first explored and formalized for an automated drilling system integrated with an industrial camera. On this basis, a positioning variation model for automated drilling that accounts for positioning error measurement and compensation is built. Positioning variation synthesis subject to engineering constraints is then modeled mathematically based on the theory of mathematical statistics; the synthesis includes imperfect camera installation, non-ideal measurement conditions, equipment positioning error, and other sources. The positioning variation model and the associated synthesis strategy have been used to develop an automated drilling system for wing assembly. Experiments on the developed system show that the fastener holes' required position tolerance of 0.3 mm is not exceeded, a necessary condition for satisfactory drilling quality of the aircraft wing.
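A toy Monte Carlo stack-up of independent variation sources against the 0.3 mm tolerance mentioned above. The source list and standard deviations are placeholder assumptions; the paper uses an analytic statistical model rather than simulation.

```python
import numpy as np

# Toy Monte Carlo stack-up of independent positioning variation sources
# (camera installation, measurement noise, equipment positioning error, ...).
# Standard deviations are placeholder assumptions, in millimetres.
rng = np.random.default_rng(1)
sources_sigma = {"camera_install": 0.04, "measurement": 0.03, "equipment": 0.06}
n = 100_000

err_x = sum(rng.normal(0.0, s, n) for s in sources_sigma.values())
err_y = sum(rng.normal(0.0, s, n) for s in sources_sigma.values())
radial = np.hypot(err_x, err_y)

tolerance = 0.3   # required hole position tolerance (mm)
print(f"P(error <= {tolerance} mm) = {(radial <= tolerance).mean():.4f}")
```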

9.
刘虹  王文祥  李维诗 《计算机应用》2017,37(7):2057-2061
Traditional robot-mounted 3D scanning measurement depends on the robot's own positioning accuracy and therefore struggles to achieve high-accuracy measurement. To solve this, a photogrammetric method for accurately tracking and locating the 3D scanning probe is proposed. First, a probe-tracking system composed of multiple industrial cameras is built, and coded target markers are attached to the robot's scanning probe. The cameras are then calibrated with high accuracy to obtain their intrinsic and extrinsic parameters. Next, the cameras sample synchronously, the markers in the images are matched according to their codes, and the projection matrices are computed. Finally, the 3D coordinates of the coded markers are solved, achieving tracking and localization of the 3D scanning probe. Experimental results show an average marker localization error of 0.293 mm in distance and 0.136° in angle, within a reasonable accuracy range. The photogrammetric method improves the positioning accuracy of the scanning probe and thus enables high-accuracy measurement.
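A minimal two-camera triangulation sketch of the final step, assuming calibrated projection matrices and already-matched coded-target centres. All numbers are placeholders; the paper's multi-camera matching and coding logic is not reproduced.

```python
import numpy as np
import cv2

def triangulate_targets(P1, P2, pts1, pts2):
    """Triangulate matched coded-target centres from two calibrated cameras.
    P1, P2: 3x4 projection matrices (K [R | t]); pts1, pts2: Nx2 pixel points."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(float), pts2.T.astype(float))
    return (X_h[:3] / X_h[3]).T        # Nx3 points in the world frame

# Placeholder example: reference camera and a camera translated 0.5 m along x.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
pts1 = np.array([[700.0, 500.0]])
pts2 = np.array([[650.0, 500.0]])
print(triangulate_targets(P1, P2, pts1, pts2))
```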

10.
《Advanced Robotics》2013,27(4):463-480
This paper addresses the problem of positioning a robot camera with respect to a fixed object in space by means of visual information. The ultimate goal of positioning is to achieve and/or to maintain a given spatial configuration (position and orientation) with respect to the objects in the environment so as to best execute the task at hand. Positioning involves the control of 6 d.o.f. in space, which are conveniently referred to as the parameters of the transformation between a camera-centered frame and an object-centered frame. In this paper, we address the positioning problem in terms of these d.o.f., regardless of the specific robot configuration used to move the camera (e.g. eye-in-hand setup, navigation platform with a robot head mounted on it, etc.). The domain of application ranges from navigation tasks (e.g. localization, docking, steering by means of natural landmarks) to grasping and manipulation tasks, and autonomous/intelligent tasks based on active visual behaviors such as reading a book or reaching and commanding a control panel. The solution proposed in this work is to exploit the changes in shape of contours in order to plan and control the positioning process. In order to simplify and speed up the calculations, an affine camera model is used to describe the changes of shape of the contours in the image plane, and an affine visual servoing (AVS) approach is derived. The choice of using two-dimensional (2D) features for control greatly enhances the robustness of the positioning process, in that the effect of robot kinematics and camera modeling errors is reduced. Among the possible 2D features, visual contours enable us to achieve robust visual estimates while keeping the dimensionality of the control equations low; the same would not be possible using different features such as points or lines. Finally, a feedforward control strategy complements the feedback loop, thereby enhancing the speed and the overall performance of the algorithm. Although a stability analysis of the control scheme has not yet been performed, good simulation results with stable behavior, provided that proper tuning of control parameters and gains has been done, suggest that the approach might be successfully applied in real-world cases.

11.
This paper proposes a novel multi-modal three-dimensional (3D) laser scanning system that combines high-accuracy 3D laser imaging, very high-resolution perspective color projection, and on-site geometric calibration of the intrinsic and extrinsic parameters. Motion compensation directly from the range measurements, using ICP and 6-DOF tracking of a self-built model, eliminates the need for stable mechanical structures and external positioning sensors. We show that scanner performance, modeling, and visualization are intimately linked and must be considered as an integral part of the modeling chain. This is particularly important in the field of heritage, where the acquisition must adapt to the environment. Equations and charts are presented to compute the optimum color camera and laser scanner configuration for a given 3D modeling application in terms of camera settings such as optimum lens aperture, focal length, optimum range, and total range depth. These equations are general and can be used for most 3D acquisition systems, including time-of-flight laser scanners. Experimental results are presented to demonstrate the validity of the approach.

12.
In this article, we present a camera control method in which the selection of an optimal camera position and the modification of camera configurations are accomplished according to changes in the surroundings. For the autonomous selection and modification of camera configurations during tasks, we consider the camera's visibility and the manipulator's manipulability. The visibility constraint guarantees that the whole of a target object can be "viewed" with no occlusions by the surroundings, and the manipulability constraint guarantees avoidance of the singular position of the manipulator and rapid modification of the camera position. By considering visibility and manipulability constraints simultaneously, we determine the optimal camera position and modify the camera configuration such that visual information for the target object can be obtained continuously during tasks. The results of simulations and experiments show that the active camera system with an eye-in-hand configuration can modify its configuration autonomously according to the motion of the surroundings by applying the proposed camera control method.

13.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive the dynamic workspace and locate targets in unknown environments, enabling the manipulator to grasp and retrieve target objects autonomously. An RGB-D camera captures color and depth images of the scene and provides 3D point clouds of arbitrary targets in view, helping the collaborative robot perceive its surroundings. To obtain the transformation between the grasping robot and RGB-D camera coordinate frames, a robot hand-eye calibration method based on the yolov3 object detection network is proposed. A 3D-printed sphere clamped at the end of the manipulator serves as the target ball; an improved yolov3 detector locates the sphere center in real time, the 3D position of the manipulator end center in the camera frame is computed, and singular value decomposition yields the least-squares solution of the robot-camera transformation matrix. Experiments on a 6-DoF UR5 manipulator with an Intel RealSense D415 depth camera show that the calibration requires no auxiliary equipment, the transformed spatial point position error is within 2 mm, and the method satisfies the grasping requirements of typical visual-servoing intelligent robots.
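A sketch of the SVD least-squares registration step mentioned in the abstract: given the sphere-centre positions expressed in both the camera frame and the robot frame, recover the rigid transform between them with the standard Kabsch-type solution. The yolov3 detection step is omitted, so this is only the geometric core, not the paper's full pipeline.

```python
import numpy as np

def rigid_transform_svd(pts_cam, pts_robot):
    """Least-squares rigid transform (R, t) with pts_robot ≈ R @ pts_cam + t,
    solved via SVD of the cross-covariance (Kabsch method). Inputs are Nx3
    arrays of corresponding sphere-centre positions in the two frames."""
    c_cam, c_rob = pts_cam.mean(axis=0), pts_robot.mean(axis=0)
    H = (pts_cam - c_cam).T @ (pts_robot - c_rob)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_rob - R @ c_cam
    return R, t

# R, t = rigid_transform_svd(centers_in_camera_frame, centers_in_robot_frame)
```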

14.
王年  唐俊  韦穗  范益政  梁栋 《机器人》2006,28(2):136-143
The image of the circular points of the plane containing a 1D object undergoing translational motion is derived, together with the constraints it imposes on the camera intrinsic parameters and a numerical method for solving the constraint equations, from which the camera intrinsics are obtained. Further, by recovering the coordinates of the spatial points in the camera frame, the relative orientation between the two cameras of a binocular rig, i.e., the camera extrinsic parameters, is solved. For general rigid motion of the 1D object, a method for converting it to translational motion is given. Simulation and real-image experiments show that the method achieves high accuracy and has practical value.

15.
The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems.

16.
We present a method for detecting motion regions in video sequences observed by a moving camera in the presence of a strong parallax due to static 3D structures. The proposed method classifies each image pixel into planar background, parallax, or motion regions by sequentially applying 2D planar homographies, the epipolar constraint, and a novel geometric constraint called the "structure consistency constraint." The structure consistency constraint, being the main contribution of this paper, is derived from the relative camera poses in three consecutive frames and is implemented within the "Plane + Parallax" framework. Unlike previous planar-parallax constraints proposed in the literature, the structure consistency constraint does not require the reference plane to be constant across multiple views. It directly measures the inconsistency between the projective structures from the same point under camera motion and reference plane change. The structure consistency constraint is capable of detecting moving objects followed by a moving camera in the same direction, a so-called degenerate configuration where the epipolar constraint fails. We demonstrate the effectiveness and robustness of our method with experimental results of real-world video sequences.
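A minimal sketch of the first two tests in the sequence above, applied to tracked points rather than per-pixel: a dominant-plane homography residual followed by the point-to-epipolar-line distance. The structure consistency constraint is the paper's contribution and is not reproduced; thresholds and point formats are placeholder assumptions.

```python
import numpy as np
import cv2

def classify_points(pts_prev, pts_cur, th_h=3.0, th_e=3.0):
    """Label tracked points (Nx2 float arrays, N >= 8) as planar background,
    parallax, or candidate motion using (1) the residual of a dominant-plane
    homography and (2) the distance to the epipolar line. Thresholds in pixels
    are placeholders."""
    H, _ = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, th_h)
    F, _ = cv2.findFundamentalMat(pts_prev, pts_cur, cv2.FM_RANSAC, th_e)

    warped = cv2.perspectiveTransform(pts_prev.reshape(-1, 1, 2), H).reshape(-1, 2)
    h_res = np.linalg.norm(warped - pts_cur, axis=1)          # homography residual

    ones = np.ones((len(pts_prev), 1))
    x1 = np.hstack([pts_prev, ones])
    x2 = np.hstack([pts_cur, ones])
    lines = (F @ x1.T).T                                      # epipolar lines in image 2
    e_res = np.abs(np.sum(x2 * lines, axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

    labels = np.full(len(pts_prev), "planar", dtype=object)
    labels[h_res > th_h] = "parallax"                         # off-plane but rigid scene
    labels[(h_res > th_h) & (e_res > th_e)] = "motion"        # violates epipolar geometry
    return labels
```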

17.
Automatic 3D animation generation techniques are becoming increasingly popular in areas of computer graphics such as video games and animated movies. They help non-professionals automate the filmmaking process with little or no intervention of animators and computer graphics programmers. Based on specified cinematographic principles and filming rules, they plan the sequence of virtual cameras that best renders a 3D scene. In this paper, we present an approach for automatic movie generation using linear temporal logic to express these filming and cinematography rules. We consider the filming of a 3D scene as a sequence of shots satisfying given filming rules, conveying constraints on the desirable configuration (position, orientation, and zoom) of virtual cameras. The selection of camera configurations at different points in time is understood as a camera plan, which is computed using a temporal-logic-based planning system (TLPlan) to obtain a 3D movie. The camera planner is used within an automated planning application for generating demonstrations of 3D tasks involving a teleoperated robot arm on the International Space Station (ISS). A typical task demonstration involves moving the robot arm from one configuration to another. The main challenge is to automatically plan the configurations of virtual cameras to film the arm in a manner that conveys the best awareness of the robot trajectory to the user. The robot trajectory is generated using a path planner. The camera planner is then invoked to find a sequence of configurations of virtual cameras to film the trajectory.

18.
This article addresses the visual servoing of a rigid robotic manipulator equipped with a binocular vision system in an eye-to-hand configuration. The control goal is to move the robot end-effector precisely to a visually determined target position without knowing the precise camera model. Many vision-based robotic positioning systems have been successfully implemented and validated by experimental results. Nevertheless, this research aims at providing stability analysis for a class of robotic set-point control systems employing image-based feedback laws. Specifically, by exploring the epipolar geometry of the binocular vision system, a binocular visual constraint is found that assists in establishing the stability property of the feedback system. Any three-degree-of-freedom positioning task, if it satisfies appropriate conditions with the image-based encoding approach, can be encoded in such a way that driving the encoded error to zero implies that the original task has been accomplished with precision. The corresponding image-based control law is proposed to drive the encoded error to zero. The overall closed-loop system is exponentially stable provided that the binocular model imprecision is small.

19.
This paper presents an extended version of Navidget. Navidget is a new interaction technique for camera positioning in 3D environments. This technique derives from the point-of-interest (POI) approaches where the endpoint of a trajectory is selected for smooth camera motions. Unlike the existing POI techniques, Navidget does not attempt to automatically estimate where and how the user wants to move. Instead, it provides good feedback and control for fast and easy interactive camera positioning. Navidget can also be useful for distant inspection when used with a preview window. This new 3D user interface is totally based on 2D inputs. As a result, it is appropriate for a wide variety of visualization systems, from small handheld devices to large interactive displays. A user study on TabletPC shows that the usability of Navidget is very good for both expert and novice users. This new technique is more appropriate than the conventional 3D viewer interfaces in numerous 3D camera positioning tasks. Apart from these tasks, the Navidget approach can be useful for further purposes such as collaborative work and animation.

20.
张涛  马磊  梅玲玉 《计算机应用》2017,37(9):2491-2495
For the autonomous localization of wheeled warehouse logistics robots, an indoor localization method fusing visual beacons and odometry data is proposed. First, a camera model is built to solve the rotation and translation between beacon and camera, yielding the localization information. Then, because beacon-based localization has a low update rate and discontinuous position information, a variance-weighted angle fusion method is proposed, based on an analysis of the angular error characteristics of the gyroscope and odometry. Finally, an odometry error model is designed and a Kalman filter fuses the odometry and visual localization information to compensate for the shortcomings of each single sensor. The algorithm was implemented and tested on a differential-drive wheeled mobile robot. Experimental results show that the method increases the pose update rate while reducing angular and position errors, effectively improving localization accuracy, with repeated position error below 4 cm and heading error below 2°. The method is also simple to implement and highly practical.
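A minimal sketch of the variance-weighted angle fusion step described above: combine two heading estimates with inverse-variance weights, unwrapping one onto the other so the +/- pi discontinuity does not corrupt the mean. The variances are placeholder assumptions, not values from the paper.

```python
import numpy as np

def fuse_heading(theta_gyro, var_gyro, theta_odo, var_odo):
    """Inverse-variance weighted fusion of two heading estimates (radians).
    The odometry angle is first unwrapped onto the gyro angle so the weighted
    mean is not corrupted by the +/- pi discontinuity."""
    theta_odo = theta_gyro + np.arctan2(np.sin(theta_odo - theta_gyro),
                                        np.cos(theta_odo - theta_gyro))
    w_g, w_o = 1.0 / var_gyro, 1.0 / var_odo
    return (w_g * theta_gyro + w_o * theta_odo) / (w_g + w_o)

# Example with placeholder variances:
# theta = fuse_heading(np.deg2rad(179.0), 0.01, np.deg2rad(-179.5), 0.02)
```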
