Related Articles
20 related articles found.
1.
This article describes a three-dimensional artificial vision system for robotic applications using an ultrasonic sensor array. The array is placed on the robot gripper so that it is possible to detect the presence of an object, to direct the robot tool towards it, and to locate the object's position. It provides visual information about the object's surface by means of surface scanning and permits reconstruction of the object's shape. The developed system uses an approximation of the ultrasonic radiation and reception beam shape to calculate the first contact points with the object's surface. In addition, the positions of the array's sensors have been selected so as to provide the sensor head with other useful capabilities, such as edge detection and edge tracking. Furthermore, the article shows the structure of the sensor head designed to avoid successive rebounds between the head and the object surface and to eliminate mechanical vibrations among the sensors.

2.
We consider the motion control of a space robot composed of a body and a telescopic manipulator arm. The robot is in free passive flight: its linear momentum and its angular momentum about the center of mass are both zero. Motion of the manipulator arm therefore causes a corresponding motion of the robot body (a change in the position of the body's center of mass and a rotation). Unlike earlier results, we show that the robot gripper can be moved from an arbitrary initial position to an arbitrary final position inside the operating area and, in addition, that the required (most convenient for operations) angle between the manipulator arm and the robot body can be obtained in the final position.

3.
In a human–robot collaborative manufacturing application where a work object can be placed in an arbitrary position, the actual position of the work object must be calibrated. This paper presents an approach for automatic work-object calibration in flexible robotic systems. The approach consists of two modules: a global positioning module based on fixed cameras mounted around the robotic workspace, and a local positioning module based on a camera mounted on the robot arm. The aim of the global positioning is to detect the work object in the working area and roughly estimate its position, whereas the aim of the local positioning is to define an object frame according to the 3D position and orientation of the work object with higher accuracy. Coded visual markers are utilized for object detection and localization. For each object, several markers are used to increase the robustness and accuracy of the localization and calibration procedure. The approach can be used in robotic welding or assembly applications.
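The marker-based localization step described above can be prototyped with off-the-shelf tools. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV with the contrib ArUco module (pre-4.7 API), a calibrated camera (matrix K, distortion dist), and a square marker of known side length; the detected marker corners are passed to solvePnP to recover the marker pose in the camera frame.

# Minimal sketch of marker-based object localization. Assumptions: OpenCV with
# the contrib ArUco module (pre-4.7 API); calibrated camera K, dist; a 4x4
# ArUco marker of known side length. Image file name is hypothetical.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
dist = np.zeros(5)                                            # assumed no distortion
side = 0.05                                                   # marker side length [m]

# 3D corner coordinates of the marker in its own frame (z = 0 plane),
# ordered top-left, top-right, bottom-right, bottom-left as ArUco reports them
obj_pts = np.array([[-side/2,  side/2, 0],
                    [ side/2,  side/2, 0],
                    [ side/2, -side/2, 0],
                    [-side/2, -side/2, 0]], dtype=np.float32)

img = cv2.imread("workpiece.png")                             # hypothetical image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    # Pose of the first detected marker in the camera frame
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
    R, _ = cv2.Rodrigues(rvec)                                # 3x3 rotation matrix
    print("marker id", ids[0], "position in camera frame:", tvec.ravel())

In the paper's setting several such markers per object are combined, which averages out individual pose errors before the object frame is defined.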

4.
This paper presents a simple grasp planning method for a multi-fingered hand. Its purpose is to compute a context-independent and dense set or list of grasps, instead of just a small set of grasps regarded as optimal with respect to a given criterion. By context-independent, we mean that only the robot hand and the object to grasp are considered; the environment and the position of the robot base with respect to the object are considered at a later stage. Such a dense set can be computed offline and then used to let the robot quickly choose a grasp adapted to a specific situation. This can be useful for manipulation planning of pick-and-place tasks. Another application is human–robot interaction when the human and robot have to hand over objects to each other. If the human and robot have to work together with a predefined set of objects, grasp lists can be employed to allow fast interaction.

The proposed method uses a dense sampling of the possible hand approaches based on a simple but efficient shape feature. As this leads to many finger inverse kinematics tests, hierarchical data structures are employed to reduce the computation times. The data structures allow a fast determination of the points where the fingers can realize a contact with the object surface. The grasps are ranked according to a grasp quality criterion, so the robot first parses the list from best- to worst-quality grasps until it finds a grasp that is valid for the particular situation.

5.
Image-based effector servoing is a process of perception–action cycles for handling a robot effector under continual visual feedback. This paper applies visual servoing mechanisms not only for handling objects, but also for camera calibration and object inspection. A 6-DOF manipulator and a stereo camera head are mounted on separate platforms and are steered independently. In a first phase (calibration phase), camera features such as the optical axes and the fields of sharp view are determined. In the second phase (inspection phase), the robot hand carries an object into the field of view of one camera, approaches the object along the optical axis towards the camera, rotates the object to reach an optimal view, and finally the object shape is inspected in detail. In the third phase (assembly phase), the system localizes a board containing holes of different shapes, determines the hole that best fits the object shape, then approaches and arranges the object appropriately. The final object insertion is based on haptic sensors but is not treated in this paper. At present, the robot system can handle cylindrical and cuboid pegs. For handling other object categories the system can be extended with more sophisticated strategies in the inspection and/or assembly phase.
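As a point of reference for the perception–action cycle described here, the classical image-based visual servoing law computes a camera velocity from image-feature errors. The sketch below is a generic textbook IBVS step, not the paper's controller: it assumes normalized image coordinates of tracked point features, rough depth estimates Z, and a gain lam, and returns the 6-vector camera twist v = -lam * pinv(L) * (s - s_des).

# Generic image-based visual servoing step for point features (textbook
# formulation, not the paper's controller). Assumes normalized image
# coordinates and rough depth estimates for each feature point.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction (image Jacobian) matrix of one normalized point feature."""
    return np.array([[-1/Z,    0, x/Z,     x*y, -(1 + x*x),  y],
                     [   0, -1/Z, y/Z, 1 + y*y,       -x*y, -x]])

def ibvs_step(s, s_des, Z, lam=0.5):
    """Camera twist [vx vy vz wx wy wz] driving features s toward s_des."""
    L = np.vstack([interaction_matrix(x, y, z) for (x, y), z in zip(s, Z)])
    err = (np.asarray(s) - np.asarray(s_des)).reshape(-1)
    return -lam * np.linalg.pinv(L) @ err

# Example: four image points tracked on the object vs. a desired square pattern
s     = [(0.10, 0.12), (-0.08, 0.11), (-0.09, -0.10), (0.11, -0.09)]
s_des = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
Z     = [0.6, 0.6, 0.6, 0.6]          # assumed depths [m]
print(ibvs_step(s, s_des, Z))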

6.
Currently, the results of a robot calibration procedure are generally expressed in terms of the position and orientation error for a set of locations and orientations obtained from the previously identified kinematic parameters. In this work, a technique is presented to evaluate the calibration uncertainty for a robot arm calibrated using the circle point analysis method. The method developed is based on the probability-distribution propagation calculation recommended by the Guide to the Expression of Uncertainty in Measurement (GUM) and on the Monte Carlo method. It makes it possible to calculate the uncertainty in the identification of each individual robot parameter, and thus to estimate the robot positioning uncertainty due to the calibration uncertainty, rather than relying on a single set of locations and orientations previously defined for a unique set of identified parameters. Additionally, this technique allows the best possible conditions to be established for the data capture test used to identify the parameters, and determines which of them have the least possible calibration uncertainty. This determination is based on the variables involved in the data capture process by propagating their influence up to the final robot accuracy.
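The GUM-style Monte Carlo propagation mentioned above can be illustrated with a toy example. The sketch below is a simplified illustration, not the authors' circle-point model: it perturbs measured end-point data with an assumed measurement uncertainty, re-identifies a single kinematic parameter (a link length) in each trial, and reports the resulting spread as the calibration uncertainty of that parameter.

# Toy Monte Carlo propagation of measurement uncertainty into an identified
# kinematic parameter (here a single link length), in the spirit of the GUM
# supplement. Simplified illustration only; not the authors' model or data.
import numpy as np

rng = np.random.default_rng(0)
a_true  = 0.500                          # true link length [m] (assumed)
sigma_m = 0.2e-3                         # measurement std of the tracker [m] (assumed)
theta   = np.linspace(0.0, np.pi/2, 20)  # joint angles used in the capture test

# Nominal measured end-point positions for a 1-DOF link: p = a * [cos(q), sin(q)]
p_nom = a_true * np.stack([np.cos(theta), np.sin(theta)], axis=1)

def identify_length(p):
    """Least-squares estimate of the link length from measured points."""
    return np.mean(np.linalg.norm(p, axis=1))

trials = 10000
a_hat = np.empty(trials)
for k in range(trials):
    p_meas = p_nom + rng.normal(0.0, sigma_m, p_nom.shape)   # perturb per uncertainty
    a_hat[k] = identify_length(p_meas)

print("identified length: %.6f m, calibration uncertainty (1 sigma): %.2e m"
      % (a_hat.mean(), a_hat.std(ddof=1)))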

7.
Sensors mounted on the dexterous end of a robot can be used for feedback control or calibration. When a sensor is mounted on a robot, it becomes necessary to find the pose (orientation and position) of the sensor relative to the robot; this is the sensor registration problem. Many researchers have provided closed-form solutions to the sensor registration problem; however, the published solutions apply only to sensors that can measure a complete pose (three positions and three orientations). Many sensors, however, can provide only position information; they cannot measure the orientation of an object. This article provides a closed-form solution to the sensor registration problem applicable when: (1) the sensor can provide only position information and (2) the robot can move along and rotate about straight lines. © 1994 John Wiley & Sons, Inc.

8.
A visuo-haptic augmented reality system is presented for object manipulation and task learning from human demonstration. The proposed system consists of a desktop augmented reality setup where users operate a haptic device for object interaction. Users of the haptic device are not co-located with the environment where real objects are present. A three degrees of freedom haptic device, providing force feedback, is adopted for object interaction by pushing, selection, translation and rotation. The system also supports physics-based animation of rigid bodies. Virtual objects are simulated in a physically plausible manner and seem to coexist with real objects in the augmented reality space. Algorithms for calibration, object recognition, registration and haptic rendering have been developed. Automatic model-based object recognition and registration are performed from 3D range data acquired by a moving laser scanner mounted on a robot arm. Several experiments have been performed to evaluate the augmented reality system in both single-user and collaborative tasks. Moreover, the potential of the system for programming robot manipulation tasks by demonstration is investigated. Experiments show that a precedence graph, encoding the sequential structure of the task, can be successfully extracted from multiple user demonstrations and that the learned task can be executed by a robot system.

9.
This paper describes an industrial robot calibration algorithm called the virtual closed kinematic chain method. Current robot kinematic calibration methods use measurements of the position and orientation of the end effector, and the accuracy of these measurements is limited by the resolution of the measuring equipment. In the proposed method, a laser pointer tool attached to the robot's end effector aims at a constant but unknown location on a fixed object, effectively creating a virtual 7-DOF closed kinematic chain. As a result, small variations in the position and orientation of the end effector are magnified on the distant object. Hence, the resolution of the observations is improved, increasing the accuracy of the joint angle measurements required to calibrate the robot. The method is verified using both simulation and real experiments. It is also shown in simulation that the method can be automated by a feedback system that can be implemented in real time. The accuracy of the robot after the proposed calibration procedure is measured by aiming at an arbitrary fixed point and computing the mean and standard deviation of the radius of spread of the projected points. The mean and standard deviation of the radius of spread were improved from 5.64 and 1.89 mm to 1.05 and 0.587 mm, respectively.
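The accuracy metric quoted above (mean and standard deviation of the radius of spread of the projected laser points) is straightforward to compute. The following sketch assumes an array of 2D laser-spot coordinates measured on the target plane; the point values and variable names are illustrative placeholders, not the paper's data.

# Mean and standard deviation of the radius of spread of projected laser
# points about their centroid -- the accuracy metric used above. The point
# set below is an illustrative placeholder, not the paper's data.
import numpy as np

def radius_spread(points):
    """points: (N, 2) laser-spot coordinates on the target plane [mm]."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    radii = np.linalg.norm(points - centroid, axis=1)
    return radii.mean(), radii.std(ddof=1)

spots_after_calibration = np.array([[0.8, 0.3], [-0.9, 0.5], [0.2, -1.1],
                                    [1.0, 0.9], [-0.6, -0.7]])   # hypothetical [mm]
mean_r, std_r = radius_spread(spots_after_calibration)
print("radius of spread: mean %.2f mm, std %.2f mm" % (mean_r, std_r))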

10.
We present an approach for controlling robotic interactions with objects using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, based on a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. The recognition and identification of the grasp position for an unknown input object (selected from the family of recognizable objects) occurs by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. In the second step, the images synthesized during the morph are used to guide the eye-in-hand system and execute the grasp. The proposed method requires minimal calibration of the system. Furthermore, it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.

11.
《Advanced Robotics》2013,27(2):157-171
In this paper we propose a new research topic, the Intelligent Assisting System (IAS). Using this system, we approach the identification and analysis of human manipulation skills to be used for intelligent human-operator assistance. A manipulation skill database enables the IAS to perform complex manipulations at the motion control level. Through repeated interaction with the operator for unknown environment states, the manipulation skills in the database can be extended on-line. A model for manipulation skill based on the grip transformation matrix is proposed, which describes the transformation between the object trajectory and the contact conditions. The dynamic behaviour of the grip transformation is regarded as the essence of the performed manipulation skill. We describe the experimental set-up of a skill acquisition and transfer system as a first approach to the IAS. A simple manipulation example shows the feasibility of the proposed manipulation skill model. Furthermore, this paper derives a control algorithm that realizes object task trajectories, and its feasibility is shown by simulation.

12.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, which can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motion for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as control goals to drive a robot to perform manipulation tasks.
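To make the representation concrete, the sketch below hand-rolls a two-node slice of such a network with numpy: an object-pair variable and an action variable linked by a conditional probability table, with a noisy action observation used to update the belief over the pair. The structure, labels and probabilities are invented for illustration and are not the network learned in the paper.

# Toy two-node Bayesian-network slice relating an object pair to the action
# performed with it. Labels and probabilities are invented for illustration;
# they are not the network learned in the paper.
import numpy as np

pairs   = ["cup-kettle", "bowl-spoon"]          # hypothetical object pairs
actions = ["pour", "stir"]                      # hypothetical actions

prior   = np.array([0.5, 0.5])                  # P(pair)
# P(action | pair): rows = pair, cols = action
cpt     = np.array([[0.9, 0.1],                 # cup-kettle -> mostly "pour"
                    [0.2, 0.8]])                # bowl-spoon -> mostly "stir"
# P(observed action | true action): a noisy action recognizer
obs_lik = np.array([[0.8, 0.2],
                    [0.3, 0.7]])

def posterior_pair(observed_action):
    """P(pair | observed action), marginalizing over the true action."""
    j = actions.index(observed_action)
    like = cpt @ obs_lik[:, j]                  # P(obs | pair)
    post = prior * like
    return post / post.sum()

print(dict(zip(pairs, posterior_pair("pour").round(3))))

Observing a "pour"-like motion raises the posterior of the pair whose affordance predicts pouring, which is how the network can reinforce both object and action recognition at once.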

13.
A Survey of Continuum Robot Research
孙立宁 胡海燕 李满天 《机器人》(Robot), 2010, 32(5): 688-694
Continuum robots are a new class of bio-inspired robots that adopt an "invertebrate", flexible structure with no discrete joints or rigid links. They offer excellent bending performance and adapt well both to unstructured environments crowded with obstacles and to confined workspaces. Like conventional robots they can carry an end effector for grasping and clamping, but they can also use the entire robot body to grasp objects. This paper analyzes the bio-inspired principles and structural characteristics of continuum robots, reviews the current state of research, and discusses potential application areas and directions for future work.

14.
An algorithm is presented for using a robot system with a single camera to position a slender object in three-dimensional space for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.

15.
This article presents object handling control between two-wheel robot manipulators, and a two-wheel robot and a human operator. The two-wheel robot has been built for serving humans in the indoor environment. It has two wheels to maintain balance and is able to make contact with a human operator via an object. A position-based impedance force control method is applied to maintain stable object-handling tasks. As the human operator pushes and pulls the object, the robot also reacts to maintain contact with the object by pulling and pushing against the object to regulate a specified force. Master and slave configuration of two-wheel robots is formed for handling an object, where the master robot or a human leads the slave robot equipped with a force sensor. Switching control from position to force or vice versa is presented. Experimental studies are performed to evaluate the feasibility of the object-handling task between two-wheel mobile robots, and the robot and a human operator.

16.
In this paper we propose a novel approach for intuitive and natural physical human–robot interaction in cooperative tasks. Through initial learning by demonstration, robot behavior naturally evolves into a cooperative task, where the human co-worker is allowed to modify both the spatial course of motion as well as the speed of execution at any stage. The main feature of the proposed adaptation scheme is that the robot adjusts its stiffness in path operational space, defined with a Frenet–Serret frame. Furthermore, the required dynamic capabilities of the robot are obtained by decoupling the robot dynamics in operational space, which is attached to the desired trajectory. Speed-scaled dynamic motion primitives are applied for the underlying task representation. The combination allows a human co-worker in a cooperative task to be less precise in parts of the task that require high precision, as the precision aspect is learned and provided by the robot. The user can also freely change the speed and/or the trajectory by simply applying force to the robot. The proposed scheme was experimentally validated on three illustrative tasks. The first task demonstrates novel two-stage learning by demonstration, where the spatial part of the trajectory is demonstrated independently from the velocity part. The second task shows how parts of the trajectory can be rapidly and significantly changed in one execution. The final experiment shows two Kuka LWR-4 robots in a bi-manual setting cooperating with a human while carrying an object.
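The path frame used for the stiffness adaptation can be computed directly from a sampled trajectory. The sketch below is a generic numerical Frenet-Serret frame construction (tangent, normal, binormal) from discrete waypoints, under the usual assumption of non-vanishing curvature; it is not the authors' implementation.

# Numerical Frenet-Serret frame (tangent, normal, binormal) along a sampled
# trajectory, as used to define the path operational space. Generic sketch;
# assumes the curvature does not vanish along the path.
import numpy as np

def frenet_frames(path):
    """path: (N, 3) waypoints. Returns unit tangent, normal, binormal, each (N, 3)."""
    d1 = np.gradient(path, axis=0)                       # first derivative
    t  = d1 / np.linalg.norm(d1, axis=1, keepdims=True)  # unit tangent
    dt = np.gradient(t, axis=0)                          # derivative of the tangent
    n  = dt / np.linalg.norm(dt, axis=1, keepdims=True)  # unit normal
    b  = np.cross(t, n)                                  # binormal completes the frame
    return t, n, b

# Example: a helical demonstration path
s = np.linspace(0, 4 * np.pi, 200)
path = np.stack([np.cos(s), np.sin(s), 0.1 * s], axis=1)
t, n, b = frenet_frames(path)
print(t[0], n[0], b[0])

Stiffness can then be set per axis of this moving frame, e.g. stiff normal to the path and compliant along the tangent, which is what lets the human push the robot along the trajectory while the robot keeps it on course.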

17.
The sensory and motor capacities of the human hand are reviewed in the context of providing a set of performance characteristics against which prosthetic and dextrous robot hands can be evaluated. The sensors involved in processing tactile, thermal, and proprioceptive (force and movement) information are described, together with details on their spatial densities, sensitivity, and resolution. The wealth of data on the human hand's sensory capacities is not matched by an equivalent database on motor performance. Attempts at quantifying manual dexterity have met with formidable technological difficulties due to the conditions under which many highly trained manual skills are performed. Limitations in technology have affected not only the quantifying of human manual performance but also the development of prosthetic and robotic hands. Most prosthetic hands in use at present are simple grasping devices, and imparting a "natural" sense of touch to these hands remains a challenge. Several dextrous robot hands exist as research tools and even though some of these systems can outperform their human counterparts in the motor domain, they are still very limited as sensory processing systems. It is in this latter area that information from studies of human grasping and processing of object information may make the greatest contribution.

18.
《Advanced Robotics》2013,27(6):737-762
Recent advances in hardware technology and the state of the art in mobile robot and artificial intelligence research can be employed to develop autonomous and distributed monitoring systems. A mobile service robot requires perception of its present position to co-exist with humans and support them effectively in populated environments. To realize this, a robot needs to keep track of relevant changes in the environment. This paper proposes localization of a mobile robot using images recognized by distributed intelligent networked devices in an intelligent space (ISpace) in order to achieve these goals. The scheme combines data from the observed position, using dead-reckoning sensors, with the estimated position, using images of moving objects such as a walking human captured by a camera system, to determine the location of a mobile robot. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the ISpace. Using the a priori known path of a moving object and a perspective camera model, the geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position are derived. The proposed method utilizes the error between the observed and estimated image coordinates to localize the mobile robot, and a Kalman filtering scheme is used to estimate the mobile robot location. The approach is applied to a mobile robot in ISpace to show the reduction of uncertainty in determining the robot's location, and its performance is verified by computer simulation and experiment.
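The fusion step described above follows the standard Kalman-filter pattern: predict the robot position from dead-reckoning, then correct it with the position estimated from the camera images. The sketch below is a generic linear Kalman filter on a 2D position state with invented noise levels and data; it is not the ISpace implementation.

# Generic linear Kalman filter fusing dead-reckoning odometry (prediction)
# with a camera-derived position estimate (correction). Noise levels and
# data are invented for illustration; this is not the ISpace implementation.
import numpy as np

x = np.zeros(2)                 # state: robot position [x, y]
P = np.eye(2) * 0.1             # state covariance
Q = np.eye(2) * 0.01            # dead-reckoning (process) noise per step
R = np.eye(2) * 0.05            # camera measurement noise
H = np.eye(2)                   # camera observes the position directly

def kf_step(x, P, odom_delta, z_cam):
    # Predict: integrate the odometry increment
    x_pred = x + odom_delta
    P_pred = P + Q
    # Correct: blend in the position estimated from the camera image
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z_cam - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# One step with hypothetical data: odometry says the robot moved 10 cm in x,
# the camera places it slightly off that prediction.
x, P = kf_step(x, P, odom_delta=np.array([0.10, 0.0]),
               z_cam=np.array([0.12, 0.01]))
print("fused position:", x, "covariance diagonal:", np.diag(P))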

19.
钟宇  张静  张华  肖贤鹏 《计算机工程》2022,48(3):100-106
Intelligent collaborative robots rely on a vision system to perceive and locate targets in the dynamic workspace of an unknown environment, so that the manipulator can autonomously grasp and retrieve target objects. An RGB-D camera captures color and depth images of the scene and provides a 3D point cloud of any target in the field of view, helping the collaborative robot perceive its surroundings. To obtain the transformation between the coordinate frames of the grasping robot and the RGB-D camera, a robot hand-eye calibration method based on the yolov3 object detection network is proposed. A 3D-printed sphere clamped at the end of the manipulator serves as the calibration target; an improved yolov3 network locates the sphere's center in real time, the 3D position of the manipulator's end center in the camera frame is computed, and singular value decomposition is used to obtain the least-squares solution of the transformation matrix between the robot and camera frames. Experiments on a 6-DOF UR5 manipulator and an Intel RealSense D415 depth camera show that the calibration method requires no auxiliary equipment, yields spatial point position errors within 2 mm after transformation, and satisfies the grasping requirements of general visual-servoing intelligent robots.
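The SVD least-squares step mentioned in this abstract corresponds to the classical rigid point-set registration (Kabsch/Umeyama) between sphere-center positions expressed in the robot base frame and in the camera frame. The sketch below shows that computation on synthetic corresponding points; the data are invented and the function is not the authors' code.

# Least-squares rigid transform (rotation R, translation t) between matched
# 3D point sets via SVD (Kabsch/Umeyama), as used to relate the robot and
# camera frames. Synthetic data for illustration; not the authors' code.
import numpy as np

def rigid_transform(P_robot, P_cam):
    """Find R, t with P_cam ~= R @ P_robot + t; both inputs are (N, 3)."""
    cr, cc = P_robot.mean(axis=0), P_cam.mean(axis=0)
    H = (P_robot - cr).T @ (P_cam - cc)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cc - R @ cr
    return R, t

# Synthetic check: sphere-center positions in the robot frame, and the same
# points expressed in the camera frame through a known transform.
rng = np.random.default_rng(1)
P_robot = rng.uniform(-0.3, 0.3, (8, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.4, -0.1, 0.6])
P_cam = P_robot @ R_true.T + t_true

R, t = rigid_transform(P_robot, P_cam)
print("rotation error:", np.linalg.norm(R - R_true),
      "translation error:", np.linalg.norm(t - t_true))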

20.
We propose a new method for 3D object recognition that uses segment-based stereo vision. An object is identified in a cluttered environment and its position and orientation (6 DOF) are determined accurately, enabling a robot to pick up the object and manipulate it. The object can be of any shape (planar figures, polyhedra, free-form objects) and may be partially occluded by other objects. Segment-based stereo vision is employed for 3D sensing. Both CAD-based and sensor-based object modeling subsystems are available. Matching is performed by calculating candidates for the object position and orientation using local features, verifying each candidate, and improving the accuracy of the position and orientation by an iterative method. Several experimental results are presented to demonstrate the usefulness of the proposed method.
