Similar Documents
 20 similar documents found (search time: 15 ms)
1.
Executing complex robotic tasks, including dexterous grasping and manipulation, requires a combination of dexterous robots, intelligent sensors and adequate object-information processing. In this paper, vision has been integrated into a highly redundant robotic system consisting of a tiltable camera and a three-fingered dexterous gripper, both mounted on a PUMA-type robot arm. To condense the image data of the robot workspace acquired from the mobile camera, contour image processing is used for offline grasp and motion planning as well as for online supervision of manipulation tasks. Execution of the desired robot and object motions is controlled by a visual feedback system that coordinates the motions of hand, arm and eye according to the requirements of each situation. Experiences and results from several experiments in the field of service robotics show the possibilities and limits of integrating vision and tactile sensors into a dexterous hand-arm-eye system that can assist humans in industrial or service environments.

2.
Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping “flavors” has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as “put that there” cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools, so that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors: the robot maintains the specified force and hand posture for an object throughout the task while handling the real workpiece or item of interest.

3.
A major goal of robotics research is to develop techniques that allow non-experts to teach robots dexterous skills. In this paper, we report our progress on the development of a framework which exploits human sensorimotor learning capability to address this aim. The idea is to place the human operator in the robot control loop, where he/she can intuitively control the robot and, through practice, learn to perform the target task with it. Subsequently, by analyzing the robot control obtained by the human, it is possible to design a controller that allows the robot to perform the task autonomously. First, we introduce this framework with the ball-swapping task, in which a robot hand has to swap the positions of the balls without dropping them, and present new analyses investigating the intrinsic dimension of the ball-swapping skill obtained through this framework. Then, we present new experiments toward obtaining an autonomous grasp controller on an anthropomorphic robot. In the experiments, the operator directly controls the (simulated) robot using visual feedback to achieve robust grasping with the robot. The collected data are then analyzed to infer the grasping strategy discovered by the human operator. Finally, a method to generalize grasping actions using the collected data is presented, which allows the robot to autonomously generate grasping actions for different orientations of the target object.

4.
Assistance is currently a pivotal research area in robotics, with huge societal potential. Since assistant robots directly interact with people, finding natural and easy-to-use user interfaces is of fundamental importance. This paper describes a flexible multimodal interface based on speech and gesture modalities for controlling our mobile robot named Jido. The vision system uses a stereo head mounted on a pan-tilt unit and a bank of collaborative particle filters devoted to the upper human body extremities to track and recognize pointing and symbolic gestures, both single-handed and bi-manual. This framework constitutes our first contribution: it is shown to handle natural artifacts properly (self-occlusion, hands leaving the camera's field of view, hand deformation) when 3D gestures are performed with either hand or with both. A speech recognition and understanding system based on the Julius engine is also developed and embedded in order to process deictic and anaphoric utterances. The second contribution is a probabilistic, multi-hypothesis interpreter framework to fuse results from the speech and gesture components. This interpreter is shown to improve the classification rates of multimodal commands compared to using either modality alone. Finally, we report on successful live experiments in human-centered settings. Results are reported in the context of an interactive manipulation task, in which users specify local motion commands to Jido and perform safe object exchanges.

5.
A new approach is described for the geometric calibration of Cartesian robots. This is part of a set of procedures for real-time 3-D robotics eye, eye-to-hand, and hand calibration which uses a common setup and calibration object, common coordinate systems, matrices, vectors, symbols, and operations, and is especially suited to machine vision systems. The robot makes a series of automatically planned movements with a camera rigidly mounted at the gripper. At the end of each move, it takes a total of 90 ms to grab an image, extract image feature coordinates, and perform camera-extrinsic calibration. After the robot finishes all the movements, the calibration itself takes only a few milliseconds. The key to this technique is that only one rotary joint moves in each movement. This allows the calibration parameters to be fully decoupled, converting a multidimensional problem into a series of one-dimensional problems. Another key is that the eye-to-hand transformation is not needed at all during the computation.
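Because each calibration movement rotates only one joint, the camera's optical centre traces a circular arc, and each decoupled sub-problem reduces to recovering a circle from measured points. A minimal sketch of one such sub-problem, using a linear least-squares (Kåsa) circle fit — the centre, radius and noise level are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Points on an arc traced by the camera centre while a single rotary
# joint moves (true centre (3, 4), radius 2 -- illustrative values).
rng = np.random.default_rng(0)
theta = np.linspace(0.2, 1.4, 30)
pts = np.c_[3 + 2 * np.cos(theta), 4 + 2 * np.sin(theta)]
pts += rng.normal(scale=1e-3, size=pts.shape)      # small measurement noise

# Kasa fit: x^2 + y^2 = 2*a*x + 2*b*y + c is linear in (a, b, c),
# with centre (a, b) and radius sqrt(c + a^2 + b^2).
x, y = pts[:, 0], pts[:, 1]
A = np.c_[2 * x, 2 * y, np.ones_like(x)]
a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
radius = np.sqrt(c + a**2 + b**2)

print(round(a, 3), round(b, 3), round(radius, 3))  # close to 3, 4, 2
```

Each one-dimensional problem of this kind can be solved independently, which is what makes the decoupling attractive.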

6.
Grasp capability analysis of multifingered robot hands
This paper addresses the problem of grasp capability analysis of multifingered robot hands. The aim of grasp capability analysis is to find the maximum external wrench that a multifingered robot hand can withstand, which is an important criterion in the evaluation of robotic systems. The study of grasp capability provides a basis for the task planning and force control of multifingered robot hands. For a given multifingered hand geometry, the grasp capability depends on the joint driving-torque limits, the grasp configuration, the contact model, and so on. A systematic method for grasp capability analysis, which is in fact a constrained optimization algorithm, is presented. In this optimization, the optimality criterion is the maximum external wrench, and the constraints include both equality and inequality constraints. The equality constraints require the grasp to balance the given external wrench, and the inequality constraints prevent slippage at the fingertips, overload of the joint actuators, forces exceeding the physical limits of the object, etc. An advantage of this method is that it accommodates diverse settings such as multiple robot arms and intelligent fixtures. The effectiveness of the proposed method is confirmed with a numerical example of a three-fingered grasp.
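Once friction cones are linearized, a constrained optimization of this general kind can be posed as a linear program: the variables are the nonnegative intensities of the cone-edge wrenches plus a scale factor on the external wrench, the equality constraints enforce wrench balance, and the bounds stand in for actuator/contact limits. The planar two-contact example below is a hedged illustration (geometry, friction coefficient and force caps are invented), not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Planar grasp: two hard-finger contacts at (-1, 0) and (1, 0) with
# inward normals, friction coefficient 0.5, each cone linearized into
# 2 edges. Columns are edge wrenches (fx, fy, tau) about the centre.
W = np.array([
    [ 1.0,  1.0, -1.0, -1.0],   # fx of each cone edge
    [ 0.5, -0.5,  0.5, -0.5],   # fy
    [-0.5,  0.5,  0.5, -0.5],   # tau = rx*fy - ry*fx
])
w_resist = np.array([0.0, 1.0, 0.0])    # wrench direction to resist

# Variables: 4 edge intensities (each capped at 1 as a stand-in for
# joint-torque limits) and the wrench scale k; maximize k.
c = np.array([0.0, 0.0, 0.0, 0.0, -1.0])
A_eq = np.c_[W, -w_resist]              # W @ alpha - k * w_resist = 0
res = linprog(c, A_eq=A_eq, b_eq=np.zeros(3),
              bounds=[(0, 1)] * 4 + [(0, None)], method="highs")
k_max = res.x[-1]
print(k_max)    # largest resistible magnitude along w_resist
```

Sweeping `w_resist` over directions of interest yields a capability profile rather than a single number.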

7.
In this article, we introduce the extended grasp wrench space (GWS) to identify the application location and the magnitude of the critical external wrench. We jointly use the task wrench space (TWS), which consists of all possible external wrenches produced by unit normal forces at the surface points of the grasped object (i.e., the object wrench space, OWS). In the extended GWS, the torque bound of each joint of the robot hand is considered when determining the grasp capability. Through convexity analysis and a linear programming technique, we propose a new way of obtaining an enhanced grasp measure with a clear physical meaning. We verify the proposed grasp analysis and quality measure using a visual grasp simulator and polygonal objects of various shapes.
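A standard concrete instance of such convexity analysis is the largest-ball ("epsilon") grasp quality: build the convex hull of the wrench primitives and take the smallest distance from the wrench-space origin to a hull facet. The 2-D toy below sketches that computation with SciPy; the wrench set is invented for illustration and ignores the paper's joint-torque extension:

```python
import numpy as np
from scipy.spatial import ConvexHull

# Toy 2-D "wrench" primitives whose convex hull contains the origin.
wrenches = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

hull = ConvexHull(wrenches)
# Each facet satisfies n . x + d <= 0 for interior points, with unit
# normal n, so the origin-to-facet distance is -d; the quality is the
# smallest such distance (radius of the largest inscribed ball).
epsilon = np.min(-hull.equations[:, -1])
print(epsilon)
```

A grasp with larger epsilon can resist a larger worst-case disturbance wrench per unit of contact force.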

8.
Autonomous grasping with a multi-purpose underactuated gripper
骆敏舟, 梅涛, 卢朝洪. 《机器人》 (Robot), 2005, 27(1): 20-25
Autonomous grasping with an underactuated gripper is studied and divided into two processes: autonomous decision-making and grasp control. First, the characteristics and principal grasp modes of the underactuated gripper are analyzed. Drawing on human grasping experience, a fuzzy-input method is adopted that jointly considers the requirements of the grasping task and the intrinsic attributes of the object, and the good classification properties of a fuzzy neural network are exploited to select an appropriate grasp mode. On this basis, the finger postures are adjusted and a sensor-feedback-based control strategy is applied to produce a suitable force distribution on the grasped object, yielding a stable grasp. Grasping examples verify the correctness of the grasp decision-making and control, which raises the level of automation of underactuated grasping.

9.
《自动化学报》 (Acta Automatica Sinica), 1999, 25(5)
This paper presents a hierarchical control system for coordinated multifingered robot manipulation. Given a manipulation task, the task planner first generates a sequence of object motion velocities; the coordinated-motion planner then derives the desired finger velocities and the desired orientation change of the grasped object from the desired object velocities. At the same time, the force planner generates the grasp forces at the fingers needed to resist the external forces on the object, according to the grasp posture. Finally, the system merges the desired finger velocities with a compliance velocity derived from the desired grasp forces, and transforms the result into joint velocities through the fingers' inverse Jacobians; the joint motion controller then implements both force and velocity control for the fingers. The approach has been applied successfully in the development of the control system of the HKUST dexterous hand. Experimental results show that this method makes it possible not only to track and control the object's trajectory, but also to realize force control and hybrid force/velocity control.
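The merging step at the bottom of such a hierarchy — adding a force-derived compliance velocity to the desired fingertip velocity and mapping the sum through the finger's inverse Jacobian — can be sketched for a planar two-link finger. The link lengths, gains and force values below are invented for illustration and are not taken from the abstract:

```python
import numpy as np

def jacobian_2r(q, l1=1.0, l2=1.0):
    """Jacobian of a planar 2-link finger (fingertip velocity vs joint rates)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.4, 0.6])            # current joint angles (nonsingular pose)
v_des = np.array([0.05, 0.00])      # desired fingertip velocity
f_des = np.array([1.0, 0.0])        # desired grasp force
f_meas = np.array([0.8, 0.1])       # measured contact force
Kf = 0.02                           # compliance gain (force error -> velocity)

# Merge desired motion with the compliance velocity, then map to joint space.
v_cmd = v_des + Kf * (f_des - f_meas)
q_dot = np.linalg.solve(jacobian_2r(q), v_cmd)
print(q_dot)
```

The joint-level controller would then track `q_dot`, so force and velocity objectives are realized through a single motion command.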

10.
In order for a binocular head to perform optimal 3D tracking, it should be able to verge its cameras actively while maintaining geometric calibration. In this work we introduce a calibration update procedure which allows a robotic head to simultaneously fixate, track, and reconstruct a moving object in real-time. The update method is based on a mapping from motor-based to image-based estimates of the camera orientations, estimated in an offline stage. Following this, a fast online procedure is presented to update the calibration of an active binocular camera pair. The proposed approach is ideal for active vision applications because no image processing is needed at runtime to calibrate the system or to maintain the calibration parameters during camera vergence. We show that this homography-based technique allows an active binocular robot to fixate and track an object whilst concurrently performing 3D reconstruction in real-time.

11.
Hierarchical control of multifingered robot manipulation
A hierarchical control system is established for coordinated multifingered robot manipulation. Given a manipulation task, the task planner first generates a sequence of object motion velocities. The coordinated-motion planner then derives the desired finger velocities and the desired changes in grasp posture from the desired object velocities. At the same time, to balance the external forces acting on the object, the grasp-force planner generates the grasp force required at each finger according to the current grasp posture. Finally, the system merges the desired finger velocities with the compliance velocities generated to realize the desired grasp forces, converts the result into finger joint velocities through the fingers' inverse Jacobians, and the joint-level motion controllers execute the finger motions and grasp-force control. This control method has been successfully applied in the development of the control system of the HKUST (Hong Kong University of Science and Technology) dexterous hand. Experiments show that the method can accomplish not only object trajectory tracking, but also force control of the object against the environment and hybrid force/velocity control.

12.
Despite the advancements in machine learning and artificial intelligence, many tooling tasks with cognitive aspects remain challenging for robots to handle in full autonomy, and thus still require a certain degree of interaction with a human operator. In this paper, we propose a theoretical framework for both planning and execution of robot-surface contact tasks in which interaction with a human operator can be accommodated to a variable degree. The starting point is the geometry of the surface, which we assume known and available in a discretized format, e.g. through scanning technologies. To allow real-time computation, rather than interacting with thousands of vertices, the robot interacts only with a single proxy, i.e. a massless virtual object constrained to ‘live on’ the surface and subject to first-order viscous dynamics. The proxy and an impedance-controlled robot are then connected through a tuneable, possibly viscoelastic coupling, i.e. (virtual) springs and dampers. On the one hand, the proxy slides along discrete geodesics of the surface in response both to the viscoelastic coupling with the robot and to a possible external force (a virtual force which can be used to induce autonomous behaviours). On the other hand, the robot is free to move in 3D in reaction to the same viscoelastic coupling as well as to a possible external force, which includes an actual force exerted by a human operator. The proposed approach is multi-objective in the sense that different operational (autonomous/collaborative) and interactive (contact/non-contact) modalities can be realized simply by modulating the viscoelastic coupling as well as the virtual and physical external forces. We believe that the proposed framework might lead to more intuitive robot programming interfaces, as opposed to standard coding. To this end, we also present numerical and experimental studies demonstrating path planning as well as autonomous and collaborative interaction for contact tasks with a free-form surface.
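The proxy idea can be sketched in one dimension: a massless proxy constrained to the surface obeys first-order viscous dynamics, with b·ṗ equal to the tangential component of the spring force coupling it to the robot. All parameter values below are invented for illustration, and the surface is a straight line rather than a discretized free-form mesh:

```python
import numpy as np

# Robot tool-centre point held fixed above the surface (the x-axis);
# the proxy starts at the origin and is dragged along the surface.
robot = np.array([2.0, 1.0])
p = 0.0            # proxy position along the surface (x-coordinate)
k_c = 10.0         # coupling spring stiffness
b = 1.0            # proxy viscous coefficient
dt = 0.001

for _ in range(20000):
    # First-order dynamics: b * p_dot = tangential spring force.
    spring_tangential = k_c * (robot[0] - p)
    p += dt * spring_tangential / b

print(round(p, 4))   # proxy settles under the robot's surface projection
```

Adding an external (virtual) force term to `spring_tangential` would move the proxy autonomously, which is how the framework blends autonomous and collaborative modes.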

13.
Visually guided grasping in unstructured environments
We present simple and robust algorithms that combine uncalibrated stereo vision and a robot manipulator to enable the manipulator to locate, reach and grasp unmodelled objects in unstructured environments. In the first stage, an operator indicates the object to be grasped by simply pointing at it. Next, the vision system segments the indicated object from the background and plans a suitable grasp strategy. Finally, the robotic arm reaches out towards the object and executes the grasp. Uncalibrated stereo vision allows the system to continue operating in the presence of errors in the kinematics of the robot manipulator and unknown changes in the position, orientation and intrinsic parameters of the stereo cameras during operation.

14.
Vision-based remote control of cellular robots
This paper describes the development and design of a vision-based remote controlled cellular robot. Cellular robots have numerous applications in industrial problems where simple inexpensive robots can be used to perform different tasks that involve covering a large working space. As a methodology, the robots are controlled based on the visual input from one or more cameras that monitor the working area. As a result, a robust control of the robot trajectory is achieved without depending on the camera calibration. The remote user simply specifies a target point in the image to indicate the robot final position.

We describe the complete system at various levels: the visual information processing, the robot characteristics and the closed loop control system design, including the stability analysis when the camera location is unknown. Results are presented and discussed.

In our opinion, such a system may have a wide spectrum of applications in industrial robotics and may also serve as an educational testbed for advanced students in the fields of vision, robotics and control.

15.
This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability for robots to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability in image formation from a moving camera under tight time constraints. The paper brings to attention some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state of the art in object perception in an attempt to develop a lightweight solution for mobile robotics use and research in typical indoor settings.

16.
An approach to the task of Programming by Demonstration (PbD) of grasping skills is introduced, in which a mobile service robot is taught by a human instructor how to grasp a specific object. In contrast to other approaches, the instructor demonstrates the grasping action to the robot several times to increase reconstruction performance. Only the robot's stereoscopic vision system is used to track the instructor's hand. The developed tracking algorithm needs neither artificial markers nor data gloves, is not restricted to fixed or difficult-to-calibrate sensor installations, and remains real-time capable on a mobile service robot with limited resources. Because human repetition accuracy is low, the instructor performs the grasp slightly differently every time it is demonstrated. To compensate for these variations, and also for tracking errors, the use of a Self-Organizing Map (SOM) with a one-dimensional topology is proposed. This SOM is used to generalize over the differently demonstrated grasping actions and to reconstruct the intended approach trajectory of the instructor's hand while grasping an object. The approach is implemented and evaluated on the service robot TASER using both synthetically generated data and real-world data.
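A 1-D-topology SOM of the kind described can be sketched as follows: nodes arranged in a chain are pulled toward sampled hand positions, and a neighbourhood kernel over the chain index keeps the nodes ordered, so the trained weights approximate the mean demonstrated trajectory. The demonstration data, node count and learning schedule below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three noisy "demonstrations" of the same straight approach trajectory
# (true trajectory y = x; noise plays the role of demonstration variation).
demos = [np.c_[t, t + rng.normal(scale=0.02, size=t.size)]
         for t in [np.linspace(0, 1, 50)] * 3]
samples = np.vstack(demos)

# 1-D-topology SOM: n nodes indexed 0..n-1 along a chain.
n = 10
w = np.c_[np.linspace(0, 1, n), np.zeros(n)]   # rough initialization

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                  # decaying learning rate
    sigma = max(2.0 * (1 - epoch / 50), 0.5)     # shrinking neighbourhood
    for x in rng.permutation(samples):
        bmu = np.argmin(np.linalg.norm(w - x, axis=1))   # best-matching unit
        h = np.exp(-((np.arange(n) - bmu) ** 2) / (2 * sigma**2))
        w += lr * h[:, None] * (x - w)           # pull nodes toward the sample

# The chain should now lie close to the demonstrated trajectory y ~= x.
print(np.max(np.abs(w[:, 1] - w[:, 0])))
```

Reading the node weights off in chain order gives a smoothed, generalized approach trajectory despite the per-demonstration variation.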

17.
The paper describes the accurate calibration of the camera transformation for a vision system consisting of a camera mounted on a robot. The calibration includes an analysis of the linearity of the camera. Knowledge of the camera transformation allows the three-dimensional positions of object points to be determined using triangulation.

18.
Addressing vision-based target grasping in robotics, a new depth-information optimization method is proposed for scenes with multiple objects in the background. A threshold block is defined, and blocks of depth information are traversed with a clustering-like method to extract the coordinates of the target object, which are passed to the robot arm to perform an accurate grasp. The principles of binocular vision, camera calibration, stereo rectification and stereo matching are introduced in turn, and the original and optimized depth maps are presented and compared. Experiments confirm that this depth-optimization method effectively improves the success rate of robotic grasping of target objects. Directions for future work are outlined at the end of the paper.

19.
From an early stage in their development, human infants show a profound drive to explore the objects around them. Research in psychology has shown that this exploration is fundamental for learning the names of objects and object categories. To address this problem in robotics, this paper presents a behavior-grounded approach that enables a robot to recognize the semantic labels of objects using its own behavioral interaction with them. To test this method, our robot interacted with 100 different objects grouped according to 20 different object categories. The robot performed 10 different behaviors on them, while using three sensory modalities (vision, proprioception and audio) to detect any perceptual changes. The results show that the robot was able to use multiple sensorimotor contexts in order to recognize a large number of object categories. Furthermore, the category recognition model presented in this paper was able to identify sensorimotor contexts that can be used to detect specific categories. Most importantly, the robot's model was able to reduce exploration time by half by dynamically selecting which exploratory behavior should be applied next when classifying a novel object.

20.
Robust camera pose and scene structure analysis for service robotics
Successful path planning and object manipulation in service robotics applications rely both on a good estimate of the robot's position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper a robust real-time camera-pose and scene-structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of so-called tracks. The tracks comprise key features from the imaged scene together with geometric constraints, which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. the robot, the scene is analyzed via a robust depth segmentation and object classification approach. In order to segment the object's depth reliably, a feedback control technique at the image-processing level is used to improve the robustness of the robotic vision system with respect to external influences such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号