20 similar documents found
1.
Du-Ming Tsai 《Journal of Intelligent and Robotic Systems》1995,12(1):23-48
This research investigates a novel robot-programming approach that applies machine-vision techniques to generate a robot program automatically. The hand motions of a demonstrator are first recorded as a long sequence of images using two CCD cameras. Machine-vision techniques are then used to recognize the hand motions in three-dimensional space, including open, closed, grasp, release, and move. Each hand feature and its corresponding hand position in each sample image are translated into manipulator-level robot instructions. Finally, a robot plays back the task using the automatically generated program. A robot can thus imitate the hand motions demonstrated by a human master using the proposed machine-vision approach. Compared with the traditional lead-through and structured programming-language methods, the robot user does not have to physically move the robot arm through the desired motion sequence or learn complicated robot-programming languages. The approach currently focuses on the classification of hand features and motions of a human arm and is therefore restricted to simple pick-and-place applications. Only one arm of the human master may appear in the image scene, and the master must not wear long-sleeved clothing during demonstration, to prevent false identification. Analysis and classification of hand motions in a long sequence of images are time-consuming, so the automatic robot programming currently developed is performed off-line.
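As an illustration of the translation step this abstract describes, here is a minimal sketch of mapping a classified hand-state sequence to manipulator-level instructions. The state names and instruction strings are assumptions for illustration, not the authors' actual representation:

```python
# Hypothetical mapping from recognized hand states to robot instructions.
# Transitional states emit nothing; "move" carries the 3-D hand position.
HAND_STATE_TO_INSTRUCTION = {
    "open": None,
    "closed": None,
    "grasp": "CLOSE_GRIPPER",
    "release": "OPEN_GRIPPER",
    "move": "MOVE_TO",
}

def translate(sequence):
    """Convert (state, position) samples into a robot instruction list,
    collapsing consecutive samples of the same state into one instruction."""
    program, prev = [], None
    for state, position in sequence:
        if state == prev:
            continue  # repeated sample of the same motion: skip
        op = HAND_STATE_TO_INSTRUCTION.get(state)
        if op == "MOVE_TO":
            program.append(("MOVE_TO", position))
        elif op is not None:
            program.append((op,))
        prev = state
    return program

demo = [("move", (0, 0, 10)), ("move", (0, 0, 10)),
        ("grasp", (0, 0, 10)), ("move", (5, 0, 10)), ("release", (5, 0, 10))]
print(translate(demo))
```

A real system would of course also carry timing and orientation with each sample; the sketch only shows the state-collapsing idea.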
2.
SUN Hanqiu 《Journal of Computer Science and Technology》1996,11(3):286-295
3-D task space in modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem to the device space at a time, and results in a tedious and unnatural control interface. This paper uses the DataGlove interface for modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies, and deletes instances of the primitives. These basic modeling processes are performed directly in the task space, using hand shapes and motions. Hand shapes are recognized as discrete states that trigger commands, and hand motions are mapped to the movement of a selected instance. Interaction through the hand interface places the user as a participant in the process of behavior simulation. Both event triggering and role switching of the hand are tested in simulation. The event mode of the hand triggers control signals or commands through a menu interface. The object mode of the hand simulates the hand itself as an object whose appearance or motion influences the motions of other objects in the scene. The involvement of the hand creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential of using this interface directly in the 3-D modeling and animation task space.
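The two hand roles described above, discrete shapes as command triggers and continuous motion as object movement, can be sketched as follows. The shape names and command table are assumptions, not the paper's actual gesture set:

```python
# Illustrative DataGlove-style interface: a recognized hand shape sets the
# current command mode, and subsequent hand motion is applied to the
# selected instance only in "grab" mode.
COMMANDS = {"point": "select", "fist": "grab", "flat": "release",
            "pinch": "scale", "thumb_up": "copy"}

class GloveInterface:
    def __init__(self):
        self.mode = None

    def on_shape(self, shape):
        """Discrete hand shapes act as events that trigger commands."""
        self.mode = COMMANDS.get(shape)
        return self.mode

    def on_motion(self, instance_position, hand_delta):
        """Hand motion is mapped to movement of the selected instance."""
        if self.mode != "grab":
            return instance_position  # motion is ignored outside grab mode
        return tuple(p + d for p, d in zip(instance_position, hand_delta))

glove = GloveInterface()
glove.on_shape("fist")
print(glove.on_motion((1.0, 2.0, 3.0), (0.5, 0.0, -1.0)))
```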
3.
Robots are important in high-mix, low-volume manufacturing because of their versatility and repeatability in performing manufacturing tasks. However, robots have not been widely adopted, owing to the cumbersome programming effort and the level of operator skill required. One significant factor prohibiting widespread application of robots by small and medium enterprises (SMEs) is the high cost and skill required to program and re-program robots to perform diverse tasks. This paper discusses an Augmented Reality (AR) assisted robot programming system (ARRPS) that provides faster and more intuitive robot programming than conventional techniques. ARRPS is designed to allow users with little robot programming knowledge to program tasks for a serial robot. The system transforms the work cell of a serial industrial robot into an AR environment. With an AR user interface and a handheld pointer for interaction, users are free to move around the work cell to define 3D points and paths for the real robot to follow. Sensor data and algorithms are used for robot motion planning, collision detection, and plan validation. The proposed approach enables fast and intuitive robotic path and task programming, and allows users to focus only on the definition of tasks. The implementation of this AR-assisted robot system is presented, and specific methods to enhance the performance of users carrying out robot programming with the system are highlighted.
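A small piece of the pipeline above, turning a handful of user-defined 3D points into a dense path the robot can follow, might look like the sketch below. The linear-interpolation scheme and step size are assumptions, not details from the paper:

```python
import math

def interpolate_path(waypoints, step=1.0):
    """Return points spaced at most `step` apart along the waypoint polyline,
    densifying the sparse 3D points a user defines with a handheld pointer."""
    path = [waypoints[0]]
    for a, b in zip(waypoints, waypoints[1:]):
        dist = math.dist(a, b)
        n = max(1, math.ceil(dist / step))
        for i in range(1, n + 1):
            t = i / n  # interpolation parameter along segment a -> b
            path.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return path

pts = interpolate_path([(0, 0, 0), (0, 0, 2)], step=1.0)
print(pts)
```

In a full system each interpolated point would still pass through collision detection and plan validation, as the abstract notes.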
4.
M.C. Leu 《Robotics and Computer-Integrated Manufacturing》1985,2(1):1-12
A robotics software “system” is defined here as one that allows robot users to program robot tasks in terms of key states of the task, instead of manipulator motions. It consists of two subsystems: a language system and a planning system. The language system involves the design of the syntax and semantics of a robot programming language, whereas the planning system determines specific manipulator movements for a given task defined in a task-level language. This paper describes the major components of a robotics software system and reviews principal research findings in related areas, including programming languages, manipulator and world modeling, motion planning, and graphic simulation. Underlying research issues are addressed at the end.
5.
This work develops a method for robot program synthesis. Currently, the programming task is one of the major hurdles of robotic application; progress towards automatic synthesis of robot programs will ease industrial robot deployment. The proposed system provides a means towards automated, knowledge-guided conversion of a user's request, expressed in natural language, into the appropriate conceptual model of the required task. This model incorporates the information necessary for understanding, planning, and sensory-guided performance of the required robotic task. First, we state the problem of robotic assembly and recognize its hierarchic structure as that of a system that builds a predesigned assembly. Next, we present and analyze the requirements of the robotic assembly domain. This analysis enables us to draw conclusions concerning the most suitable methodology for the development of a support system for assembly program synthesis and interpretation: the conceptual graph-based approach. We then present the algorithm of the proposed conceptual graph-based system and show how the system synthesizes robotic assembly operations, such as valid assembly sequences and sequences of special treatments for the assembled components (including the determination of the required resources). Finally, a case study illustrates the approach on a large family of multi-axisymmetric components. We also present illustrative examples of working sessions with the current implementation of the system.
6.
7.
8.
Skillful motions in the actual assembly process are challenging for a robot to generate with conventional motion planning approaches, because some states during human assembly involve narrow passages that are too difficult to realize automatically. To deal with this problem, this paper develops a motion planning method that uses human demonstration and can be applied to complete skillful motions in the robotic assembly process. To demonstrate conveniently, without redundant third-party devices, we attach augmented reality (AR) markers to the manipulated object to track and capture its poses during the human demonstration. To overcome the coarse resolution of the vision system, we extract the most important key poses from the demonstration data and employ them as clues for motion planning to suit the target precise task. For the selection of key poses, two policies are compared, in which the first and the second derivative of the main changing parameter of each key pose serve as criteria to determine the priority of utilizing key poses. In addition, a solution for dealing with colliding key poses is proposed. The effectiveness of the presented method is verified through simulation examples and actual robot experiments.
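The two key-pose selection policies mentioned above can be sketched with finite differences: rank demonstrated poses by the magnitude of the first or second derivative of the main changing parameter. The data and parameter name are illustrative assumptions:

```python
def key_pose_priority(values, order=1):
    """Rank pose indices by |finite difference of the given order|, highest
    first; order=1 and order=2 correspond to the two compared policies."""
    deriv = list(values)
    for _ in range(order):
        deriv = [b - a for a, b in zip(deriv, deriv[1:])]
    return sorted(range(len(deriv)), key=lambda i: abs(deriv[i]), reverse=True)

# Toy trajectory of the main changing parameter (e.g. insertion depth):
z = [0.0, 0.1, 0.2, 1.5, 1.6, 1.7]
print(key_pose_priority(z, order=1))  # the jump at index 2 ranks first
```

The first-derivative policy favors poses where the parameter changes fastest; the second-derivative policy favors poses where the change itself changes, i.e. the transitions into and out of the critical motion.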
9.
C. C. Ruokangas, W. A. Guthmiller, B. L. Pierson, K. E. Sliwinski, J. M. F. Lee 《Journal of Field Robotics》1987,4(3):355-375
An intelligent, adaptive robotic system is being developed for a low-volume, high-precision process: the welding of the Space Shuttle Main Engines (SSMEs). The overall system goals include the development of a workstation-based system to provide flexible, efficient off-line programming of both motion and welding process commands for the robotic workcell. This off-line programming system includes: implementation of a user interface for weld engineers, generation of both geometric and process commands for the workcell, development of a weld parameter database system, and display of both run-time and archived data. This article presents details of the off-line programming system, including interfaces, implementation, and limitations. The system will be implemented on the SSME production floor and will be benchmarked for effectiveness and productivity against commercially available robotic welding systems presently in use. The overall system development is a joint effort between NASA's Marshall Space Flight Center and two divisions of Rockwell International: Rocketdyne and the Science Center.
10.
11.
There is a growing need for adaptive robotic assembly systems that are fast to set up and reprogram when new products are introduced. The World Robot Challenge at the World Robot Summit 2018 centered on setting up a flexible robotic assembly system with changeover times below one day. This paper presents a method, initiated in connection with the World Robot Challenge, for programming robotic assembly tasks that enables fast and easy setup of robotic insertion tasks. We propose to program assembly tasks by demonstration, but instead of using the taught behavior directly, the demonstration is merged with assembly primitives to increase robustness. In contrast to other programming-by-demonstration approaches, we perform not one demonstration but a sequence of four sub-demonstrations, which are used to extract the desired robot trajectory in addition to parameters for the assembly primitive. The proposed assembly strategy is compared to a standard dynamic movement primitive, and experiments show that the proposed strategy increases robustness to pose uncertainties and significantly reduces the applied forces during execution of the assembly task.
12.
A fail-safe tele-autonomous robotic system is proposed for use in advanced nuclear reprocessing facilities. The design exploits the technologies developed for space telerobotics. The target system consists of a graphical user interface for an operator to execute robotic tasks, hand controllers for teleoperation, a three-dimensional graphical simulator, and robot control software to drive both the graphical simulation and dual six degree-of-freedom robots to perform tasks using autonomous, teleoperated, and shared control modes. A preliminary design for a safety monitoring system for fail-safe operations is also described.
13.
This paper describes an approach to estimating the progress of a task executed by a humanoid robot and to synthesizing motion based on the current progress so that the robot can achieve the task. The robot observes a human performing whole-body motion for a specific task, and encodes these motions into a hidden Markov model (HMM). The current observation is compared with the motion generated by the HMM, and the task progress can be estimated while the robot performs the motion. The robot then uses the estimate of the task progress to generate a motion appropriate to the current situation via a feedback rule. We constructed a bilateral remote control system with the humanoid robot HRP-4 and the haptic device Novint Falcon, and we made the humanoid robot push a button. Ten trial motions of pushing a button were recorded as training data. We tested the proposed approach on the autonomous execution of the pushing motion by the humanoid robot, and confirmed the effectiveness of our task-progress feedback method.
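The progress-estimation idea can be sketched with a discrete HMM and the forward algorithm: filter the observations seen so far and read progress off the most likely state. The three-state "push button" model and the progress formula are assumptions for illustration, not the paper's learned model:

```python
def estimate_progress(pi, A, B, observations):
    """Forward-filter a discrete HMM (initial dist pi, transitions A,
    emissions B); report progress as argmax-state index / (n_states - 1)."""
    n = len(pi)
    alpha = [pi[s] * B[s][observations[0]] for s in range(n)]
    for obs in observations[1:]:
        alpha = [B[s][obs] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
        total = sum(alpha)
        alpha = [a / total for a in alpha]  # normalize to avoid underflow
    return alpha.index(max(alpha)) / (n - 1)

# Left-to-right 3-state toy model of a button push: approach, press, retract.
pi = [1.0, 0.0, 0.0]
A = [[0.7, 0.3, 0.0],
     [0.0, 0.7, 0.3],
     [0.0, 0.0, 1.0]]
B = [[0.8, 0.2, 0.0],   # observation symbols: 0=far, 1=contact, 2=away
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]
print(estimate_progress(pi, A, B, [0, 0, 1, 1]))  # -> 0.5 (press phase)
```

A feedback rule would then pick the next motion segment conditioned on this progress value.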
14.
15.
This paper describes a complete vision-based online robot system that allows control of both an educational and an industrial robot via the web. We address some of the limitations of current similar systems, particularly concerning the user interface. Some of its novel features are: adjustable autonomy, so that the user can decide the right level of interaction, from high-level voice commands down to mouse clicks, thereby reducing the cognitive fatigue that results from remote operation; and a predictive interface, in which, using a 3D virtual environment endowed with augmented-reality capabilities, the user can preview the results of actions before sending the command to the real robot. Thus, network bandwidth is saved and off-line task specification is possible. This high-level interaction is possible thanks to built-in modules for basic tasks such as automatic object recognition, image processing, autonomous grasp determination, and speech recognition. Finally, the system has been tested in an application in the education and training domain. One hundred undergraduate students used the web-based interface to program pick-and-place operations with the system. The results show performance statistics, connection rates, and the students' opinions, as a way of evaluating the convenience and usability of the user interface.
16.
Biegelbauer G., Pichler A., Vincze M., Nielsen C.L., Andersen H.J., Haeusler K. 《IEEE Robotics & Automation Magazine》2005,12(3):24-34
Industrial painting automation with robots is very efficient and fast and is often used in production lines. However, a big disadvantage is the off-line programming paradigm for the painting robots, which is time-consuming and can be justified economically only for large lot sizes. Hence, a totally new approach to robot programming is required to enable painting of small lot sizes. The objective of the FlexPaint project is to automate robot programming for small lot sizes with a very high number of part variants. This article reports the new approach, referred to as an inverse approach, which automatically generates the painting motion and opens new markets for robotic applications. The automatic robot program generation enables, for the first time, painting of parts with a lot size of one. The principle of this approach is based on formalizing the technological knowledge in a geometry library and a process library. Laser range sensors are used to obtain an accurate scan of the part. Process-relevant classes of features are detected as specified in the geometry library. Feature classes are linked in the process library to basic paint strategies, which are grouped to automatically generate the robot paint-tool trajectory. Finally, collision-free and executable robot motions are automatically obtained for the actual robot kinematics. Painting results for several parts, e.g., different motors with gearboxes, illustrate this new approach.
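The library lookup at the heart of the inverse approach can be sketched as a table from detected feature classes to paint strategies; the class and strategy names below are assumptions, not FlexPaint's actual libraries:

```python
# Hypothetical process library: each geometry-library feature class maps
# to a basic paint strategy; strategies are concatenated into the program.
PROCESS_LIBRARY = {
    "flat_face": "raster_pass",
    "cavity": "spiral_pass",
    "rib": "edge_follow",
}

def paint_program(detected_features):
    """Map detected feature classes to their paint strategies, skipping
    classes the process library does not know."""
    return [PROCESS_LIBRARY[f] for f in detected_features
            if f in PROCESS_LIBRARY]

print(paint_program(["flat_face", "rib", "unknown_bracket"]))
```

In the real system each strategy would carry tool-pose parameters, and the concatenated strategies would still pass through collision checking against the robot kinematics.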
17.
《Computers & Industrial Engineering》1998,34(2):423-431
There are three factors directly affecting robotic assembly efficiency: (i) robot motion control, (ii) the sequence in which individual components are placed on the assembly board, and (iii) the corresponding magazine slot location from which the components must be selected. This paper describes an off-line heuristic to sequence the insertion order and to assign the corresponding components to a magazine so as to improve robotic assembly efficiency. The algorithms are developed for a Cartesian robot, which allows dynamic allocation of pick-and-place locations. The paper also demonstrates the operational procedures for an on-line implementation of this novel approach. The approach could be extended to assembly tasks for printed circuit boards.
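A sequencing heuristic in the spirit of factor (ii) can be sketched as a greedy nearest-neighbor rule: always insert next the component whose board location is closest to the robot's current position. The coordinates and Euclidean metric are assumptions, not the paper's actual heuristic:

```python
import math

def greedy_insertion_order(board_locations, start=(0.0, 0.0)):
    """Return component indices in a nearest-neighbor insertion sequence,
    beginning from the robot's start position."""
    remaining = dict(enumerate(board_locations))
    pos, order = start, []
    while remaining:
        # Pick the unplaced component nearest the current position.
        idx = min(remaining, key=lambda i: math.dist(pos, remaining[i]))
        order.append(idx)
        pos = remaining.pop(idx)
    return order

print(greedy_insertion_order([(5, 5), (1, 0), (2, 1)]))  # -> [1, 2, 0]
```

The magazine-assignment side, factor (iii), would then assign frequently used components to slots near the resulting path.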
18.
Grantham K. H. Pang 《Journal of Intelligent and Robotic Systems》1989,2(4):425-444
This paper describes the use of the blackboard architecture for the off-line programming of an IBM 7565 robot. A blackboard system was implemented in PROLOG and has been applied successfully to the automatic generation of control code for the robot to perform a block-assembly task in an environment with an obstacle. The opportunistic problem-solving offered by the blackboard architecture succeeded in obtaining a solution. The user interface to the system is represented as a knowledge source in the blackboard system, which allows the user to modify the goal specifications during the operation of the blackboard system.
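The opportunistic control cycle of a blackboard system can be sketched as below (in Python rather than the authors' PROLOG, with invented knowledge sources): each cycle, the first knowledge source whose condition matches the blackboard fires, until a goal entry appears:

```python
def run_blackboard(blackboard, knowledge_sources, goal, max_cycles=10):
    """Fire the first applicable knowledge source each cycle until the
    goal entry appears on the blackboard (or cycles run out)."""
    for _ in range(max_cycles):
        if goal in blackboard:
            return blackboard
        for condition, action in knowledge_sources:
            if condition(blackboard):
                action(blackboard)
                break
    return blackboard

# Two toy knowledge sources: plan a path from a pose, then emit code.
ks = [
    (lambda bb: "pose" in bb and "path" not in bb,
     lambda bb: bb.update(path=["approach", "grasp", "place"])),
    (lambda bb: "path" in bb and "code" not in bb,
     lambda bb: bb.update(code="; ".join(bb["path"]))),
]
result = run_blackboard({"pose": (1, 2, 3)}, ks, goal="code")
print(result["code"])
```

The paper's user-interface knowledge source would be one more (condition, action) pair that rewrites the goal specification mid-run.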
19.
20.
Myung Hwan Yun, Cannon D., Freivalds A., Thomas G. 《IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics》1997,27(5):835-846
Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping “flavors” has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality-based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as “put that there” cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools, so that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as is required in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand-posture flavors for an object throughout the task in handling the real workpiece or item of interest.
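The "specify once, reuse throughout the task" idea can be sketched as attaching a small grasp record to an object; the field names and values are illustrative assumptions, not the Penn State system's representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GraspFlavor:
    joint_angles: tuple   # finger-joint posture captured from the glove
    grip_force: float     # grasp force in newtons from the force sensors

def attach_flavor(objects, name, flavor):
    """Record the flavor once; the robot looks it up whenever it
    handles the object during the rest of the task."""
    objects[name] = flavor
    return objects

flavors = attach_flavor({}, "workpiece",
                        GraspFlavor((30.0, 45.0, 20.0), 8.5))
print(flavors["workpiece"].grip_force)  # -> 8.5
```

Freezing the dataclass mirrors the point made above: once captured, the flavor stays fixed for the object rather than being re-demonstrated at every handling step.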