Similar Literature
1.
Complex robot tasks are usually described as high-level goals, with no details on how to achieve them. However, details must be provided to generate the primitive commands that control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time, and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the available sensors and actuators. Our approach is based on two premises: 1) plan execution is an information-gathering process in which determining what information is relevant is a large part of the process; and 2) plan execution requires that many details be made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway, where sonic range data is used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution.
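The doorway-finding step can be sketched roughly as a search for a gap in a ring of range readings. The function, sensor layout, and thresholds below are invented for illustration, not taken from the paper's system:

```python
# Hypothetical sketch: locate a candidate doorway as the widest contiguous
# run of long-range sonar readings (metres). Thresholds are illustrative.

def find_doorway(ranges, wall_threshold=1.5, min_width=2):
    """Return (start, end) sensor indices of the widest run of readings
    exceeding wall_threshold, or None if no run spans min_width sensors."""
    best, best_width = None, 0
    run_start = None
    for i, r in enumerate(list(ranges) + [0.0]):  # sentinel closes a trailing run
        if r > wall_threshold:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            width = i - run_start
            if width >= min_width and width > best_width:
                best, best_width = (run_start, i - 1), width
            run_start = None
    return best

# Readings from 12 sonars: short ranges are walls/obstacles, long ranges open space.
sonar = [0.8, 0.9, 0.7, 3.2, 3.5, 3.1, 0.9, 0.8, 0.7, 0.9, 0.8, 0.9]
print(find_doorway(sonar))  # widest open run: sensors 3..5
```

Real sonar data is far noisier than this, which is part of the difficulty the experiments illustrate.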

2.
Automated assembly planning in a manufacturing environment requires not only mathematically sound formal methods and algorithmic computations but also heuristic knowledge. Much of this knowledge can be extracted from manual task-manipulation strategies, such as the motion classification scheme used in methods-time measurement (MTM) studies. In this paper, we delineate various task-level operations in the context of robotic assembly and show how these operations can be organized in the form of a task grammar. The proposed task grammar captures the intrinsic principles of how the sequence of robot operations should be ordered and how one high-level operation can be effectively decomposed into low-level operations. To control the process of robot task decomposition, we explicitly represent and apply qualitative heuristic knowledge about task constraints and operation applicability. We first describe how syntactic knowledge about robot operations can be formulated for assembly-related manipulation tasks. Next, through illustrative examples, we show how qualitative knowledge can be effectively used in task decomposition in three distinct ways: heuristic-based operation pattern matching, spatial-feature-based qualitative state envisionment, and canonical transformation of task environments.
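The grammar-driven decomposition can be sketched as recursive rewriting of high-level operations into primitives. The rule set and operation names below are invented examples, not the paper's actual grammar:

```python
# Illustrative task grammar: each high-level operation expands into an
# ordered sequence of lower-level operations until only primitives remain.

GRAMMAR = {
    "insert-peg": ["approach", "align", "mate"],
    "approach":   ["move-free", "guard-move"],
    "align":      ["rotate-to-fit"],
    "mate":       ["guard-move", "release"],
}
PRIMITIVES = {"move-free", "guard-move", "rotate-to-fit", "release"}

def decompose(op):
    """Recursively rewrite op into a flat sequence of primitive operations."""
    if op in PRIMITIVES:
        return [op]
    seq = []
    for sub in GRAMMAR[op]:
        seq.extend(decompose(sub))
    return seq

print(decompose("insert-peg"))
# ['move-free', 'guard-move', 'rotate-to-fit', 'guard-move', 'release']
```

In the paper's approach, qualitative heuristic knowledge would additionally prune or reorder these expansions based on task constraints and operation applicability.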

3.
ROGUE is an architecture built on a real robot which provides algorithms for the integration of high-level planning, low-level robotic execution, and learning. ROGUE successfully addresses several of the challenges of a dynamic office gopher environment. This article presents the techniques for the integration of planning and execution. ROGUE uses and extends a classical planning algorithm to create plans for multiple interacting goals introduced by asynchronous user requests. ROGUE translates the planner's actions to robot execution actions and monitors real-world execution. ROGUE is currently implemented using the PRODIGY4.0 planner and the Xavier robot. This article describes how plans are created for multiple asynchronous goals, and how task priority and compatibility information are used to achieve appropriate, efficient execution. We describe how ROGUE communicates with the planner and the robot to interleave planning with execution, so that the planner can replan after failed actions, identify the actual outcome of an action with multiple possible outcomes, and exploit opportunities arising from changes in the environment. ROGUE represents a successful integration of a classical artificial intelligence planner with a real mobile robot.
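The use of priority and compatibility information can be sketched as follows. This is a loose illustration with invented task names and a toy compatibility test, not ROGUE's actual scheduling algorithm:

```python
# Hedged sketch: order asynchronous requests by priority, greedily batching
# mutually compatible tasks (e.g., errands on the same floor) into one trip.
import heapq

def schedule(requests, compatible):
    """requests: (priority, task) pairs; lower number = more urgent.
    compatible(a, b) says whether two tasks can share one trip."""
    heap = list(requests)
    heapq.heapify(heap)
    order = []
    while heap:
        _, task = heapq.heappop(heap)
        order.append(task)
        rest = []
        for p, t in heap:
            if compatible(task, t):
                order.append(t)      # piggy-back on the same trip
            else:
                rest.append((p, t))
        heap = rest
        heapq.heapify(heap)
    return order

same_floor = lambda a, b: a.split("/")[0] == b.split("/")[0]
reqs = [(2, "floor2/mail"), (1, "floor1/coffee"), (3, "floor1/fax")]
print(schedule(reqs, same_floor))
# ['floor1/coffee', 'floor1/fax', 'floor2/mail']
```

The real system additionally interleaves this with execution monitoring and replanning, which a static schedule like this cannot capture.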

4.
For a long time, robot assembly programming has been carried out in two environments: on-line and off-line. On-line robot programming uses the actual robot in experiments performing a given task; off-line robot programming develops a robot program in either an autonomous system with a high-level task planner and simulation, or a 2D graphical user interface linked to other system components. This paper presents a whole-hand interface for more easily performing robotic assembly tasks in the virtual environment. The interface is composed of both static hand shapes (states) and continuous hand motions (modes). Hand shapes are recognized as discrete states that trigger control signals and commands, and hand motions are mapped to the movements of a selected instance in real-time assembly. Hand postures are also used for specifying the alignment constraints and axis mapping of the hand-part coordinates. The basic virtual-hand functions are constructed through the states and modes used to develop the robotic assembly program. The assembling motion of the object is guided by the user, immersed in the environment, along a path such that no collisions will occur. The fine motion controlling the contact and final position/orientation is handled automatically by the system using prior knowledge of the parts and assembly reasoning. One assembly programming case using this interface is described in detail in the paper.

5.
The SENARIO project is developing a sensor-aided intelligent navigation system that provides high-level navigational aid to users of powered wheelchairs. The authors discuss new and improved technologies developed within SENARIO concerning task/path planning, sensing, and positioning for indoor mobile robots, as well as user interface issues. The autonomous mobile robot SENARIO supports semi- or fully autonomous navigation. In semi-autonomous mode, the system accepts typical motion commands through a voice-activated or standard joystick interface and supports robot motion with obstacle/collision avoidance features. Fully autonomous mode is a superset of semi-autonomous mode, with the additional ability to autonomously execute high-level go-to-goal commands. At its current stage, the project has succeeded in fully supporting semi-autonomous navigation, while experiments on the fully autonomous mode are very encouraging.

7.
A growing body of literature shows that endowing a mobile robot with semantic knowledge, and with the ability to reason from this knowledge, can greatly increase its capabilities. In this paper, we present a novel use of semantic knowledge: to encode information about how things should be, i.e. norms, and to enable the robot to infer deviations from these norms in order to generate goals to correct them. For instance, if a robot has semantic knowledge that perishable items must be kept in a refrigerator, and it observes a bottle of milk on a table, it will generate the goal to bring that bottle to a refrigerator. The key move is to encode norms in an ontology so that each norm violation results in a detectable inconsistency. A goal is then generated to bring the world back into a consistent state, and a planner is used to transform this goal into actions. Our approach provides a mobile robot with a limited form of goal autonomy: the ability to derive its own goals to pursue generic aims. We illustrate our approach in a full mobile robot system that integrates a semantic map, a knowledge representation and reasoning system, a task planner, and standard perception and navigation routines.
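The norm-violation-to-goal pipeline can be sketched in miniature. The paper encodes norms in an ontology with a reasoner; the dictionary-based check below is only an invented stand-in for that machinery:

```python
# Minimal sketch: a norm is a check over the world state; each violating
# object yields a corrective goal handed to a planner. Predicates invented.

NORMS = [
    # (norm name, objects violating it, corrective goal for one object)
    ("perishables-refrigerated",
     lambda w: [o for o, props in w.items()
                if props.get("perishable") and props.get("in") != "fridge"],
     lambda o: f"move {o} to fridge"),
]

def generate_goals(world):
    goals = []
    for name, violating, make_goal in NORMS:
        for obj in violating(world):
            goals.append(make_goal(obj))
    return goals

world = {
    "milk":   {"perishable": True,  "in": "table"},
    "book":   {"perishable": False, "in": "table"},
    "cheese": {"perishable": True,  "in": "fridge"},
}
print(generate_goals(world))  # ['move milk to fridge']
```

The ontology-based formulation is stronger than this: a violation shows up as a logical inconsistency detectable by a standard reasoner, rather than by hand-written checks.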

9.
This paper describes a situated reasoning architecture and a programming implementation for controlling robots in naturally changing environments. The reactive portion of the architecture produces reaction plans that exploit low-level competences as operators. The low-level competences include both obstacle avoidance heuristics and control-theoretic algorithms for generating and following a velocity/acceleration trajectory. Implemented in the GAPPS/REX situated automata programming language, robot goals can be specified logically and compiled into runtime virtual circuits that map sensor information into actuator commands in a fashion that allows for parallel execution. Detailed programs are described for controlling underwater vehicles being developed at the Woods Hole Oceanographic Institution, specifically the Remotely Piloted Vehicle (RPV), its successor, the Hylas, and eventually the Autonomous Benthic Explorer (ABE). Experiments with the RPV in a test tank are described in detail and will be duplicated with the Hylas. The experiments show that the robot performed both pilot-aided and autonomous exploration tasks while accommodating normal changes in the task environment. ABE programs are described to illustrate how the reaction plans can be used in tasks more complex than those in the RPV experiments. The ABE is required to gather scientific data from deep-ocean phenomena (e.g., thermal vents) which occur sporadically over long periods of time. A test tank simulation is described wherein the architecture is shown to easily generate robust vehicle control schemes which gather the required thermal vent data for a variety of vents of varying positions, velocities, and extents.

10.
Dynamic Behavior Sequencing for Hybrid Robot Architectures
Hybrid robot control architectures separate planning, coordination, and sensing/acting into distinct processing layers to provide autonomous robots with both deliberative and reactive functionality. This approach results in systems that perform well in goal-oriented and dynamic environments. Often, however, the interfaces and intents of each functional layer are tightly coupled and hand-coded, so any system change requires several changes in the other layers. This work presents the dynamic behavior hierarchy generation (DBHG) algorithm, which uses an abstract behavior representation to automatically build a behavior hierarchy for meeting a task goal. The generation of the behavior hierarchy occurs without knowledge of the low-level implementation or of the high-level goals the behaviors achieve. The algorithm's ability to automate behavior hierarchy generation is demonstrated on a robot task of target search, identification, and extraction. An additional simulated experiment, in which deliberation identifies which sensors to use to conserve power, shows that no system modifications or predefined task structures are required for the DBHG to dynamically build different behavior hierarchies.
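One way to picture hierarchy generation from abstract behavior descriptions is backward chaining over declared preconditions and effects. The behaviors and facts below are invented placeholders, and this sketch is not the DBHG algorithm itself:

```python
# Rough sketch: assemble a behavior sequence from abstract pre/effect
# descriptions alone, with no hand-coded inter-layer interfaces. Chain
# backwards from the goal, prepending any behavior supplying a missing fact.

BEHAVIORS = {
    # name: (preconditions, effects)
    "search":   (set(),                 {"target-located"}),
    "identify": ({"target-located"},    {"target-identified"}),
    "extract":  ({"target-identified"}, {"target-extracted"}),
}

def build_hierarchy(goal):
    plan, needed = [], {goal}
    while needed:
        fact = needed.pop()
        for name, (pre, eff) in BEHAVIORS.items():
            if fact in eff and name not in plan:
                plan.insert(0, name)   # suppliers run before consumers
                needed |= pre
    return plan

print(build_hierarchy("target-extracted"))  # ['search', 'identify', 'extract']
```

Because the chaining consults only the abstract descriptions, swapping in a different behavior with the same declared effects (e.g., a lower-power sensor behavior) changes the generated hierarchy without touching any other layer.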

11.
Deliberate control of an entertainment robot presents a special problem in balancing the requirement for intentional behavior with the existing mechanisms for autonomous action selection. It is proposed that the intentional biasing of activation in lower-level reactive behaviors is the proper mechanism for realizing such deliberative action. In addition, it is suggested that directed intentional bias can result in goal-oriented behavior without subsuming the underlying action selection used to generate natural behavior. This objective is realized through a structure called the intentional bus. The intentional bus serves as the interface between deliberative and reactive control by realizing high-level goals through the modulation of intentional signals sent to the reactive layer. A deliberative architecture that uses the intentional bus to realize planned behavior is described. In addition, it is shown how the intentional bus framework can be expanded to support the serialization of planned behavior by shifting from direct intentional influence for plan execution to attentional triggering of a learned action sequence. Finally, an implementation of this architecture, developed and tested on Sony's humanoid robot QRIO, is described.
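The core biasing idea can be illustrated in a few lines. The behavior names and numbers are invented; QRIO's actual action selection is far richer than this winner-take-all toy:

```python
# Hypothetical sketch: deliberation never picks the action directly. It adds
# a bias to each reactive behavior's activation, and the ordinary selection
# mechanism still makes the final choice, preserving natural behavior.

def select_action(activations, bias):
    """activations: behavior -> reactive activation level.
    bias: behavior -> deliberative bias signal (may be empty)."""
    return max(activations, key=lambda b: activations[b] + bias.get(b, 0.0))

reactive = {"wander": 0.6, "approach-ball": 0.5, "sit": 0.2}
print(select_action(reactive, {}))                      # unbiased, natural choice
print(select_action(reactive, {"approach-ball": 0.3}))  # goal-directed via bias
```

Because the bias only shifts activations, a sufficiently strong competing stimulus can still win, which is exactly the balance between intentional and autonomous behavior the abstract describes.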

12.
Explaining a knowledge system's conclusions requires, among other things, showing how it has satisfied the logical requirements of its problem-solving task. One report showed how task-based explanation worked in generic tasks (GT); other GT work showed how more complex tasks could be solved using multiple GT modules. Because each GT in a composite GT knowledge system plays a role defined by the higher-level goals it serves (i.e., a GT module is a method that achieves a particular goal of the higher-level task), it is possible to explain the high-level task in terms of the low-level GT explanations. We have introduced the idea of a logical structure for diagnosis, a task solved by the composition of several GTs, and its use in explanation. In this article we describe how these ideas work to generate explanations in a composite GT system.

13.
Previous research has shown that sensor-motor tasks in mobile robotics applications can be modelled automatically using NARMAX system identification, where the sensory perception of the robot is mapped to the desired motor commands using non-linear polynomial functions, resulting in a tight coupling between sensing and acting: the robot responds directly to the sensor stimuli, without internal states or memory. However, competences such as sequences of actions, where actions depend on each other, require memory and thus a representation of state. In these cases a simple direct link between sensory perception and motor commands may not be enough to accomplish the desired tasks. The contribution of this paper is to show how fundamental, simple NARMAX models of behaviour can be used in a bootstrapping process to generate complex behaviours that were so far beyond reach. We argue that as the complexity of the task increases, it is important to estimate the current state of the robot and integrate this information into the system identification process. To achieve this, we propose a novel method which relates distinctive locations in the environment to the state of the robot, using an unsupervised clustering algorithm. Once we estimate the current state of the robot accurately, we combine the state information with the perception of the robot through a bootstrapping method to generate more complex robot tasks: we obtain a polynomial model of the complex task as a function of predefined low-level sensor-motor controllers and raw sensory data. The proposed method has been used to teach Scitos G5 mobile robots a number of complex tasks, such as advanced obstacle avoidance or complex route learning.
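The bootstrapped model's structure can be sketched as follows. The two low-level controllers, the coefficients, and the polynomial terms are all invented placeholders; in the paper they would be identified from logged data, not written by hand:

```python
# Illustrative sketch: the complex task's motor command is a polynomial over
# the outputs of predefined low-level controllers, raw sensor readings, and
# an estimated discrete state. All numbers here are invented.

def avoid(sonar_left, sonar_right):        # low-level controller 1
    return 0.5 * (sonar_right - sonar_left)

def follow_wall(sonar_right):              # low-level controller 2
    return 0.8 * (1.0 - sonar_right)

def complex_task(sonar_left, sonar_right, state):
    u1 = avoid(sonar_left, sonar_right)
    u2 = follow_wall(sonar_right)
    # Second-order NARMAX-style polynomial; `state` is the clustered
    # location estimate folded in as an extra regressor.
    return 0.1 + 0.9 * u1 + 0.4 * u2 + 0.2 * state - 0.3 * u1 * u2

print(round(complex_task(0.6, 1.2, 1), 3))
```

The point of the structure is that the same sensor readings can produce different commands in different estimated states, which is what a memoryless sensor-to-motor mapping cannot do.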

14.
温遇华  卢桂章  赵新 《机器人》2003,25(4):331-334
This paper proposes a primitive-based intelligent control method for micro-operation robots. First, an object-oriented software architecture is presented. Then, building on the hierarchical structure of intelligent robots, the concept of primitive control is proposed: by re-encapsulating system modules as primitive functions, a robot task is abstracted as a sequence of task primitives, enabling primitive-based machine learning and teach-and-playback. Experiments show that the primitive-based control method can effectively improve the intelligence and degree of automation of micro-operation robots.
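The teach-and-playback idea can be sketched in miniature: a taught task is just a recorded sequence of primitive invocations that can be replayed. The primitive names and call interface below are invented for illustration:

```python
# Loose sketch: task primitives wrap system modules behind a uniform call
# interface; replaying a taught task means re-invoking the recorded sequence.

log = []  # stands in for commands actually sent to the hardware

PRIMITIVES = {
    "move-to": lambda pos: log.append(f"move-to {pos}"),
    "grip":    lambda: log.append("grip"),
    "release": lambda: log.append("release"),
}

def replay(task):
    """task: list of (primitive name, args) recorded during teaching."""
    for name, args in task:
        PRIMITIVES[name](*args)

taught = [("move-to", ((10, 5),)), ("grip", ()),
          ("move-to", ((0, 0),)), ("release", ())]
replay(taught)
print(log)
```

Because learning and teaching operate on the primitive sequence rather than on raw module calls, the same recorded task survives changes inside the wrapped modules.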

15.
Previously we presented a novel approach to programming a robot controller based on system identification and robot training techniques. The proposed method works in two stages: first, the programmer demonstrates the desired behaviour to the robot by driving it manually in the target environment. During this run, the sensory perception and the desired velocity commands of the robot are logged. Having thus obtained training data, we model the relationship between sensory readings and the motor commands of the robot using ARMAX/NARMAX models and system identification techniques. These produce linear or non-linear polynomials which can be formally analysed, as well as used in place of "traditional" robot control code. In this paper we focus our attention on how the mathematical analysis of NARMAX models can be used to understand the robot's control actions, to formulate hypotheses, and to improve the robot's behaviour. One main objective behind this approach is to avoid trial-and-error refinement of robot code. Instead, we seek to obtain a reliable design process, where program design decisions are based on the mathematical analysis of the model describing how the robot interacts with its environment to achieve the desired behaviour. We demonstrate this procedure through the analysis of a particular task in mobile robotics: door traversal.
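What makes identified polynomials analysable is that each term's contribution to the motor command can be read off directly. The toy model and coefficients below are invented, not identified from real door-traversal data:

```python
# Sketch: decompose a polynomial control model into per-term contributions,
# so hypotheses about the behaviour can be checked by inspection rather
# than by trial-and-error refinement of code.

def term_contributions(sensors, coeffs):
    """coeffs maps a tuple of sensor names (a monomial) to a coefficient;
    the empty tuple () is the constant term."""
    contrib = {}
    for monomial, c in coeffs.items():
        v = c
        for s in monomial:
            v *= sensors[s]
        contrib[monomial] = v
    return contrib

# Toy steering model: constant + linear left/right terms + one cross term.
model = {(): 0.05, ("left",): -0.4, ("right",): 0.4, ("left", "right"): 0.1}
sensors = {"left": 0.5, "right": 2.0}
c = term_contributions(sensors, model)
print(c)                 # which term dominates the steering command?
print(sum(c.values()))   # the total command the robot would execute
```

Inspecting the contributions shows, for instance, that the robot steers toward the side with more open space, the kind of hypothesis the paper's analysis makes precise.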

17.
The specification of multi-agent organisations is typically based on high-level modelling languages so as to simplify the task of software designers. Interpreting such high-level specifications as part of the organisation management infrastructure (OMI) is a difficult and cumbersome task; simpler and more efficient tools are needed. Based on primitives such as norms and obligations, we introduce in this paper a Normative Programming Language (NPL), a language dedicated to the development of normative programs. We present the interpreter for such a language and show how it can be used within an organisation management infrastructure. While designers and agents can still use a high-level organisational modelling language to specify and reason about the multi-agent organisation, the OMI interprets a simpler language. This is possible because the high-level specifications can be automatically translated into the simpler (normative) language. Our approach was used to develop an improved OMI for the Moise framework, as described in this paper. We also show how Moise's organisation modelling language (with primitives such as roles, groups, and goals) can be translated into NPL programs. Finally, we briefly describe how this has all been implemented on top of ORA4MAS, the distributed artifact-based organisation management infrastructure for Moise.
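The norm/obligation core of such an interpreter can be sketched as follows. This is not NPL's actual syntax or semantics, just an invented illustration of the activation idea:

```python
# Hedged sketch: a norm fires on a condition over the current facts and
# issues an obligation on an agent; the interpreter reports which
# obligations are currently active. Names and predicates are invented.

NORMS = [
    # (condition over facts, obliged agent, obliged goal)
    (lambda f: "goal-adopted" in f and "scheme-created" not in f,
     "coordinator", "create-scheme"),
]

def active_obligations(facts):
    """Return (agent, goal) pairs for every norm whose condition holds."""
    return [(agent, goal) for cond, agent, goal in NORMS if cond(facts)]

print(active_obligations({"goal-adopted"}))                    # active
print(active_obligations({"goal-adopted", "scheme-created"}))  # fulfilled
```

The appeal of the approach is that a high-level organisational specification (roles, groups, goals) compiles down to a flat set of such norm rules, which is all the OMI has to interpret.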
