Similar Documents
20 similar documents found (search time: 31 ms)
1.
The lack of a theory-based design methodology for mobile robot control programs means that control programs have to be developed through an empirical trial-and-error process. This can be costly, time-consuming and error-prone. In this paper we show how to develop a theory of robot–environment interaction that overcomes this problem. We show how to model a mobile robot’s task (so-called “task identification”) using non-linear polynomial models (NARMAX), which can subsequently be analysed formally using established mathematical methods. This provides an understanding of the underlying phenomena governing the robot’s behaviour. Apart from the paper’s main objective of formally analysing robot–environment interaction, the task identification process has further benefits, such as the fast and convenient cross-platform transfer of robot control programs (“Robot Java”), parsimonious task representations (memory issues) and very fast control code execution times.
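The abstract does not give the identification procedure itself, but the general flavour of a NARMAX-style task identification can be sketched as a polynomial regression over lagged sensor inputs. The sketch below uses synthetic data and scikit-learn as stand-ins; sensor names, lag depth and the toy steering rule are all assumptions for illustration, not the paper's setup.

```python
# Illustrative NARMAX-style polynomial fit: predict a motor command from
# lagged range readings with a second-order polynomial regressor.
# Data is synthetic; a real task-identification run would log the robot's
# sensor values and the demonstrated motor commands instead.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T, lags = 500, 2
sonar = rng.uniform(0.1, 2.0, size=(T, 3))            # three range sensors
steer = 0.5 / sonar[:, 0] - 0.5 / sonar[:, 2]          # demonstrated steering (toy rule)

# Build a lagged regressor matrix u(t), u(t-1), ... since NARMAX uses past inputs.
X = np.hstack([sonar[lags - k: T - k] for k in range(lags + 1)])
y = steer[lags:]

model = LinearRegression().fit(PolynomialFeatures(degree=2).fit_transform(X), y)
# The fitted coefficients form a transparent polynomial that can be inspected
# term by term, which is the point of the identification step.
print(model.coef_[:5])
```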

2.
Previous research has shown that sensor–motor tasks in mobile robotics applications can be modelled automatically, using NARMAX system identification, where the sensory perception of the robot is mapped to the desired motor commands using non-linear polynomial functions, resulting in a tight coupling between sensing and acting: the robot responds directly to the sensor stimuli without internal states or memory. However, competences such as sequences of actions, where actions depend on each other, require memory and thus a representation of state. In these cases a simple direct link between sensory perception and motor commands may not be enough to accomplish the desired tasks. The contribution of this paper is to show how fundamental, simple NARMAX models of behaviour can be used in a bootstrapping process to generate complex behaviours that were so far beyond reach. We argue that as the complexity of the task increases, it is important to estimate the current state of the robot and integrate this information into the system identification process. To achieve this we propose a novel method which relates distinctive locations in the environment to the state of the robot, using an unsupervised clustering algorithm. Once we estimate the current state of the robot accurately, we combine the state information with the perception of the robot through a bootstrapping method to generate more complex robot tasks: we obtain a polynomial model which expresses the complex task as a function of predefined low-level sensor–motor controllers and raw sensory data. The proposed method has been used to teach Scitos G5 mobile robots a number of complex tasks, such as advanced obstacle avoidance or complex route learning.
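The core idea of the state-estimation step can be sketched as: cluster raw scans into a handful of "distinctive places" and feed the place label alongside the raw perception into the polynomial model. The abstract does not name the clustering algorithm, so k-means is used here purely as a stand-in, and all data is synthetic.

```python
# Sketch: cluster range scans into "distinctive places", then fit a polynomial
# sensor-motor model that takes both raw perception and the estimated place.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
scans = rng.uniform(0.1, 3.0, size=(400, 8))          # 8-beam range scans (toy)
speeds = scans.mean(axis=1) * 0.2                      # toy demonstrated speed

places = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scans)
state = np.eye(4)[places]                              # one-hot "current place" estimate

X = np.hstack([scans, state])                          # perception + estimated state
Phi = PolynomialFeatures(degree=2).fit_transform(X)
model = LinearRegression().fit(Phi, speeds)
print(round(model.score(Phi, speeds), 3))
```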

3.
Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that, to date, it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component in the software design process. This paper proposes a method of dealing with these issues by (a) establishing task-achieving sensor–motor couplings through robot training, and (b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.

4.
Within mobile robotics, one of the most important relationships to consider when implementing robot control code is the one between the robot’s sensors and its motors. When implementing such a relationship, efficiency and reliability are of crucial importance. These often prove challenging due to the complex interaction between a robot and the environment in which it operates, frequently resulting in a time-consuming iterative process where control code is redeveloped and tested many times before an optimal controller is obtained. In this paper, we address this challenge with an alternative approach to control code generation, which first identifies the desired robot behaviour and then represents the sensor–motor task algorithmically through system identification using the NARMAX modelling methodology. The control code is generated by task demonstration: the sensory perception and velocities are logged, and the relationship between them is then modelled using system identification. This approach produces transparent control code in the form of non-linear polynomial equations that can be analysed mathematically to obtain formal statements regarding specific inputs and outputs. We demonstrate this approach to control code generation and analyse its performance in dynamic environments.

5.
The operation of an autonomous mobile robot in a semi-structured environment is a complex, usually non-linear and partly unpredictable process. Lacking a theory of robot–environment interaction that would allow robot control code to be designed based on theoretical analysis, roboticists still have to resort to trial-and-error methods in mobile robotics. The RobotMODIC project aims to develop a theoretical understanding of a robot’s interaction with its environment, and uses system identification techniques to identify the robot–task–environment system. In this paper, we present two practical examples of the RobotMODIC process: mobile robot self-localisation and mobile robot training to achieve door traversal. In both examples, a transparent mathematical function is obtained that maps inputs (sensory perception in both cases) to outputs (location and steering velocity, respectively). Analysis of the obtained models reveals further information about the way in which a task is achieved, the relevance of individual sensors, possible ways of obtaining more parsimonious models, etc.

6.
This paper outlines how a complex non-linear modelling problem can be decomposed into a set of simpler linear modelling problems. Local ARMAX models valid within certain operating regimes are interpolated to construct a global NARMAX (non-linear ARMAX) model. Knowledge of the system behaviour in terms of operating regimes is the primary basis for building such models; hence this should not be considered a pure black-box approach, but one that utilizes a limited amount of a priori system knowledge. It is shown that a large class of non-linear systems can be modelled in this way, and it is indicated how to decompose the system's range of operation into operating regimes. Standard system identification algorithms can be used to identify the NARMAX model, and several aspects of the system identification problem are discussed and illustrated by a simulation example.
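The operating-regime idea can be illustrated with a toy example: a few local linear models, each valid around its own operating point, blended by smooth validity functions into one global non-linear predictor. The Gaussian validity functions, regime centres and parameters below are invented for the sketch and are not taken from the paper.

```python
# Toy operating-regime model: two local linear models interpolated by
# normalised Gaussian validity functions to form a global non-linear map.
import numpy as np

def local_model(theta, u):
    """Local linear model y = a*u + b."""
    a, b = theta
    return a * u + b

regimes = [(-1.0, 0.5), (1.0, 0.5)]          # (centre, width) of each operating regime
thetas = [(0.2, -0.3), (1.5, 0.8)]           # parameters of each local model

def global_model(u):
    # Normalised Gaussian validity weights interpolate the local models.
    w = np.array([np.exp(-0.5 * ((u - c) / s) ** 2) for c, s in regimes])
    w /= w.sum()
    return sum(wi * local_model(t, u) for wi, t in zip(w, thetas))

for u in (-1.5, 0.0, 1.5):
    print(u, round(global_model(u), 3))
```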

7.
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented, with emphasis on its auditory perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.

8.
Simulation and teleoperation tools offer many advantages for training or learning in technological subjects, such as flexibility in timetables and student access to expensive and limited equipment. In this paper, we present a new system for simulating and teleoperating robot arms over the Internet, which allows many users to simulate and test positioning commands for a robot by means of a virtual environment, as well as execute the validated commands on a real remote robot of the same characteristics. The main feature of the system is its flexibility in managing different robots and in including new robot models and equipment. © 2005 Wiley Periodicals, Inc.

9.
We developed a robot patient for patient-transfer training that simulates a patient’s performance during transfer and enables nurses to practice their nursing skills on it. To realize the robot patient, we focused on designing its limb actions so that it can respond to nurses’ operations. RC servos and electromagnetic brakes were installed in the joints to enable the robot to simulate a patient’s limb actions, such as embracing and remaining standing. To enable the robot to automatically respond to nurses’ operations, an identification method for these operations was developed that used voice commands and the features of the limbs’ posture measured by angle sensors installed in the robot’s joints. The robot patient’s performance was examined in a control test in which four experienced nursing teachers performed patient transfer with the robot patient and a human-simulated patient. The results revealed that the robot patient could successfully simulate the actions of a patient’s limbs according to the nursing teachers’ operations and that it is suitable for nursing skill training.

10.
《Advanced Robotics》2013,27(2):229-244
In this paper a learning method is described which enables a conventional industrial robot to accurately execute the teach-in path in the presence of dynamical effects and high speed. After training, the system is capable of generating positional commands that, in combination with the standard robot controller, lead the robot along the desired trajectory. The mean path deviations are reduced by a factor of 20 for our test configuration. For low-speed motion, the learned controller's accuracy is in the range of the resolution of the positional encoders. The learned controller does not depend on specific trajectories. It acts as a general controller that can be used for non-recurring tasks as well as for sensor-based planned paths. For repetitive control tasks, accuracy can be increased even further. These improvements are achieved by a three-level structure estimating a simple process model, optimal a posteriori commands, and a suitable feedforward controller, the latter including neural networks for the representation of non-linear behaviour. The learning system is demonstrated in experiments with a Manutec R2 industrial robot. After training with only two sample trajectories, the learned control system is applied to other, completely different paths, which are also executed with high precision.
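A minimal sketch of the feedforward-learning idea: from one recorded run, learn a mapping from the desired trajectory (and its derivative) to the command correction that would have cancelled the observed tracking error, using a small neural network. The plant and data below are synthetic stand-ins, not the Manutec R2 setup or the paper's three-level structure.

```python
# Learn a feedforward correction for one joint from a single recorded run.
import numpy as np
from sklearn.neural_network import MLPRegressor

t = np.linspace(0, 2 * np.pi, 400)
desired = np.sin(t)                                   # teach-in path (one joint, toy)
actual = 0.95 * np.sin(t - 0.1)                       # lagging, scaled plant response
correction = desired - actual                         # a posteriori command correction

features = np.column_stack([desired, np.gradient(desired, t)])
ff = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
ff.fit(features, correction)

# At run time the learned feedforward term is added to the nominal command.
print(round(float(ff.predict(features[:1])[0]), 4))
```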

11.
When an agent program exhibits unexpected behaviour, a developer needs to locate the fault by debugging the agent’s source code. The process of fault localisation requires an understanding of how code relates to the observed agent behaviour. The main aim of this paper is to design a source-level debugger that supports single-step execution of a cognitive agent program. Cognitive agents execute a decision cycle in which they process events and derive a choice of action from their beliefs and goals. Current state-of-the-art debuggers for agent programs provide insight into how agent behaviour originates from this cycle, but less so into how it relates to the program code. As relating source code to generated behaviour is an important part of the debugging task, arguably a developer also needs to be able to suspend an agent program at code locations. We propose a design approach for single-step execution of agent programs that supports both code-based and cycle-based suspension of an agent program. This approach results in a concrete stepping diagram ready for implementation, and is illustrated by a diagram for both the Goal and Jason agent programming languages, together with a corresponding full implementation of a source-level debugger for Goal in the Eclipse development environment. The evaluation performed based on this implementation shows that agent programmers prefer a source-level debugger over a purely cycle-based debugger.
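The distinction between cycle-based and code-based suspension can be illustrated with a toy decision-cycle loop. The agent, its "rules" and the breakpoints below are hypothetical stand-ins and do not reproduce the GOAL or Jason semantics described in the paper.

```python
# Toy decision cycle that can pause either at cycle boundaries (cycle-based)
# or at individual rule locations (code-based suspension).
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=lambda: {"door_closed"})
    rules: tuple = ("if door_closed then open_door", "if door_open then enter")

def run(agent, mode="code", breakpoints=frozenset({1})):
    for cycle in range(3):
        if mode == "cycle":
            print(f"-- suspended at start of cycle {cycle}")
        for loc, rule in enumerate(agent.rules):
            if mode == "code" and loc in breakpoints:
                print(f"-- suspended at rule {loc}: {rule!r}")
            # ... evaluate the rule against the beliefs and select an action ...

run(Agent())
```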

12.
In this paper, a voice-activated robot arm with intelligence is presented. The robot arm is controlled with natural connected-speech input. The language input allows a user to interact with the robot in terms which are familiar to most people. The advantages of speech-activated robots are hands-free and fast data input operations. The proposed robot is capable of understanding the meaning of natural-language commands. After interpreting the voice commands, a series of control data for performing a task is generated. Finally, the robot actually performs the task. Artificial intelligence techniques are used to make the robot understand voice commands and act in the desired mode. It is also possible to control the robot using the keyboard input mode.

13.
We propose an approach to efficiently teach robots how to perform dynamic manipulation tasks in cooperation with a human partner. The approach utilises the human sensorimotor learning ability: the human tutor controls the robot through a multi-modal interface to make it perform the desired task. During the tutoring, the robot simultaneously learns the action policy of the tutor and over time gains full autonomy. We demonstrate our approach with an experiment in which we taught a robot how to perform a wood-sawing task with a human partner using a two-person cross-cut saw. The challenge of this experiment is that it requires precise coordination of the robot’s motion and compliance according to the partner’s actions. To transfer the sawing skill from the tutor to the robot we used Locally Weighted Regression for trajectory generalisation, and adaptive oscillators for adaptation of the robot to the partner’s motion.
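Locally Weighted Regression, named above as the trajectory-generalisation tool, fits a separate weighted linear model around each query point of the demonstration. The sketch below uses a synthetic one-dimensional demonstration and invented bandwidth; it only illustrates the technique, not the paper's sawing experiment.

```python
# Locally Weighted Regression over (phase, position) demonstration samples.
import numpy as np

phase = np.linspace(0, 1, 50)                       # normalised movement phase
demo = np.sin(2 * np.pi * phase) + 0.05 * np.random.default_rng(2).normal(size=50)

def lwr(query, x, y, bandwidth=0.1):
    # Gaussian weights centred on the query point, then weighted least squares.
    w = np.exp(-0.5 * ((x - query) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0] + beta[1] * query

print(round(lwr(0.25, phase, demo), 3))             # generalised position at phase 0.25
```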

14.
15.

Most of today’s mobile robots operate in environments prone to various unpredictable conditions. Programming or reprogramming such systems is time-consuming and requires significant effort by a number of experts. One solution to this problem is to enable the robot to learn from a human teacher through demonstrations or observations. This paper presents a novel approach that integrates the Learning from Demonstration methodology and chaotic bio-inspired optimization algorithms for the reproduction of desired motion trajectories. Demonstrations of the different trajectories to reproduce are gathered by a human teacher while teleoperating the mobile robot in the working environment. The learning (optimization) goal is to produce a sequence of mobile robot actuator commands that generates minimal error in the final robot pose. Four different chaotic methods are implemented, namely the chaotic Bat Algorithm, the chaotic Firefly Algorithm, chaotic Accelerated Particle Swarm Optimization and the newly developed chaotic Grey Wolf Optimizer (CGWO). In order to determine the best map for CGWO, this algorithm is tested on ten benchmark problems using ten well-known chaotic maps. Simulations compare the aforementioned algorithms in the reproduction of two complex motion trajectories of different length and shape. These tests also vary the swarm population size and the number of demonstration examples. A real-world experiment on a nonholonomic mobile robot in an indoor environment confirms the applicability of the proposed approach.
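A compact sketch of the optimiser family named above: a Grey Wolf Optimizer in which a logistic chaotic map drives the exploration parameter, minimising a toy "final pose error" over a command sequence. The fitness function, map choice and parameters are invented for illustration and are not the CGWO variant evaluated in the paper.

```python
# Grey Wolf Optimizer with a logistic chaotic map modulating the control parameter.
import numpy as np

rng = np.random.default_rng(3)

def fitness(cmds):
    # Toy surrogate for the final-pose error of an actuator command sequence.
    return np.sum((np.cumsum(cmds) - 1.0) ** 2)

dim, wolves, iters = 10, 12, 100
pop = rng.uniform(-1, 1, size=(wolves, dim))
chaos = 0.7                                          # logistic-map state

for t in range(iters):
    chaos = 4.0 * chaos * (1.0 - chaos)              # logistic chaotic map
    a = 2.0 * chaos * (1 - t / iters)                # chaotically modulated parameter
    order = np.argsort([fitness(w) for w in pop])
    alpha, beta, delta = pop[order[:3]]              # three leading wolves
    for i in range(wolves):
        new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - pop[i])
        pop[i] = new / 3.0

print(round(min(fitness(w) for w in pop), 4))
```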


16.
Introducing Climax: A novel strategy to a tri-wheel spiral robot
This paper describes a prototype and analytical studies of a tri-wheel spiral mobile robot. The robot can reach any desired point with a sequence of rotational movements. The robot has a simple actuation mechanism, consisting of three wheels mounted on a platform with their axes fixed at 120° and a motor connected to each. Our approach introduces several new features, such as a simple repeated sequence of commands for steering and spiral motion, as opposed to direct movement to the target. The mathematical model of the robot is discussed, and a steering method is developed to achieve full motion capabilities. For a number of missions, it is shown experimentally that the executed motion agrees well with the proposed motion planning.

17.
This paper is concerned with the problem of reactive navigation for a mobile robot in an unknown cluttered environment. We will define reactive navigation as a mapping between sensory data and commands. Building a reactive navigation system means providing such a mapping. It can come from a family of predefined functions (like potential field methods) or it can be built using ‘universal’ approximators (like neural networks). In this paper, we will consider another ‘universal’ approximator: fuzzy logic. We will explain how to choose the rules using a behaviour decomposition approach. It is possible to build a controller that works quite well, but the classical problems are still there: oscillations and local minima. Finally, we will conclude that learning is necessary for a robust navigation system, and that fuzzy logic is an easy way to put some initial knowledge into the system to avoid learning from zero.
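A minimal sketch of the fuzzy mapping idea: triangular membership functions on a front range reading and two hand-written rules ("near, turn hard"; "far, go straight"), defuzzified by a weighted average. The membership shapes, rule consequents and sensor are all invented for illustration and are not the rule base of the paper.

```python
# Tiny fuzzy reactive controller: front distance (m) -> turn rate (rad/s).
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def turn_rate(front_dist):
    near = tri(front_dist, 0.0, 0.2, 1.0)     # degree to which the obstacle is near
    far = tri(front_dist, 0.5, 1.5, 3.0)      # degree to which the path is clear
    # Rule consequents: near -> 1.2 rad/s avoidance turn, far -> 0.0 rad/s.
    num = near * 1.2 + far * 0.0
    den = near + far
    return num / den if den > 0 else 0.0

for d in (0.2, 0.8, 1.8):
    print(d, round(turn_rate(d), 3))
```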

18.
Neurophysiological experiments have shown that many motor commands in living systems are generated by coupled neural oscillators. To coordinate the oscillators and achieve a desired phase relation at a desired frequency, the intrinsic frequencies of the component oscillators and the coupling strengths between them must be chosen appropriately. In this paper we propose learning models for coupled neural oscillators to acquire the desired intrinsic frequencies and coupling weights, based on the instruction of the desired phase pattern or on an evaluation function. The abilities of the learning rules were examined by computer simulations, including adaptive control of the hopping height of a hopping robot. The proposed learning rule takes a simple, Hebbian-like form. Studies on such learning models for neural oscillators will aid in the understanding of the learning mechanism of motor commands in living bodies.
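The setting can be illustrated with two coupled phase oscillators whose intrinsic frequency is adapted online until the pair settles at a desired phase difference. The error-driven update below is a simple stand-in chosen for the sketch, not the exact learning rule proposed in the paper, and all constants are invented.

```python
# Two coupled phase oscillators; adapt one intrinsic frequency so the locked
# phase difference converges to a desired target.
import numpy as np

dt, K = 0.01, 2.0
omega1, omega2 = 6.0, 5.0                 # intrinsic frequencies (rad/s)
phi1, phi2 = 0.0, 0.0
target = np.pi / 2                        # desired phase difference
eta = 0.5                                 # learning rate

for _ in range(20000):
    diff = phi1 - phi2
    phi1 += dt * (omega1 + K * np.sin(phi2 - phi1))
    phi2 += dt * (omega2 + K * np.sin(phi1 - phi2))
    # Error-driven adaptation of omega2 towards the value that holds the target.
    omega2 += dt * eta * np.sin(diff - target)

print(round((phi1 - phi2) % (2 * np.pi), 3), round(omega2, 3))
```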

19.
Scaffolding is a process of transferring learned skills to new and more complex tasks through arranged experience in open-ended development. In this paper, we propose a developmental learning architecture that enables a robot to transfer skills acquired in early learning settings to later, more complex task settings. We show that a basic mechanism enabling this transfer is sequential priming combined with attention, which is also the driving mechanism for classical conditioning, secondary conditioning, and instrumental conditioning in animal learning. A major challenge of this work is that training and testing must be conducted in the same program operational mode through online, real-time interactions between the agent and the trainers. In contrast with former modeling studies, the proposed architecture does not require the programmer to know the tasks to be learned, and the environment is uncontrolled. The possible perceptions and actions, including the actual number of classes, are not available until the programming is finished and the robot starts to learn in the real world. Thus, a predesigned task-specific symbolic representation is not suited for such an open-ended developmental process. Experimental results on a robot are reported in which the trainer shaped the behaviors of the agent interactively, continuously, and incrementally through verbal commands and other sensory signals, so that the robot learns new and more complex sensorimotor tasks by transferring sensorimotor skills learned in earlier periods of open-ended development.

20.
《Advanced Robotics》2013,27(1-2):207-232
In this paper, we provide the first demonstration that a humanoid robot can learn to walk directly by imitating a human gait obtained from motion capture (mocap) data, without any prior information about its dynamics model. Programming a humanoid robot to perform an action (such as walking) that takes into account the robot's complex dynamics is a challenging problem. Traditional approaches typically require highly accurate prior knowledge of the robot's dynamics and environment in order to devise complex (and often brittle) control algorithms for generating a stable dynamic motion. Training using human mocap is an intuitive and flexible approach to programming a robot, but direct use of mocap data usually results in dynamically unstable motion. Furthermore, optimization using high-dimensional mocap data in the humanoid full-body joint space is typically intractable. We propose a new approach to tractable imitation-based learning in humanoids without a robot's dynamic model. We represent kinematic information from human mocap in a low-dimensional subspace and map motor commands in this low-dimensional space to sensory feedback to learn a predictive dynamic model. This model is used within an optimization framework to estimate optimal motor commands that satisfy the initial kinematic constraints as closely as possible while generating dynamically stable motion. We demonstrate the viability of our approach by providing examples of dynamically stable walking learned from mocap data using both a simulator and a real humanoid robot.
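The dimensionality-reduction step can be sketched as: project mocap joint-angle frames into a low-dimensional subspace and learn a one-step predictive model there. PCA is used below as a stand-in reduction, the "mocap" data is synthetic, and the stability optimisation described in the abstract is not reproduced.

```python
# Project synthetic "mocap" joint angles into a low-dimensional subspace and
# fit a one-step predictive model in that subspace.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
t = np.linspace(0, 4 * np.pi, 600)
# 20 "joint angles" driven by a few shared underlying signals (walking-like).
latents = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t)])
joints = latents @ rng.normal(size=(3, 20)) + 0.01 * rng.normal(size=(600, 20))

pca = PCA(n_components=3).fit(joints)
z = pca.transform(joints)                            # low-dimensional pose

# One-step predictive model z(t+1) = f(z(t)) learned in the subspace.
dyn = LinearRegression().fit(z[:-1], z[1:])
print(round(dyn.score(z[:-1], z[1:]), 3))
```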
