Similar Literature
A total of 20 similar records were found (search time: 62 ms)
1.
The lack of a theory-based design methodology for mobile robot control programs means that control programs have to be developed through an empirical trial-and-error process. This can be costly, time-consuming and error prone. In this paper we show how to develop a theory of robot–environment interaction, which would overcome the above problem. We show how we can model a mobile robot’s task (so-called “task identification”) using non-linear polynomial models (NARMAX), which can subsequently be formally analysed using established mathematical methods. This provides an understanding of the underlying phenomena governing the robot’s behaviour. Apart from the paper’s main objective of formally analysing robot–environment interaction, the task identification process has further benefits, such as the fast and convenient cross-platform transfer of robot control programs (“Robot Java”), parsimonious task representations (memory issues) and very fast control code execution times.
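To illustrate the kind of controller such task identification produces, the sketch below evaluates a low-order NARMAX-style polynomial that maps a few sonar readings and the previous command to a steering output. The sensor layout, model terms and coefficients are invented for illustration and are not taken from the paper.

# Minimal sketch of a NARMAX-style polynomial controller.
# Structure, sensor indices and coefficients are illustrative only.

def narmax_steering(sonar, prev_turn):
    """Map three sonar readings (m) and the previous turn command
    to a new turn rate (rad/s) via a non-linear polynomial."""
    s_left, s_front, s_right = sonar
    return (0.12
            - 0.35 * s_left
            + 0.33 * s_right
            - 0.08 * s_front * prev_turn
            + 0.05 * s_left * s_right)

if __name__ == "__main__":
    turn = 0.0
    for reading in [(0.8, 2.0, 1.5), (0.4, 1.8, 1.6)]:
        turn = narmax_steering(reading, turn)
        print(f"sonar={reading} -> turn={turn:.3f} rad/s")

Because the controller is a plain polynomial, it is cheap to execute and can be transcribed to any target platform, which is the point behind the "Robot Java" and execution-time remarks above.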

2.
Developing robust and reliable control code for autonomous mobile robots is difficult, because the interaction between a physical robot and the environment is highly complex, subject to noise and variation, and therefore partly unpredictable. This means that to date it is not possible to predict robot behaviour based on theoretical models. Instead, current methods to develop robot control code still require a substantial trial-and-error component in the software design process. This paper proposes a method of dealing with these issues by (a) establishing task-achieving sensor-motor couplings through robot training, and (b) representing these couplings through transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour. We demonstrate the viability of this approach by teaching a mobile robot to track a moving football and subsequently modelling this task using the NARMAX system identification technique.

3.
Previous research has shown that sensor–motor tasks in mobile robotics applications can be modelled automatically, using NARMAX system identification, where the sensory perception of the robot is mapped to the desired motor commands using non-linear polynomial functions, resulting in a tight coupling between sensing and acting — the robot responds directly to the sensor stimuli without having internal states or memory. However, competences such as sequences of actions, where actions depend on each other, require memory and thus a representation of state. In these cases a simple direct link between sensory perception and the motor commands may not be enough to accomplish the desired tasks. The contribution of this paper is to show how fundamental, simple NARMAX models of behaviour can be used in a bootstrapping process to generate complex behaviours that were so far beyond reach. We argue that as the complexity of the task increases, it is important to estimate the current state of the robot and integrate this information into the system identification process. To achieve this we propose a novel method which relates distinctive locations in the environment to the state of the robot, using an unsupervised clustering algorithm. Once we estimate the current state of the robot accurately, we combine the state information with the perception of the robot through a bootstrapping method to generate more complex robot tasks: we obtain a polynomial model which represents the complex task as a function of predefined low-level sensor–motor controllers and raw sensory data. The proposed method has been used to teach Scitos G5 mobile robots a number of complex tasks, such as advanced obstacle avoidance or complex route learning.
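The abstract does not specify which clustering algorithm is used, so the sketch below assumes a simple k-means over logged range-sensor snapshots purely to show how "distinctive locations" can act as a coarse state estimate; the paper's actual features and algorithm may differ.

# Sketch: cluster logged sensor snapshots into "distinctive locations"
# that serve as a coarse state estimate for bootstrapping.
import numpy as np

def kmeans(data, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - centres, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = data[labels == j].mean(axis=0)
    return centres, labels

def estimate_state(scan, centres):
    """Return the index of the nearest location cluster for a new scan."""
    return int(np.argmin(np.linalg.norm(centres - scan, axis=1)))

if __name__ == "__main__":
    logged_scans = np.random.default_rng(1).uniform(0.2, 4.0, size=(200, 8))
    centres, _ = kmeans(logged_scans, k=4)
    print("state =", estimate_state(logged_scans[0], centres))

The estimated state index could then be fed, together with the raw sensor values, into the polynomial model as an additional regressor.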

4.
To address the problem of environment cognition for mobile robots, and inspired by the firing of hippocampal place cells in rats at specific locations, a dynamic growing and pruning place cells-based cognitive map model (DGP-PCCMM) is constructed, enabling the robot to build a cognitive map in a self-organising manner while interacting with its environment. Initially, the cognitive map consists of the place cells activated at the starting point; as the interaction with the environment proceeds, place cells activated at other locations are gradually obtained and the connections between them are established, so that the cognitive map grows dynamically. If the robot detects a new obstacle in a previously visited area, a dynamic pruning mechanism is used to update the cognitive map. In addition, a place-cell sequence planning algorithm is proposed, which takes the constructed cognitive map as input and plans a sequence of place cells to realise robot navigation. To verify the correctness and effectiveness of the model, Tolman's classic rat detour experiment is reproduced. The experimental results show that the proposed model enables the robot to dynamically build and update a cognitive map while interacting with the environment, and to reproduce the Tolman rat detour experiment in a preliminary form. Furthermore, comparative experiments with a quadtree grid map and the dynamic window approach, as well as a discussion of other cognitive map models, are presented. The results demonstrate the advantages of the proposed method in terms of the conciseness and completeness of the constructed map and its adaptability to dynamic obstacles.
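As a rough schematic of the growing/pruning idea (not the DGP-PCCMM equations), the sketch below keeps a graph of place-cell centres: a new cell is created when the robot moves beyond the radius of all existing cells, consecutive cells are linked, and a link is pruned when an obstacle is found between two visited places. The radius, data structures and 2-D positions are illustrative assumptions.

# Schematic sketch of a growing/pruning cognitive map.
import math

class CognitiveMap:
    def __init__(self, radius=0.5):
        self.cells = []          # list of (x, y) place-cell centres
        self.edges = set()       # undirected connections between cells
        self.radius = radius

    def update(self, pos, prev_cell=None):
        """Activate (or create) the place cell covering pos; link it."""
        for i, c in enumerate(self.cells):
            if math.dist(c, pos) < self.radius:
                idx = i
                break
        else:
            self.cells.append(pos)
            idx = len(self.cells) - 1
        if prev_cell is not None and prev_cell != idx:
            self.edges.add(frozenset((prev_cell, idx)))
        return idx

    def prune(self, a, b):
        """Remove the link a-b when a new obstacle blocks it."""
        self.edges.discard(frozenset((a, b)))

if __name__ == "__main__":
    cmap, cell = CognitiveMap(), None
    for p in [(0, 0), (0.6, 0), (1.2, 0), (1.2, 0.6)]:
        cell = cmap.update(p, cell)
    cmap.prune(1, 2)   # obstacle discovered between cells 1 and 2
    print(len(cmap.cells), "cells,", len(cmap.edges), "links")

Planning a place-cell sequence then amounts to a graph search over the surviving links.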

5.
A method for the remote control of a space robot is proposed for the case of large delays in the transmission of control signals from the Earth to the local robot control system and in the feedback signals. The method uses a model of the space robot and its current environment, with gravity conditions simulated at the ground control center. In this model environment, the operator carries out the required actions by controlling the space robot model in master-slave mode, using an arm with six degrees of freedom that reflects the interaction force between the model robot’s working tool and the models of the objects of the environment. The arm movement trajectory and the time variation of the reflected interaction force vector form the programme for the local space robot control system, which executes them upon reception from the ground control center. The robot’s possible erroneous actions, caused by the inevitable inaccuracy of the environment model, are compensated by the proposed method of programmed trajectory correction. To generate the correction signals, additional information received from different sensors is used; these sensors can be installed on both the model and the space robot itself. This information includes data on the mutual position of the robot’s working tool and the models of the objects of the environment, as well as on the interaction forces between them. The paper presents a detailed theoretical justification of the proposed approach and experimental results that confirm the theoretical conclusions.
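A minimal sketch of the correction idea, under strong simplifying assumptions (one axis, a proportional force-error gain invented for the example): the on-board controller replays the programmed position/force trajectory recorded on the ground model and shifts the commanded position by the discrepancy between the measured and recorded contact forces.

# Sketch of programmed-trajectory correction for a delayed space robot.
# Gains, signal names and the 1-D setting are illustrative assumptions.

def corrected_setpoint(q_prog, f_prog, f_meas, k_f=0.002):
    """Shift the programmed position by an amount proportional to the
    force error, a simple compliance-style correction."""
    return q_prog - k_f * (f_meas - f_prog)

if __name__ == "__main__":
    programme = [(0.10, 1.0), (0.12, 2.0), (0.14, 5.0)]  # (position m, force N)
    measured_forces = [1.0, 3.5, 9.0]                     # real contact is stiffer
    for (q, f_ref), f in zip(programme, measured_forces):
        print(f"q_prog={q:.3f} -> q_cmd={corrected_setpoint(q, f_ref, f):.4f}")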

6.
Traditionally, the main goal of teleoperation has been to successfully achieve a given task as if performing the task in local space. An emerging and related requirement is to also match the subjective sensation or the user experience of the remote environment, while maintaining reasonable task performance. This concept is often called “presence” or “(experiential) telepresence,” which is informally defined as “the sense of being in a mediated environment.” In this paper, haptic feedback is considered as an important element for providing improved presence and reasonable task performance in remote navigation. An approach for using haptic information to “experientially” teleoperate a mobile robot is described. Haptic feedback is computed from the range information obtained by a sonar array attached to the robot, and delivered to a user's hand via a haptic probe. This haptic feedback is provided to the user, in addition to stereoscopic images from a forward-facing stereo camera mounted on the mobile robot. The experiment with a user population in a real-world environment showed that haptic feedback significantly improved both task performance and user-felt presence. When considering user-felt presence, no interaction among haptic feedback, image resolution, and stereoscopy was observed, that is, haptic feedback was effective, regardless of the fidelity of visual elements. Stereoscopic images also significantly improved both task performance and user-felt presence, but high-resolution images only significantly improved user-felt presence. When considering task performance, however, it was found that there was an interaction between haptic feedback and stereoscopy, that is, stereoscopic images were only effective when no force feedback was applied. According to the multiple regression analysis, haptic feedback was a higher contributing factor to the improvement in performance and presence than image resolution and stereoscopy.
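The abstract says the haptic force is computed from sonar ranges but not how; the sketch below assumes a simple inverse-distance repulsive law over a sonar ring, which is one common way to turn range data into a force for a haptic probe. The force law, gains and sensor layout are assumptions, not the paper's mapping.

# Sketch: convert a ring of sonar ranges into a 2-D repulsive force that
# pushes the operator's hand away from nearby obstacles.
import math

def haptic_force(ranges, angles, d_max=2.0, gain=1.5):
    """ranges (m) and angles (rad) of sonar returns -> force vector (N)."""
    fx = fy = 0.0
    for r, a in zip(ranges, angles):
        if 0.0 < r < d_max:
            magnitude = gain * (1.0 / r - 1.0 / d_max)
            fx -= magnitude * math.cos(a)   # push away from the obstacle
            fy -= magnitude * math.sin(a)
    return fx, fy

if __name__ == "__main__":
    angles = [math.radians(a) for a in range(0, 360, 45)]
    ranges = [2.5, 1.8, 0.6, 1.2, 3.0, 3.0, 3.0, 2.2]
    print("force =", haptic_force(ranges, angles))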

7.
8.
《Advanced Robotics》2013,27(8):743-758
Cognitive activity in intelligent robotic systems has often been modeled as a set of communicating intelligent distributed agents or modules. Some examples in this field are blackboard architectures, hybrid models or subsumption architectures. The rapid progress of communication technology offers the possibility of distributing computation not only across different processes but also across a network of computers. This both results in greater available computational power and allows the robot to merge with the environment it operates in. In suitable intelligent buildings, a mobile robot may open doors, turn on/off lights or even avoid obstacles based not only on its sensors and actuators but on the interaction with other robotic entities. In addition, the range of robot interactions is now only limited by the network, and thus the robot can operate remotely on the environment. Similarly, users can issue commands to remote robots and receive feedback in real time. In this paper we propose a global approach to distributing a robotic system over a computer network. The approach is named ETHNOS (Expert Tribe in a Hybrid Network Operating System) because it is based on a novel operating system we developed specifically for distributed intelligent robotics. The paper focuses on its characteristics that make it well suited for network robotics applications. It also illustrates an example of a real application in the field of mobile robotics.

9.
Previously, we presented a novel approach to program a robot controller based on system identification and robot training techniques. The proposed method works in two stages: first, the programmer demonstrates the desired behaviour to the robot by driving it manually in the target environment. During this run, the sensory perception and the desired velocity commands of the robot are logged. Having thus obtained training data, we model the relationship between sensory readings and the motor commands of the robot using ARMAX/NARMAX models and system identification techniques. These produce linear or non-linear polynomials which can be formally analysed, as well as used in place of traditional robot control code. In this paper we focus our attention on how the mathematical analysis of NARMAX models can be used to understand the robot’s control actions, to formulate hypotheses and to improve the robot’s behaviour. One main objective behind this approach is to avoid trial-and-error refinement of robot code. Instead, we seek to obtain a reliable design process, where program design decisions are based on the mathematical analysis of the model describing how the robot interacts with its environment to achieve the desired behaviour. We demonstrate this procedure through the analysis of a particular task in mobile robotics: door traversal.
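One thing "mathematical analysis" of such transparent models can mean in practice is differentiating the polynomial to see how strongly each sensor influences the motor command in a given state. The toy model and sensitivity check below are invented for illustration; they are not the door-traversal model from the paper.

# Sketch: symbolic sensitivity analysis of a transparent polynomial
# controller, using a toy stand-in model.
import sympy as sp

s_left, s_right = sp.symbols("s_left s_right")
turn = 0.1 - 0.4 * s_left + 0.38 * s_right + 0.06 * s_left * s_right

# Sensitivity of the turn command to each sensor at a doorway-like state
state = {s_left: 0.5, s_right: 0.55}
for s in (s_left, s_right):
    print(s, "sensitivity:", sp.diff(turn, s).subs(state))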

10.
11.
Human–robot interaction during general service tasks in home or retail environments has proven challenging, partly because (1) robots lack high-level context-based cognition and (2) humans cannot intuit the perception state of robots as they can for other humans. To solve these two problems, we present a complete robot system that was given the highest evaluation score at the Customer Interaction Task of the Future Convenience Store Challenge at the World Robot Summit 2018, and which implements several key technologies: (1) hierarchical spatial concept formation for general robot task planning and (2) a mixed reality interface to enable users to intuitively visualize the current state of the robot's perception and naturally interact with it. The results obtained during the competition indicate that the proposed system allows both non-expert operators and end users to achieve human–robot interactions in customer service environments. Furthermore, we describe a detailed scenario including employee operation and customer interaction, which serves as a set of requirements for service robots and a road map for development. The system integration and task scenario described in this paper should be helpful for groups facing customer interaction challenges and looking for a successfully deployed base to build on.

12.
Behaviour-based models have been widely used to represent mobile robotic systems, which operate in uncertain dynamic environments and combine information from several sensory sources. In such models, complex mobile robotic applications are specified by combining deliberative goal-oriented planning with reactive sensor-driven operations. Consequently, the design of mobile robotic architectures requires the combination of time-constrained activities with deliberative, time-consuming components. Furthermore, the temporal requirements of the reactive activities are variable and depend on the environment (e.g. recognition processes) and/or on application parameters (e.g. process frequencies depend on robot speed). In this paper, a real-time mobile robotic architecture is proposed to cope with the functional and variable temporal characteristics of behaviour-based mobile robotic applications. Run-time flexibility is a main feature of the architecture, supporting the variability of the temporal characteristics of the workload. The system has to adapt to the environmental conditions, by adjusting robot control parameters (e.g. speed) or the system load (e.g. computation time), and take actions accordingly. This concerns the ability of the system to select the appropriate activity to execute depending on the available time, and to change its behaviour depending on the environmental information. This flexibility is achieved through the definition of a real-time task model and the design of adaptation mechanisms that regulate the reactive load. The improvement of the robot quality of service (QoS) is another important aspect discussed in the paper. The architecture incorporates a quality-of-service manager (QoSM) that dynamically monitors, analyses and improves the robot's performance. Depending on the internal state, the environment and the objectives, the robot's performance requirements vary (e.g. when the environment is overloaded, global map processes generating high-quality maps are required). The QoSM receives the performance requirements of the robot and, by adjusting the reactive load, makes available the slack time needed to schedule the most suitable deliberative processes, thus fulfilling the robot's QoS. Moreover, the deliberative load can be scheduled by different heuristic strategies that provide answers of varying quality.
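A minimal sketch of the slack-time idea described above, with utilisation figures, process names and the single-processor budget all invented for the example: once the reactive load has been regulated, the remaining utilisation decides which deliberative process (if any) can be admitted.

# Sketch: pick the highest-quality deliberative process that fits into
# the slack left over by the reactive load.

def schedule_deliberative(reactive_utilisation, candidates):
    """candidates: list of (name, utilisation, quality); return the
    highest-quality process that fits in the remaining budget."""
    slack = 1.0 - reactive_utilisation
    feasible = [c for c in candidates if c[1] <= slack]
    return max(feasible, key=lambda c: c[2], default=None)

if __name__ == "__main__":
    deliberative = [("coarse_map", 0.15, 1), ("global_map_hq", 0.45, 3)]
    for load in (0.50, 0.80):
        print(f"reactive load {load:.2f} ->", schedule_deliberative(load, deliberative))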

13.
To fully utilize the information from the sensors of a mobile robot, this paper proposes a new sensor-fusion technique where the sample data set obtained at a previous instant is properly transformed and fused with the current data sets to produce a reliable estimate for navigation control. Exploration of an unknown environment is an important task for the new generation of mobile service robots. The mobile robots may navigate by means of a number of monitoring systems such as the sonar-sensing system or the visual-sensing system. Notice that in conventional fusion schemes, the measurement is dependent on the current data sets only. Therefore, more sensors are required to measure a given physical parameter or to improve the reliability of the measurement. However, in this approach, instead of adding more sensors to the system, the temporal sequences of the data sets are stored and utilized for this purpose. The basic principle is illustrated by examples and the effectiveness is proved through simulations and experiments. The newly proposed STSF (space and time sensor fusion) scheme is applied to the navigation of a mobile robot in an environment using landmarks, and the experimental results demonstrate the effective performance of the system. © 2004 Wiley Periodicals, Inc.
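The sketch below illustrates the general space-and-time fusion idea in one dimension: a landmark range measured at the previous step is transformed into the current frame using the odometry, and then combined with the new measurement by inverse-variance weighting. The 1-D setting, the noise model and the weighting rule are simplifying assumptions, not the STSF formulation itself.

# Sketch: propagate an old measurement through the robot's motion and
# fuse it with the current one, weighted by their uncertainties.

def fuse(prev_meas, prev_var, odom_shift, odom_var, curr_meas, curr_var):
    pred = prev_meas - odom_shift            # old reading seen from the new pose
    pred_var = prev_var + odom_var           # motion adds uncertainty
    w = curr_var / (pred_var + curr_var)
    fused = w * pred + (1.0 - w) * curr_meas
    fused_var = pred_var * curr_var / (pred_var + curr_var)
    return fused, fused_var

if __name__ == "__main__":
    print(fuse(prev_meas=3.0, prev_var=0.04, odom_shift=0.5, odom_var=0.01,
               curr_meas=2.45, curr_var=0.09))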

14.
The use of a symbolic model of the spatial environment becomes crucial for a mobile robot that is intended to operate optimally and intelligently in indoor scenarios. Constructing such a model involves important problems that are not solved completely at present. One is called anchoring, which involves maintaining a correct dynamic correspondence between the real world and the symbols in the model. The other problem is adaptation: among the numerous possible models that could be constructed for representing a given environment, optimization involves selecting the one that improves the robot's operation as much as possible. To cope with both problems, in this paper, we propose a framework that allows an indoor mobile robot to automatically learn a symbolic model of its environment and to optimize it over time, through an evolutionary algorithm, with respect to changes in both the environment and the robot's operational needs. To cope efficiently with the large amounts of information that the real world provides, we use abstraction, which also helps in improving task planning. Our experiments demonstrate that the proposed framework is suitable for providing an indoor mobile robot with a good symbolic model and adaptation capabilities.

15.
Within mobile robotics, one of the most dominant relationships to consider when implementing robot control code is the one between the robot’s sensors and its motors. When implementing such a relationship, efficiency and reliability are of crucial importance. The latter aspects often prove challenging due to the complex interaction between a robot and the environment in which it exists, frequently resulting in a time-consuming iterative process where control code is redeveloped and tested many times before obtaining an optimal controller. In this paper, we address this challenge by implementing an alternative approach to control code generation, which first identifies the desired robot behaviour and represents the sensor-motor task algorithmically through system identification using the NARMAX modelling methodology. The control code is generated by task demonstration, where the sensory perception and velocities are logged and the relationship that exists between them is then modelled using system identification. This approach produces transparent control code through non-linear polynomial equations that can be mathematically analysed to obtain formal statements regarding specific inputs/outputs. We demonstrate this approach to control code generation and analyse its performance in dynamic environments.

16.
In this paper, we present a novel cognitive framework allowing a robot to form memories of relevant traits of its perceptions and to recall them when necessary. The framework is based on two main principles: on the one hand, we propose an architecture inspired by current knowledge in human memory organisation; on the other hand, we integrate such an architecture with the notion of context, which is used to modulate the knowledge acquisition process when consolidating memories and forming new ones, as well as with the notion of familiarity, which is employed to retrieve proper memories given relevant cues. Although much research has been carried out that exploits Machine Learning approaches to provide robots with internal models of their environment (including objects and occurring events therein), we argue that such approaches may not be the right direction to follow if long-term, continuous knowledge acquisition is to be achieved. As a case study scenario, we focus on both robot–environment and human–robot interaction processes. In the case of robot–environment interaction, a robot performs pick-and-place movements using the objects in the workspace, at the same time observing their displacement on a table in front of it, and progressively forms memories defined as relevant cues (e.g. colour, shape or relative position) in a context-aware fashion. As far as human–robot interaction is concerned, the robot can recall specific snapshots representing past events using both sensory information and contextual cues upon request by humans.

17.
18.
This paper presents a novel semi-autonomous navigation strategy designed for low-throughput interfaces. A mobile robot (e.g. intelligent wheelchair) proposes the most probable action, as analyzed from the environment, to a human user who can either accept or reject the proposition. In the case of refusal, the robot will propose another action, until both entities agree on what needs to be done. In an unknown environment, the robotic system first extracts features so as to recognize places of interest where a human–robot interaction should take place (e.g. crossings). Based on the local topology, relevant actions are then proposed, the user providing answers by means of a button or a brain–computer interface (BCI). Our navigation strategy is successfully tested both in simulation and with a real robot, and a feasibility study for the use of a BCI confirms the potential of such an interface.
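A minimal sketch of the propose/confirm loop: at a detected place of interest the robot ranks the candidate actions and proposes them one by one until the user accepts. The action names, the ranking and the yes/no callback are illustrative; in the actual system the answer would come from a button press or a BCI classifier.

# Sketch: propose actions in order of estimated probability and return
# the first one the user accepts, or None if all are rejected.

def negotiate_action(ranked_actions, user_accepts):
    for action in ranked_actions:
        if user_accepts(action):
            return action
    return None

if __name__ == "__main__":
    proposals = ["turn_right", "go_straight", "turn_left"]   # most probable first
    answers = {"turn_right": False, "go_straight": True}
    chosen = negotiate_action(proposals, lambda a: answers.get(a, False))
    print("agreed action:", chosen)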

19.
In field environments it is not usually possible to provide robots in advance with valid geometric models of their task and environment. The robot or robot teams need to create these models by scanning the environment with their sensors. Here, an information-based iterative algorithm to plan the robot's visual exploration strategy is proposed to enable it to most efficiently build 3D models of its environment and task. The method assumes a mobile robot (or vehicle) with vision sensors mounted at a manipulator end-effector (eye-in-hand system). This algorithm efficiently repositions the systems' sensing agents using an information-theoretic approach and fuses sensory information using physical models to yield a geometrically consistent environment map. This is achieved by utilizing a metric derived from Shannon's information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. This map is then distributed among the agents using an information-based relevant data reduction scheme. This method is particularly well suited to unstructured environments, where sensor uncertainty is significant. Issues addressed include model-based multiple sensor data fusion, and uncertainty and vehicle suspension motion compensation. Simulation results show the effectiveness of this algorithm.
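To make the Shannon-based metric concrete, the sketch below scores each candidate sensing pose by the total entropy of the map cells it would observe and picks the pose expected to reduce uncertainty the most. The occupancy grid, the visibility sets and the candidate poses are invented for the example; the paper's map representation and metric details may differ.

# Sketch: entropy-based selection of the next sensing pose.
import math

def cell_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_view(candidates, occupancy):
    """candidates: {pose: [indices of cells visible from that pose]}."""
    score = {pose: sum(cell_entropy(occupancy[c]) for c in cells)
             for pose, cells in candidates.items()}
    return max(score, key=score.get), score

if __name__ == "__main__":
    occupancy = [0.5, 0.5, 0.9, 0.1, 0.5, 0.05]      # cell occupancy probabilities
    candidates = {"pose_A": [0, 1, 2], "pose_B": [3, 4, 5]}
    print(best_view(candidates, occupancy))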

20.
For the last decade, we have been developing a vision-based architecture for mobile robot navigation. Using our bio-inspired model of navigation, robots can perform sensory-motor tasks in real time in unknown indoor as well as outdoor environments. We address here the problem of autonomous incremental learning of a sensory-motor task, demonstrated by an operator guiding a robot. The proposed system allows for semisupervision of task learning and is able to adapt the environmental partitioning to the complexity of the desired behavior. A real dialogue based on actions emerges from the interactive teaching. The interaction leads the robot to autonomously build a precise sensory-motor dynamics that approximates the behavior of the teacher. The usability of the system is highlighted by experiments on real robots, in both indoor and outdoor environments. Accuracy measures are also proposed in order to evaluate the learned behavior as compared to the expected behavioral attractor. These measures, used first in a real experiment and then in a simulated experiment, demonstrate how a real interaction between the teacher and the robot influences the learning process.

