Similar Documents
20 similar documents found (search time: 46 ms)
1.
Mobile robotics has achieved notable progress; however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surroundings. In particular, identifying indoor scenes, such as an office or a kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object category classifiers to associate low-level visual features to objects, and contextual relations to associate objects to scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer vision based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus of attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot that is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.
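The object-to-scene step of such a generative hierarchical model can be sketched as naive-Bayes-style inference: scenes generate objects, so detected objects act as evidence for the scene category. A minimal sketch follows; the scene names, object names, and all probability values are illustrative placeholders, not values from the paper:

```python
# Hypothetical sketch: infer P(scene | detected objects) from a prior over
# scenes and per-scene object likelihoods. All numbers are invented.

def scene_posterior(detected_objects, prior, likelihood):
    """Naive-Bayes-style posterior over scenes given detected object labels."""
    scores = {}
    for scene, p_scene in prior.items():
        p = p_scene
        for obj in detected_objects:
            # Unseen objects get a small floor probability to avoid zeroing out
            p *= likelihood[scene].get(obj, 1e-6)
        scores[scene] = p
    total = sum(scores.values())
    return {scene: p / total for scene, p in scores.items()}

prior = {"office": 0.5, "kitchen": 0.5}
likelihood = {
    "office":  {"desk": 0.8, "monitor": 0.7, "stove": 0.01},
    "kitchen": {"desk": 0.05, "monitor": 0.02, "stove": 0.9},
}
post = scene_posterior(["desk", "monitor"], prior, likelihood)
```

Detecting a desk and a monitor shifts nearly all posterior mass to "office"; the real model additionally ties low-level visual features to object categories and uses richer contextual relations.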

2.
Advanced Robotics, 2013, 27(13): 1565–1582
Autonomous agents that act in the real world utilizing sensory input greatly rely on the ability to plan their actions and to transfer these skills across tasks. The majority of path-planning approaches for mobile robots, however, solve the current navigation problem from scratch, given the current and goal configuration of the robot. Consequently, these approaches yield highly efficient plans for the specific situation, but the computed policies typically do not transfer to other, similar tasks. In this paper, we propose to apply techniques from statistical relational learning to the path-planning problem. More precisely, we propose to learn relational decision trees as abstract navigation strategies from example paths. Relational abstraction has several interesting and important properties. First, it allows a mobile robot to imitate navigation behavior shown by users or by optimal policies. Second, it yields comprehensible models of behavior. Finally, a navigation policy learned in one environment naturally transfers to unknown environments. In several experiments with real robots and in simulated runs, we demonstrate that our approach yields efficient navigation plans. We show that our system is robust against observation noise and can outperform hand-crafted policies.

3.
We address the issue of human–robot cohabitation in smart environments. In particular, the presence of humans in a robot's work space has a profound influence on how the latter should plan its actions. We propose the use of human-aware planning, an approach in which the robot exploits the capabilities of a sensor-rich environment to obtain information about the (current and future) activities of the people in the environment, and plans its tasks accordingly. Here, we formally describe the planning problem behind our approach, we analyze its complexity and we detail the algorithm of our planner. We then show two application scenarios that could benefit from the techniques described. The first scenario illustrates the applicability of human-aware planning in a domestic setting, while the second one illustrates its use for a robotic helper in a hospital. Finally, we present a five-hour test run in a smart home equipped with real sensors, where a cleaning robot has been deployed and where a human subject is acting. This test run in a real setting is meant to demonstrate the feasibility of our approach to human–robot interaction.
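One ingredient of human-aware planning is scheduling robot tasks around predicted human activity. The toy scheduler below is only a loose illustration of that idea, not the paper's planner (whose formal problem and complexity analysis are far richer); the intervals, duration, and horizon are invented:

```python
# Hypothetical sketch: find the earliest start time for a robot task that does
# not overlap predicted human-occupancy intervals. Times are in hours.

def earliest_slot(duration, busy_intervals, horizon=24):
    """Return the earliest start so [start, start+duration) avoids all
    (start, end) busy intervals, or None if nothing fits before the horizon."""
    t = 0
    for s, e in sorted(busy_intervals):
        if t + duration <= s:      # the task fits entirely before this interval
            return t
        t = max(t, e)              # otherwise, wait until the interval ends
    return t if t + duration <= horizon else None

# Human predicted in the room during hours 0-3 and 4-6; cleaning takes 2 hours.
start = earliest_slot(2, [(0, 3), (4, 6)])
```

With those predictions the robot would start cleaning at hour 6, after the second occupancy interval; a real human-aware planner would also replan as new sensor information about the humans arrives.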

4.
The application of robotics to a mobility aid for the elderly blind (cited 3 times: 0 self-citations, 3 by others)
In this paper we describe a novel application of mobile robot technology to the construction of a mobility aid for the frail blind. The robot mobility aid discussed in this paper physically supports the person walking behind it and provides obstacle avoidance to ensure safer travel. As in all Assistive Technology projects, a clear understanding of the user's needs is vital, and we summarise the main user requirements for our device. We then describe the mechanical design, the user interface, and the software and hardware architectures of our robot. We describe the results of evaluations carried out by both mobility experts and users, and finally we outline our plans for further development.

5.
In this paper, we describe the development of a mobile robot that performs unsupervised learning to recognize an environment from action sequences. We call this novel recognition approach action-based environment modeling (AEM). Most studies on environment recognition have tried to build precise geometric maps with highly sensitive, global sensors. However, such precise, global information may be hard to obtain in a real environment, and may be unnecessary for recognizing an environment. Furthermore, unsupervised learning is necessary for recognition in an unknown environment without the help of a teacher. Thus, we attempt to build a mobile robot that performs unsupervised learning to recognize environments with low-sensitivity, local sensors. The mobile robot is behavior-based and performs wall-following in enclosures (called rooms). The sequences of actions executed in each room are then transformed into environment vectors for self-organizing maps. Learning proceeds without a teacher, and the robot becomes able to identify rooms. Moreover, we develop a method to identify environments independent of the starting point using a partial action sequence. We have fully implemented the system on a real mobile robot and conducted experiments to evaluate its performance. The results show that environment recognition works well and that our method is robust to noisy environments.
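The key step of AEM — turning an action sequence from wall-following into an environment vector that can be matched against known rooms — can be sketched with action n-gram histograms and cosine similarity. The abstract uses self-organizing maps for this; the nearest-neighbour matching, the action alphabet (F = forward step, L = left turn), and the traces below are simplifications invented for illustration:

```python
from collections import Counter
import math

def env_vector(actions, n=2):
    """Encode an action sequence as a histogram of action n-grams."""
    return Counter(tuple(actions[i:i + n]) for i in range(len(actions) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse histogram vectors."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

# Known rooms described by wall-following traces (F = forward, L = left turn).
rooms = {
    "square_room": env_vector(list("FFLFFLFFLFFL")),  # frequent corners
    "corridor":    env_vector(list("FFFFFFLFFFFFFL")),  # long straight walls
}
# Partial trace observed from an arbitrary starting point along the wall.
observed = env_vector(list("FFLFFLFFLFF"))
best = max(rooms, key=lambda r: cosine(observed, rooms[r]))
```

Because n-gram histograms ignore where the trace starts, a partial sequence still matches the right room, mirroring the abstract's start-point-independent identification.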

6.
Model-free execution monitoring in behavior-based robotics (cited 1 time: 0 self-citations, 1 by others)
In the near future, autonomous mobile robots are expected to help humans by performing service tasks in many different areas, including personal assistance, transportation, cleaning, mining, and agriculture. In order to manage these tasks in a changing and partially unpredictable environment without the aid of humans, the robot must have the ability to plan its actions and to execute them robustly and safely. The robot must also have the ability to detect when the execution does not proceed as planned and to correctly identify the causes of the failure. An execution monitoring system allows the robot to detect and classify these failures. Most current approaches to execution monitoring in robotics are based on the idea of predicting the outcomes of the robot's actions by using some sort of predictive model and comparing the predicted outcomes with the observed ones. In contrast, this paper explores the use of model-free approaches to execution monitoring, that is, approaches that do not use predictive models. In this paper, we show that pattern recognition techniques can be applied to realize model-free execution monitoring by classifying observed behavioral patterns into normal or faulty execution. We investigate the use of several such techniques and verify their utility in a number of experiments involving the navigation of a mobile robot in indoor environments.
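The core idea — classifying observed behavioral patterns as normal or faulty without any predictive model — can be illustrated with a nearest-centroid classifier over simple features of a sensor window. The features (mean and variance of wheel speed), centroid values, and fault scenario below are invented for illustration; the paper evaluates several real pattern recognition techniques:

```python
import math

def features(wheel_speeds):
    """Simple behavioral features from a window of wheel-speed readings."""
    n = len(wheel_speeds)
    mean = sum(wheel_speeds) / n
    var = sum((x - mean) ** 2 for x in wheel_speeds) / n
    return (mean, var)

def classify(sample, centroids):
    """Nearest-centroid classification of a feature vector."""
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))

# Centroids would be learned from labeled executions; these are made up.
centroids = {
    "normal": (0.5, 0.01),  # steady forward motion
    "faulty": (0.1, 0.2),   # stalling / oscillation (e.g., a blocked wheel)
}
reading = features([0.48, 0.52, 0.50, 0.49, 0.51])
label = classify(reading, centroids)
```

A steady speed window lands near the "normal" centroid, while an oscillating one lands near "faulty" — the monitor never predicts what the speeds should have been, it only recognizes the observed pattern.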

7.
In the current article, we address the problem of constructing radiofrequency identification (RFID)-augmented environments for mobile robots and the issues related to creating user interfaces for efficient remote navigation with a mobile robot in such environments. First, we describe an RFID-based positioning and obstacle identification solution for remotely controlled mobile robots in indoor environments. In the robot system, an architecture specifically developed by the authors for remotely controlled robotic systems was tested in practice. Second, using the developed system, three techniques for displaying information about the position and movements of a remote robot to the user were compared. The experimental visualization techniques displayed the position of the robot on an indoor floor plan augmented with (1) a video view from a camera attached to the robot, (2) display of nearby obstacles (identified using RFID technology) on the floor plan, and (3) both features. In the experiment, test subjects controlled the mobile robot through predetermined routes as quickly as possible while avoiding collisions. The results suggest that the developed RFID-based environment and the remote control system can be used for efficient control of mobile robots. The results from the comparison of the visualization techniques showed that the technique without a camera view (2) was the fastest, and the number of steering motions made was smallest using this technique, but it also had the highest need for physical human interventions. The technique with both additional features (3) was subjectively preferred by the users. The similarities and differences between the current results and those found in the literature are discussed.

8.
We present a robot-assisted wayfinding system for the visually impaired in structured indoor environments. The system consists of a mobile robotic guide and small passive RFID sensors embedded in the environment. The system is intended for use in indoor environments, such as office buildings, supermarkets and airports. We describe how the system was deployed in two indoor environments and evaluated by visually impaired participants in a series of pilot experiments. We analyze the system’s successes and failures and outline our plans for future research and development.

9.
Advanced Robotics, 2013, 27(9): 925–950
Considering that intelligent robotic systems work in a real environment, it is important that they themselves have the ability to determine their own internal conditions. Therefore, we consider it necessary to pay attention to the diagnosis of such intelligent systems and to construct a system for the self-diagnosis of an autonomous mobile robot. Autonomous mobile systems must have a self-contained diagnostic system, and this imposes restrictions on building such a system on a mobile robot. In this paper, we describe an internal state sensory system and a method for diagnosing conditions in an autonomous mobile robot. The prototype of our internal sensory system consists of voltage sensors, current sensors and encoders. We show experimental results of the diagnosis using an omnidirectional mobile robot and the developed system. We also propose a method for coping with faulty internal conditions using internal sensory information. We focus on the functional units in a single robot system and examine a method in which the faulty condition is categorized into three levels. The measures taken to cope with the faulty condition are set for each level to enable the robot to continue to execute the task. We show experimental results using an omnidirectional mobile robot with a self-diagnosis system and our proposed method.

10.
Test Case Generation as an AI Planning Problem (cited 6 times: 0 self-citations, 6 by others)
While Artificial Intelligence techniques have been applied to a variety of software engineering applications, the area of automated software testing remains largely unexplored. Yet, test cases for certain types of systems (e.g., those with command language interfaces and transaction based systems) are similar to plans. We have exploited this similarity by constructing an automated test case generator with an AI planning system at its core. We compared the functionality and output of two systems, one based on Software Engineering techniques and the other on planning, for a real application: the StorageTek robot tape library command language. From this, we showed that AI planning is a viable technique for test case generation and that the two approaches are complementary in their capabilities.
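The similarity between test cases and plans can be made concrete with a tiny planner: a search from an initial device state to a goal state yields a command sequence, which is exactly a test case for a command-language interface. The state model and command names below are invented stand-ins, not the StorageTek command language:

```python
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first planner: find a command sequence from initial to goal.
    actions maps a command name to a function state -> next state (or None
    if the command is inapplicable). The returned plan is a test case."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, cmds = frontier.popleft()
        if state == goal:
            return cmds
        for name, fn in actions.items():
            nxt = fn(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, cmds + [name]))
    return None  # goal unreachable

# Toy tape-library-like model: state = (tape_location, drive_busy).
actions = {
    "mount":    lambda s: ("drive", True)  if s == ("shelf", False) else None,
    "dismount": lambda s: ("shelf", False) if s == ("drive", True)  else None,
}
test_case = plan(("shelf", False), ("drive", True), actions)
```

Varying the initial and goal states then generates a family of valid command sequences, which is the essence of using a planner as a test case generator.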

11.
Two major themes of our research include the creation of mobile robot systems that are robust and adaptive in rapidly changing environments, and the view of integration as a basic research issue. Where reasonable, we try to use the same representations to allow different components to work more readily together and to allow better and more natural integration of and communication between these components. In this paper, we describe our most recent work in integrating mobile robot exploration, localization, navigation, and planning through the use of a common representation, evidence grids.

12.
This paper describes a collection of optimization algorithms for achieving dynamic planning, control, and state estimation for a bipedal robot designed to operate reliably in complex environments. To make challenging locomotion tasks tractable, we describe several novel applications of convex, mixed-integer, and sparse nonlinear optimization to problems ranging from footstep placement to whole-body planning and control. We also present a state estimator formulation that, when combined with our walking controller, permits highly precise execution of extended walking plans over non-flat terrain. We describe our complete system integration and experiments carried out on Atlas, a full-size hydraulic humanoid robot built by Boston Dynamics, Inc.

13.
We describe a hybrid expert diagnosis-advisory system developed for small and medium enterprises. The Performance, Development, Growth (PDG) system is completely implemented and fully operational, and has been successfully used on real-world data from SMEs for several years. Although our system contains a great deal of the domain knowledge and expertise that is a hallmark of AI systems, it was not designed using the symbolic techniques traditionally used to implement such systems. We explain why this is so and discuss how the PDG system relates to expert systems, decision support systems, and general applications in AI. We also present an experimental evaluation of the system and identify developments currently under way and our plans for the future.

14.
Advanced Robotics, 2013, 27(1-2): 209–232
We describe an implementation integrating a complete spoken dialogue system with a mobile robot, which a human can direct to specific locations, ask for information about its status and supply information about its environment. The robot uses an internal map for navigation, and communicates its current orientation and accessible locations to the dialogue system using a topological map as interface. We focus on linguistic and inferential aspects of the human–robot communication process. The result is a novel approach using a principled semantic theory combined with techniques from automated deduction applied to a mobile robot platform. Due to the abstract level of the dialogue system, it is easily portable to other environments or applications.

15.
In this paper we show our work on the use of fuzzy behaviors in the field of autonomous mobile robots. We address here how we use learning techniques to efficiently coordinate the conflicts between the different behaviors that compete with each other to take control of the robot. We use fuzzy rules to perform such fusion. These rules can be set using expert knowledge, but as this can be a complex task, we show how to automatically define them using genetic algorithms. We also describe the working environment, which includes a custom programming language (named BG) based on the multi‐agent paradigm. Finally, some results related to simple goods‐delivery tasks in an unknown environment are presented. © 2002 John Wiley & Sons, Inc.

16.
Localization is a key issue for a mobile robot, in particular in environments where a globally accurate positioning system, such as GPS, is not available. In these environments, accurate and efficient robot localization is not a trivial task, as an increase in accuracy usually leads to an impoverishment in efficiency and vice versa. Active perception appears as an appealing way to improve the localization process by increasing the richness of the information acquired from the environment. In this paper, we present an active perception strategy for a mobile robot provided with a visual sensor mounted on a pan-tilt mechanism. The visual sensor has a limited field of view, so the goal of the active perception strategy is to use the pan-tilt unit to direct the sensor to informative parts of the environment. To achieve this goal, we use a topological map of the environment and a Bayesian non-parametric estimation of robot position based on a particle filter. We slightly modify the regular implementation of this filter by including an additional step that selects the best perceptual action using Monte Carlo estimations. We understand the best perceptual action as the one that produces the greatest reduction in uncertainty about the robot position. We also consider in our optimization function a cost term that favors efficient perceptual actions. Previous works have proposed active perception strategies for robot localization, but mainly in the context of range sensors, grid representations of the environment, and parametric techniques, such as the extended Kalman filter. Accordingly, the main contributions of this work are: i) development of a sound strategy for active selection of perceptual actions in the context of a visual sensor and a topological map; ii) real-time operation using a modified version of the particle filter and Monte Carlo based estimations; iii) implementation and testing of these ideas using simulations and a real case scenario. Our results indicate that, in terms of accuracy of robot localization, the proposed approach decreases mean average error and standard deviation with respect to a passive perception scheme. Furthermore, in terms of efficiency, the active scheme is able to operate in real time without adding a relevant overhead to the regular robot operation.
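Selecting the perceptual action that most reduces position uncertainty can be sketched as minimizing expected posterior entropy over a weighted particle set, with a cost term added as in the abstract. The two-hypothesis belief, the observation models, and the cost weights below are toy values for illustration, not the paper's setup:

```python
import math

def entropy(weights):
    """Shannon entropy of a (possibly unnormalized) weight vector."""
    z = sum(weights)
    return -sum((w / z) * math.log(w / z) for w in weights if w > 0)

def expected_entropy(weights, obs_likelihoods):
    """Expected posterior entropy after one perceptual action.
    obs_likelihoods: for each possible observation, P(obs | particle i)."""
    h = 0.0
    for lik in obs_likelihoods:
        post = [w * l for w, l in zip(weights, lik)]
        p_obs = sum(post)           # marginal probability of this observation
        if p_obs > 0:
            h += p_obs * entropy(post)
    return h

def best_action(weights, actions, cost, lam=0.1):
    """Pan-tilt action minimizing expected entropy plus a weighted cost term."""
    return min(actions,
               key=lambda a: expected_entropy(weights, actions[a]) + lam * cost[a])

weights = [0.5, 0.5]  # belief split between two position hypotheses
actions = {
    "look_door": [[0.9, 0.1], [0.1, 0.9]],  # discriminates the hypotheses
    "look_wall": [[0.5, 0.5], [0.5, 0.5]],  # uninformative view
}
cost = {"look_door": 1.0, "look_wall": 1.0}
choice = best_action(weights, actions, cost)
```

Pointing the camera at the discriminative landmark wins because it is expected to collapse the bimodal belief; with unequal costs, the lambda term trades information gain against efficiency as the abstract describes.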

17.
Evacuation leaders and/or equipment provide route and exit information for people and guide them to the expected destinations, which could make crowd evacuation more efficient in case of emergency. The purpose of this paper is to provide an overview of recent advances in guided crowd evacuation. Different guided crowd evacuation approaches are classified according to guidance approaches and technologies. A comprehensive analysis and comparison of crowd evacuation with static signage, dynamic signage, trained leaders, mobile devices, mobile robots and wireless sensor networks is presented from a single-guidance-mode perspective. In addition, the different evacuation guidance systems that use high-tech means such as advanced intelligent monitoring techniques, AI techniques, computer technology and intelligent inducing algorithms are reviewed from a system's perspective. Future research directions in the area of crowd evacuation are also discussed.

18.
We describe a novel agent architecture based on the idea that cognition is imagined interaction, i.e. that cognitive tasks are performed by interacting with an imaginary world. We demonstrate the architecture by its application to a subsumption-based mobile robot. The robot's interactive abilities include exploration of an environment and goal-directed navigation within a previously explored environment. Imagination enables the robot to read and make use of maps, allowing it to reason about unfamiliar environments as well.

19.
Vector field SLAM is a framework for localizing a mobile robot in an unknown environment by learning the spatial distribution of continuous signals such as those emitted by WiFi or active beacons. In our previous work we showed that this approach is capable of keeping a robot localized in small- to medium-sized areas, e.g. in a living room, where four continuous signals of an active beacon are measured (Gutmann et al., 2012). In this article we extend the method to larger environments up to the size of a complete home by deploying more signal sources to cover the expanded area. We first analyze the complexity of vector field SLAM with respect to area size and number of signals, and then describe an approximation that divides the localization map into decoupled sub-maps to keep memory and run-time requirements low. We also describe a method for re-localizing the robot in a previously mapped vector field. This enables a robot to resume navigation after it has been kidnapped, or after it has been paused and restarted. The re-localization method is evaluated in a standard test environment and shows an average position accuracy of 10 to 35 cm with a localization success rate of 96 to 99%. Additional experimental results from running the system in houses of up to 125 m² demonstrate the performance of our approach. The presented methods are suitable for commercial low-cost products including robots for autonomous and systematic floor cleaning.
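The re-localization idea — matching a current signal reading against the learned spatial distribution of signal values — can be sketched as a nearest-match lookup over map cells. The grid cells, four-signal vectors, and the simple squared-error match below are illustrative stand-ins for the paper's learned vector field and probabilistic re-localization:

```python
# Hypothetical sketch: each map cell stores the learned strengths of four
# continuous signals; re-localize by finding the best-matching cell.

def relocalize(measurement, signal_map):
    """Return the map cell whose learned signal vector best matches the
    current reading (minimum squared error)."""
    def sq_err(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(signal_map, key=lambda cell: sq_err(measurement, signal_map[cell]))

signal_map = {  # invented learned values at three grid cells
    (0, 0): (1.0, 0.2, 0.1, 0.3),
    (1, 0): (0.6, 0.6, 0.2, 0.3),
    (0, 1): (0.3, 0.2, 0.9, 0.4),
}
pos = relocalize((0.58, 0.62, 0.18, 0.28), signal_map)
```

A kidnapped robot can thus recover a position hypothesis from a single reading; the real system refines this within its SLAM filter, and the sub-map decomposition keeps each such lookup confined to a small portion of the home.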

20.
Autonomous navigation in unstructured environments is a complex task and an active area of research in mobile robotics. Unlike urban areas with lanes, road signs, and maps, the environment around our robot is unknown and unstructured. Such an environment requires careful examination as it is random, continuous, and the number of perceptions and possible actions are infinite. We describe a terrain classification approach for our autonomous robot based on Markov Random Fields (MRFs) on fused 3D laser and camera image data. Our primary data structure is a 2D grid whose cells carry information extracted from sensor readings. All cells within the grid are classified and their surface is analyzed in regard to negotiability for wheeled robots. Knowledge of our robot’s egomotion allows fusion of previous classification results with current sensor data in order to fill data gaps and regions outside the visibility of the sensors. We estimate egomotion by integrating information of an IMU, GPS measurements, and wheel odometry in an extended Kalman filter. In our experiments we achieve a recall ratio of about 90% for detecting streets and obstacles. We show that our approach is fast enough to be used on autonomous mobile robots in real time.
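Labeling grid cells with an MRF couples each cell's own sensor evidence with agreement among neighbouring cells. A minimal sketch of this coupling is iterated conditional modes (ICM) on a 2D grid; the unary costs, the two labels (0 = street, 1 = obstacle), and the smoothing weight are invented, and the paper's actual inference method is not specified in the abstract:

```python
def icm_smooth(unary, labels, beta=0.8, sweeps=3):
    """Iterated conditional modes on a 4-connected grid MRF.
    unary[y][x][k]: data cost of label k at cell (x, y); lower is better.
    Disagreement with each neighbour's current label costs beta."""
    h, w, k = len(unary), len(unary[0]), len(unary[0][0])
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                def cost(lbl):
                    c = unary[y][x][lbl]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != lbl:
                            c += beta  # pairwise smoothness penalty
                    return c
                labels[y][x] = min(range(k), key=cost)
    return labels

# 3x3 grid: street everywhere, with one noisy obstacle reading in the middle.
unary = [[[0.1, 0.9], [0.1, 0.9], [0.1, 0.9]],
         [[0.1, 0.9], [0.8, 0.2], [0.1, 0.9]],
         [[0.1, 0.9], [0.1, 0.9], [0.1, 0.9]]]
labels = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # initial per-cell argmin labels
result = icm_smooth(unary, labels)
```

The isolated obstacle reading is overruled by its four street neighbours, which is the practical benefit of the MRF over classifying each cell independently.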
