Similar Literature
10 similar records found (search time: 353 ms)
1.
This work addresses the problem of single-robot coverage and exploration in an environment with the goal of finding a specific object previously known to the robot. As limited time is a constraint of interest, we cannot search from an infinite number of points. Thus, we propose a multi-objective approach for such search tasks in which we first search for a good set of positions at which to place the robot's sensors in order to acquire information from the environment and locate the desired object. Given the interesting properties of the Generalized Voronoi Diagram, we restrict the candidate search points to this roadmap. We redefine the problem of finding these search points as a multi-objective optimization problem. NSGA-II is used as the search engine, and ELECTRE I is applied as a decision-making tool to choose among the trade-off alternatives. We also solve a Chinese Postman Problem to optimize the path followed by the robot in order to visit the computed search points. Simulation results show a comparison between the solution found by our method and solutions produced by other known approaches. Finally, a real-robot experiment indicates the applicability of our method in practical scenarios.
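The core of the multi-objective step above is selecting candidate viewpoints that trade off competing criteria. A minimal sketch of the non-dominated-sorting idea at the heart of NSGA-II is shown below; the two objectives (information gain to maximize, travel cost to minimize) and all numeric values are hypothetical, not the paper's formulation.

```python
# Illustrative sketch (not the paper's implementation): extract the
# Pareto front of candidate search points under two objectives --
# information gain (maximized) and travel cost (minimized) -- as in
# NSGA-II's non-dominated sorting step.

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one.
    Each point is (gain, cost); gain is maximized, cost is minimized."""
    no_worse = a[0] >= b[0] and a[1] <= b[1]
    strictly_better = a[0] > b[0] or a[1] < b[1]
    return no_worse and strictly_better

def pareto_front(points):
    """Return the non-dominated subset of candidate (gain, cost) points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate viewpoints as (information gain, travel cost):
candidates = [(0.9, 5.0), (0.7, 2.0), (0.6, 4.0), (0.9, 3.0), (0.4, 1.0)]
front = pareto_front(candidates)
# (0.9, 5.0) and (0.6, 4.0) are dominated by (0.9, 3.0); the rest remain.
```

A decision-making step such as ELECTRE I would then pick one point from `front` per region of interest.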

2.
A new method for rotation-invariant template matching in gray-scale images is proposed. It is based on the use of gradient information in the form of orientation codes as the feature both for approximating the rotation angle and for matching. Orientation-code-based matching is robust for searching for objects in cluttered environments, even under illumination fluctuations resulting from shadowing or highlighting. We use a two-stage framework to realize rotation-invariant template matching: in the first stage, histograms of orientation codes are employed to approximate the rotation angle of the object, and in the second stage, matching is performed by rotating the object template by the estimated angle. Matching in the second stage is performed only at positions with higher similarity scores in the first stage, thereby pruning out insignificant locations to speed up the search. Experiments with real-world scenes demonstrate the rotation and brightness invariance of the proposed method for object search.
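The first stage can be pictured as aligning two circular histograms of quantized gradient directions. The sketch below is illustrative only: the bin count and the L1 dissimilarity measure are assumptions, not the paper's exact parameters.

```python
# Minimal sketch of the first-stage idea: quantize gradient directions
# into N orientation codes, build histograms for the template and a
# scene window, and estimate the rotation angle as the circular shift
# that best aligns the two histograms.

N = 16  # number of orientation codes quantizing [0, 360) degrees (assumed)

def best_shift(h_template, h_window):
    """Return the circular shift (in bins) minimizing the L1 distance
    between the window histogram and the template histogram."""
    best, best_d = 0, float("inf")
    for s in range(N):
        d = sum(abs(h_window[(i + s) % N] - h_template[i]) for i in range(N))
        if d < best_d:
            best, best_d = s, d
    return best

h_t = [0] * N; h_t[2] = 10; h_t[3] = 5      # template's orientation histogram
h_w = [0] * N; h_w[6] = 10; h_w[7] = 5      # same object rotated by 4 bins
shift = best_shift(h_t, h_w)
angle = shift * 360.0 / N                   # estimated rotation in degrees
```

The second stage would then rotate the template by `angle` and match only at the candidate positions that survived stage one.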

3.
Mobile robotics has achieved notable progress; however, to increase the complexity of the tasks that mobile robots can perform in natural environments, we need to provide them with a greater semantic understanding of their surroundings. In particular, identifying indoor scenes, such as an Office or a Kitchen, is a highly valuable perceptual ability for an indoor mobile robot, and in this paper we propose a new technique to achieve this goal. As a distinguishing feature, we use common objects, such as Doors or furniture, as a key intermediate representation to recognize indoor scenes. We frame our method as a generative probabilistic hierarchical model, where we use object category classifiers to associate low-level visual features with objects, and contextual relations to associate objects with scenes. The inherent semantic interpretation of common objects allows us to use rich sources of online data to populate the probabilistic terms of our model. In contrast to alternative computer-vision-based methods, we boost performance by exploiting the embedded and dynamic nature of a mobile robot. In particular, we increase detection accuracy and efficiency by using a 3D range sensor that allows us to implement a focus-of-attention mechanism based on geometric and structural information. Furthermore, we use concepts from information theory to propose an adaptive scheme that limits computational load by selectively guiding the search for informative objects. The operation of this scheme is facilitated by the dynamic nature of a mobile robot, which is constantly changing its field of view. We test our approach using real data captured by a mobile robot navigating in Office and home environments. Our results indicate that the proposed approach outperforms several state-of-the-art techniques for scene recognition.
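The "objects as intermediate representation" idea can be sketched as a simple generative inference: detected object categories vote for scenes through P(object | scene) terms. The probabilities below are invented for illustration and are not taken from the paper's model, which is richer (hierarchical, with contextual relations).

```python
import math

# Hypothetical priors and object-given-scene likelihoods (illustrative only).
prior = {"Office": 0.5, "Kitchen": 0.5}
p_obj = {
    "Office":  {"monitor": 0.6,  "mug": 0.3, "stove": 0.01},
    "Kitchen": {"monitor": 0.05, "mug": 0.4, "stove": 0.5},
}

def scene_posterior(detected):
    """Naive-Bayes-style posterior over scenes given detected objects,
    computed in log space for numerical stability."""
    log_p = {s: math.log(prior[s]) for s in prior}
    for obj in detected:
        for s in prior:
            log_p[s] += math.log(p_obj[s][obj])
    z = max(log_p.values())
    unnorm = {s: math.exp(v - z) for s, v in log_p.items()}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

post = scene_posterior(["monitor", "mug"])  # posterior favors Office
```

Seeing a monitor strongly shifts the posterior toward Office even though a mug alone is ambiguous, which is exactly why object evidence is informative for scene recognition.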

4.
This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, which can be used to improve the recognition reliability of both objects and human actions and to generate proper manipulation motion for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as the control goals to control a robot performing manipulation tasks.

5.
In this paper, we present visibility-based spatial reasoning techniques for real-time object manipulation in cluttered environments. When a robot is requested to manipulate an object, a collision-free path must be determined to access, grasp, and move the target object. This often requires time-consuming motion-planning routines, making real-time object manipulation difficult or infeasible, especially for a robot with many degrees of freedom and/or in a highly cluttered environment. This paper places special emphasis on developing real-time motion planning, in particular for accessing and removing an object in a cluttered workspace, as a local planner that can be integrated with a general motion planner for improved overall efficiency. In the proposed approach, the access direction for grasping the object is determined through a visibility query, and the removal direction for retrieving the object grasped by the gripper is computed using an environment map. The experimental results demonstrate that the proposed approach, when implemented on graphics hardware, is fast and robust enough to manipulate 3D objects in real-time applications.

6.
An important problem in tracking is how to manage changes in object appearance, such as illumination changes, partial/full occlusion, and scale and pose variation, during the tracking process. In this paper, we propose an occlusion-free object tracking method together with a simple adaptive appearance model. The proposed appearance model, which is updated at the end of each time step, includes three components: the first is a fixed template of the target object, the second captures rapid changes in object appearance, and the third maintains slow changes accumulated along the object's path. The proposed tracking method can not only detect and handle occlusion but is also robust against changes in the object appearance model. It is based on the particle filter, a robust tracking technique that handles non-linear and non-Gaussian problems. We have also employed a meta-heuristic approach called the Modified Galaxy-based Search Algorithm (MGbSA) to reinforce finding the optimum state in the particle filter's state space. The proposed method was applied to several benchmark videos, and its results were satisfactory and better than those of related works.
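The bootstrap particle filter underlying such trackers follows a predict/weight/resample cycle. The sketch below tracks a 1D position with illustrative Gaussian motion and measurement models; the paper's tracker replaces the measurement likelihood with its adaptive appearance model and adds the MGbSA search, which are not reproduced here.

```python
import math
import random

random.seed(0)  # deterministic run for the example

def gauss_pdf(x, mu, sigma):
    """Unnormalized Gaussian likelihood (normalization cancels out)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def pf_step(particles, control, measurement):
    """One bootstrap particle filter cycle: predict, weight, resample."""
    # 1. Predict: propagate each particle through a noisy motion model.
    particles = [p + control + random.gauss(0, 0.5) for p in particles]
    # 2. Weight: score particles against the measurement likelihood.
    weights = [gauss_pdf(p, measurement, 1.0) for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample: draw particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

particles = [random.uniform(-10.0, 10.0) for _ in range(500)]
for t in range(10):  # object moves ~1 unit per step; measurement follows it
    particles = pf_step(particles, control=1.0, measurement=float(t + 1))
estimate = sum(particles) / len(particles)  # posterior mean, near 10.0
```

In the full method, step 2 is where occlusion is detected (all weights collapse) and where the three-component appearance model enters.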

7.
If robots are to assume their long anticipated place by humanity’s side and be of help to us in our partially structured environments, we believe that adopting human-like cognitive patterns will be valuable. Such environments are the products of human preferences, activity and thought; they are imbued with semantic meaning. In this paper we investigate qualitative spatial relations with the aim of both perceiving those semantics, and of using semantics to perceive. More specifically, in this paper we introduce general perceptual measures for two common topological spatial relations, “on” and “in”, that allow a robot to evaluate object configurations, possible or actual, in terms of those relations. We also show how these spatial relations can be used as a way of guiding visual object search. We do this by providing a principled approach for indirect search in which the robot can make use of known or assumed spatial relations between objects, significantly increasing the efficiency of search by first looking for an intermediate object that is easier to find. We explain our design, implementation and experimental setup and provide extensive experimental results to back up our thesis.
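Why indirect search pays off can be seen from a back-of-envelope expected-cost comparison: find the easy intermediate object (e.g. a table the target is "on") first, then scan locally. All the numbers and the cost model below are hypothetical, for illustration only.

```python
# Expected search time: direct full-room scan versus indirect search
# through an intermediate object related to the target by "on".

def direct_cost(t_scan_room):
    """Baseline: scan the whole room for the small target."""
    return t_scan_room

def indirect_cost(t_find_intermediate, p_on, t_scan_local, t_scan_room):
    """Find the large intermediate object first; with probability p_on
    the target is on it (cheap local scan), otherwise fall back to a
    full-room scan."""
    return t_find_intermediate + p_on * t_scan_local \
           + (1 - p_on) * t_scan_room

direct = direct_cost(t_scan_room=120.0)
indirect = indirect_cost(t_find_intermediate=10.0, p_on=0.8,
                         t_scan_local=5.0, t_scan_room=120.0)
# indirect = 10 + 0.8*5 + 0.2*120 = 38 seconds versus 120 seconds direct
```

The stronger the spatial relation (higher `p_on`) and the easier the intermediate object is to find, the larger the savings, which matches the paper's motivation for exploiting "on" and "in".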

8.
We present two schemes for planning the time-optimal trajectory for a cooperative multi-manipulator system (CMMS) carrying a common object. We assume that the desired path is given and parameterizable by an arclength variable. Both approaches take into account the dynamics of the manipulators and the object. The first approach employs linear programming techniques and allows us to obtain the time-optimal execution of the given task utilizing the maximum torque capacities of the joint motors. The second approach is a sub-time-optimal method that is computationally very efficient: the given load is divided into a share for each robot in the CMMS in a manner that maximizes the trajectory acceleration/deceleration and hence minimizes the trajectory execution time. This load-distribution approach uses optimization schemes that degenerate to a linear search algorithm for the case of two robots manipulating a common load, resulting in a significant reduction of computation time. The load-distribution scheme not only reduces computation time but also opens the possibility of applying the method to real-time planning and control of a CMMS. Further, we show that for certain object trajectories the load-distribution scheme yields truly time-optimal trajectories.
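A much-simplified flavor of time-optimal path parameterization: for a path of given arclength with constant velocity and acceleration bounds, the minimum-time profile is bang-bang (accelerate at the limit, cruise, decelerate at the limit). The paper's schemes additionally account for full manipulator/object dynamics and torque limits; this triangle/trapezoid rule is only a sketch under assumed constant bounds.

```python
import math

def min_time(s_total, v_max, a_max):
    """Minimum traversal time of a path of arclength s_total under a
    symmetric velocity bound v_max and acceleration bound a_max."""
    s_ramp = v_max ** 2 / a_max          # distance consumed by the two ramps
    if s_total <= s_ramp:                # triangular profile: v_max not reached
        return 2.0 * math.sqrt(s_total / a_max)
    cruise = (s_total - s_ramp) / v_max  # time spent at v_max
    return 2.0 * v_max / a_max + cruise  # ramps + cruise

t_long = min_time(s_total=10.0, v_max=2.0, a_max=1.0)   # trapezoidal case
t_short = min_time(s_total=1.0, v_max=2.0, a_max=1.0)   # triangular case
```

In the papers' setting, `a_max` is not a constant but varies along the path with configuration-dependent dynamics and the chosen load distribution, which is what the linear program and the load-sharing search optimize.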

9.
10.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on robotics and automation, pp 1794–1799, 2007) are used in order to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on advanced robotics, 2007), based on external control, is used, first, to guide the robot hand towards the grasp position and, second, to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
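The skeleton of a position-based visual servoing law is a proportional regulation of the camera-estimated pose error, v = -lambda * e. The gain, poses, and Euler integration below are illustrative assumptions; the paper's controller additionally couples in force feedback through external control, which is omitted here.

```python
LAMBDA = 0.5  # proportional servoing gain (assumed value)

def pbvs_step(pose_error, dt=0.1):
    """One Euler integration step of the PBVS law v = -lambda * e,
    applied per axis to the hand-to-goal pose error."""
    return [e + dt * (-LAMBDA * e) for e in pose_error]

# Initial hand pose error relative to the grasp goal (x, y, z), metres:
error = [0.5, -0.2, 0.3]
for _ in range(100):          # servo loop; error decays exponentially
    error = pbvs_step(error)
residual = max(abs(e) for e in error)  # converges toward zero
```

Because the error is expressed between the observed hand and the observed object, the law is independent of where the camera sits, which is what removes the need for hand-eye calibration.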


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)