Similar Documents

20 similar documents found.
1.
A vision-based control methodology is presented in this paper that can perform accurate, three-dimensional (3D) positioning and path-tracking tasks. Tested with the challenging manufacturing task of welding in an unstructured environment, the proposed methodology has proven to be highly reliable, consistently achieving terminal precision of 1 mm. A key limiting factor for this high precision is camera-space resolution per unit physical space. This paper also presents a means of preserving and even increasing this ratio over a large region of the robot's workspace by using data from multiple vision sensors.

In the experiments reported in this paper, a laser is used to facilitate the image-processing aspect of the vision-based control strategy. The laser projects "laser spots" over the workpiece in order to gather information about the workpiece geometry. Previous applications of the control method were limited to considering only local geometric information of the workpiece, close to the region where the robot's tool is going to be placed. This paper presents a methodology to consider all available information about the geometry of the workpiece. This data is represented in a compact matrix format that is used within the algorithm to evaluate an optimal robot configuration. The proposed strategy processes and stores the information that comes from various vision sensors in an efficient manner.

An important goal of the proposed methodology is to facilitate the use of industrial robots in unstructured environments. A graphical user interface (GUI) has been developed that simplifies the use of the robot/vision system. With this GUI, complex tasks such as welding can be successfully performed by users with limited experience in the control of robots and welding techniques.

2.
With the advent of mobile robots and on-board vision sensors mounted directly on the robot's wrist, new kinds of problems arise in the image-processing field, such as dynamic scene analysis and motion estimation. The lack of flexibility of real experiments led us to implement at IRISA a general simulation tool devoted to the study of robots using moving vision sensors. VISYR allows us to simulate the image perceived by a robot of its environment during its motion.

The first part of the paper is devoted to the modelling of the 3D scene containing complex objects and to the design of a suitable robotic vision sensor. In the second part, a new algorithm for dynamic management of the local database perceived by the sensor is presented. The parameters of the vision sensors are highly adjustable, and VISYR is conceived to allow the fast development of algorithms using dynamic vision data.

3.
This paper presents a method for estimating the position and orientation of multiple robots from a set of azimuth angles of landmarks and other robots observed by multiple omnidirectional vision sensors. Our method simultaneously performs self-localization by each robot and reconstruction of the relative configuration between robots. Even if it is impossible to identify the correspondence between each index of the observed azimuth angles and those of the robots, our method can reconstruct not only a relative configuration between robots, using "triangle and enumeration constraints", but also an absolute one, using knowledge of landmarks in the environment. To show the validity of our method, it is applied to multiple mobile robots, each equipped with an omnidirectional vision sensor, in both simulation and a real environment. The experimental results show that our method is more precise and stable than self-localization by each robot alone, and that it can handle the combinatorial explosion problem.
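As a simple illustration of localizing from azimuth angles, the sketch below recovers a robot's position from bearings to known landmarks by intersecting sight lines in a least-squares sense. This is a minimal stand-in for the paper's method, which additionally resolves unknown correspondences and relative configurations; the assumption that bearings are expressed in the global frame, and the function names, are ours.

```python
import numpy as np

def locate_from_bearings(landmarks, bearings):
    """Estimate a robot's (x, y) position from global-frame azimuth angles.

    Each bearing theta to a landmark L constrains the position p to the sight
    line through L: the component of (L - p) normal to the bearing direction
    must vanish, giving one linear equation n . p = n . L per landmark.
    """
    A, b = [], []
    for (lx, ly), theta in zip(landmarks, bearings):
        nx, ny = -np.sin(theta), np.cos(theta)  # normal of the sight line
        A.append([nx, ny])
        b.append(nx * lx + ny * ly)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

landmarks = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
true_p = np.array([4.0, 3.0])
bearings = [np.arctan2(ly - true_p[1], lx - true_p[0]) for lx, ly in landmarks]
print(locate_from_bearings(landmarks, bearings))  # approximately [4. 3.]
```

With exact bearings to two landmarks the system is solved exactly; with noisy bearings to many landmarks the least-squares solve averages the sight-line intersections.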

4.
This paper surveys fitness functions used in the field of evolutionary robotics (ER). Evolutionary robotics is a field of research that applies artificial evolution to generate control systems for autonomous robots. During evolution, robots attempt to perform a given task in a given environment. The controllers in the better-performing robots are selected, altered and propagated to perform the task again in an iterative process that mimics some aspects of natural evolution. A key component of this process (one might argue, the key component) is the measurement of fitness in the evolving controllers. ER is one of a host of machine learning methods that rely on interaction with, and feedback from, a complex dynamic environment to drive the synthesis of controllers for autonomous agents. These methods have the potential to lead to the development of robots that can adapt to uncharacterized environments and that may be able to perform tasks that human designers do not completely understand. To achieve this, issues regarding fitness evaluation must be addressed. In this paper we survey current ER research, focusing on work that involved real robots. The surveyed research is organized according to the degree of a priori knowledge used to formulate the various fitness functions employed during evolution. The underlying motivation is to identify methods that allow the development of the greatest degree of novel control while requiring the minimum amount of a priori task knowledge from the designer.
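As one concrete example of a fitness function requiring little a priori task knowledge, the classic obstacle-avoidance fitness of Floreano and Mondada rewards fast, straight motion away from obstacles. The sketch below is our paraphrase of that well-known formula, not code from the survey itself.

```python
import math

def obstacle_avoidance_fitness(v_left, v_right, max_proximity):
    """Per-step fitness term in the style of Floreano & Mondada (1994),
    typically averaged over a whole evaluation trial.

    v_left, v_right -- wheel speeds normalized to [-1, 1]
    max_proximity   -- strongest proximity-sensor activation, in [0, 1]
    """
    V = (abs(v_left) + abs(v_right)) / 2.0   # reward speed
    dv = abs(v_left - v_right) / 2.0         # penalize turning (in [0, 1])
    i = max_proximity                        # penalize being near obstacles
    return V * (1.0 - math.sqrt(dv)) * (1.0 - i)
```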

5.
To cope with complex unstructured indoor environments and the dynamically changing service tasks of robots, this paper proposes a means of indoor spatial cognition based on quick-response (QR code) technology. With depth information obtained from binocular vision, a mathematical model of information uncertainty is built on DSmT evidence theory, yielding a 3D grid map that describes the occupied/free probability of each voxel. While the 3D map is being built, artificial landmarks based on QR code technology, attached to large objects, are used to add semantic labels to those objects in the environment; the occupancy values of the corresponding voxels are updated according to the objects' dimensions, producing a 3D semantic grid map that describes the functional attributes and ownership of large objects. Comparison experiments with other information-fusion algorithms, together with an analysis of the reading accuracy of the artificial landmarks, demonstrate the effectiveness and feasibility of the method.
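The voxel-occupancy bookkeeping can be illustrated with a standard log-odds Bayesian update; note this is a simpler stand-in for the DSmT evidence fusion the paper actually uses, and the grid dimensions and sensor-model value below are arbitrary.

```python
import numpy as np

def logodds_update(grid, idx, p_meas):
    """Fuse one occupancy measurement into a voxel (simple Bayesian stand-in
    for the paper's DSmT fusion).

    grid   -- 3D array of log-odds values, initialized to 0 (p = 0.5)
    idx    -- (i, j, k) voxel index hit by a stereo depth measurement
    p_meas -- sensor-model probability that this voxel is occupied
    """
    grid[idx] += np.log(p_meas / (1.0 - p_meas))
    return 1.0 / (1.0 + np.exp(-grid[idx]))   # current occupancy probability

grid = np.zeros((64, 64, 32))                 # arbitrary voxel-map size
p = logodds_update(grid, (10, 12, 3), 0.7)    # occupancy rises above 0.5
```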

6.
We consider the problem of exploring an anonymous unoriented ring by a team of k identical, oblivious, asynchronous mobile robots that can view the environment but cannot communicate. This weak scenario is standard when the spatial universe in which the robots operate is the two-dimensional plane, but (with one exception) has not been investigated before for networks. Our results imply that, although these weak capabilities of robots render the problem considerably more difficult, ring exploration by a small team of robots is still possible. We first show that, when k and n are not co-prime, the problem is not solvable in general; e.g., if k divides n there are initial placements of the robots for which exploration is impossible. We then prove that the problem is always solvable provided that n and k are co-prime, for k ≥ 17, by giving an exploration algorithm that always terminates, starting from arbitrary initial configurations. Finally, we consider the minimum number ρ(n) of robots that can explore a ring of size n. As a consequence of our positive result we show that ρ(n) is O(log n). We additionally prove that Ω(log n) robots are necessary for infinitely many n.
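The solvability conditions stated in the abstract reduce to a one-line arithmetic check; the sketch below simply encodes them (co-primality of n and k, plus the k ≥ 17 hypothesis of the positive result).

```python
from math import gcd

def ring_exploration_solvable(n, k):
    """Conditions from the abstract: exploration of an n-node ring by k robots
    is impossible in general when gcd(n, k) > 1, and the positive result
    guarantees an algorithm when n and k are co-prime and k >= 17."""
    return gcd(n, k) == 1 and k >= 17
```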

7.
We consider the problem of exploring an anonymous line by a team of k identical, oblivious, asynchronous deterministic mobile robots that can view the environment but cannot communicate. We completely characterize the sizes of teams of robots capable of exploring an n-node line. For k < n, exploration by k robots turns out to be possible if and only if either k = 3, or k ≥ 5, or k = 4 and n is odd. For all values of k for which exploration is possible, we give an exploration algorithm. For all others, we prove an impossibility result.
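The complete characterization for the line can likewise be written as a tiny predicate; this just restates the abstract's condition for k < n.

```python
def line_exploration_possible(n, k):
    """Exploration of an n-node line by k < n robots is possible iff
    k = 3, or k >= 5, or (k = 4 and n is odd)."""
    if k >= n:
        raise ValueError("the characterization in the paper covers k < n only")
    return k == 3 or k >= 5 or (k == 4 and n % 2 == 1)
```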

8.
Biologically inspired, event-driven silicon retinas, so-called dynamic vision sensors (DVS), allow efficient solutions for various visual perception tasks, e.g. surveillance, tracking, or motion detection. Similar to retinal photoreceptors, any perceived light-intensity change in the DVS generates an event at the corresponding pixel. The DVS thereby emits a stream of spatiotemporal events to encode visually perceived objects that, in contrast to conventional frame-based cameras, is largely free of redundant background information. The DVS offers multiple additional advantages, but requires the development of radically new asynchronous, event-based information processing algorithms. In this paper we present a fully event-based disparity matching algorithm for reliable 3D depth perception using a dynamic cooperative neural network. The interaction between cooperative cells applies cross-disparity uniqueness constraints and within-disparity continuity constraints to asynchronously extract the disparity for each new event, without any need to buffer individual events. We have investigated the algorithm's performance in several experiments; our results demonstrate smooth disparity maps computed in a purely event-based manner, even in scenes with temporally overlapping stimuli.
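To make the event-based setting concrete, the toy matcher below pairs a left-camera event with right-camera events on the same rectified row by polarity and temporal proximity. It is a deliberately simplified stand-in for the paper's cooperative network, which instead enforces uniqueness and continuity constraints across a population of disparity-tuned cells; all names and thresholds here are illustrative.

```python
from collections import namedtuple

Event = namedtuple("Event", "t x y pol")   # timestamp, pixel coords, polarity

def match_disparity(ev_left, right_row_events, max_dt=2e-3, max_disp=48):
    """Pick the right-camera event on the same rectified row with matching
    polarity that is closest in time, subject to a disparity limit."""
    best, best_dt = None, max_dt
    for ev_r in right_row_events:          # candidates with ev_r.y == ev_left.y
        d = ev_left.x - ev_r.x             # disparity candidate, in pixels
        if ev_r.pol == ev_left.pol and 0 <= d <= max_disp:
            dt = abs(ev_left.t - ev_r.t)
            if dt < best_dt:
                best, best_dt = d, dt
    return best                            # disparity, or None if unmatched
```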

9.
In this paper, a novel heuristic algorithm is proposed to solve continuous non-linear optimization problems. The presented algorithm is a collective global search inspired by the swarm intelligence of coordinated robots. Cooperative recognition and sensing by a swarm of mobile robots were the fundamental inspiration for the development of Swarm Robotics Search & Rescue (SRSR). Swarm robotics is an approach to coordinating multi-robot systems consisting of large numbers of mostly uniform, simple physical robots, with the ultimate aim that a desired cooperative behaviour emerges either from the interactions of autonomous robots with the environment or from their mutual interactions with each other. In this algorithm, robots, which represent initial solutions in SRSR terminology, sense the environment to detect a victim in a search-and-rescue mission at a disaster site; the victim's location corresponds to the global best solution. The individual with the highest rank in the swarm is called the master, and the remaining robots play the role of slaves; this leadership position can, however, be transferred from one robot to another during the mission. With the supervision of the master robot and the slave robots' ability to sense the environment, this collaborative search helps the swarm rapidly find the victim's location and thereby complete a successful mission. To validate the effectiveness and optimality of the proposed algorithm, it has been applied to several standard benchmark functions and to a practical electric power system problem in several real-size cases. Finally, simulation results have been compared with those of some well-known algorithms; the comparison demonstrates the superiority of the presented algorithm in terms of solution quality and convergence speed.
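A minimal sketch of the master/slave idea, under our own simplifying assumptions (this is not the published SRSR update rule): the best-ranked robot acts as master each iteration, and the slaves take noisy, shrinking steps toward it.

```python
import numpy as np

def srsr_like_search(f, dim=2, n_robots=20, iters=200, seed=0):
    """Collective global search sketch in the spirit of SRSR."""
    rng = np.random.default_rng(seed)
    robots = rng.uniform(-5.0, 5.0, (n_robots, dim))        # initial solutions
    for t in range(iters):
        master = robots[np.argmin([f(r) for r in robots])]  # leadership can move
        step = 0.5 * (1.0 - t / iters)                      # shrinking step size
        robots = (robots + step * (master - robots)
                  + rng.normal(0.0, step, robots.shape))    # noisy sensing
    return min(robots, key=f)

best = srsr_like_search(lambda x: float(np.sum(x ** 2)))    # sphere benchmark
```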

10.
The fields of ambient intelligence, distributed robotics and wireless sensor networks are converging toward a common vision, in which ubiquitous sensing and acting devices cooperate to provide useful services in the home. These devices can range from sophisticated mobile robots to simple sensor nodes and even simpler tagged everyday objects. In this vision, a milk box left on the table after the user has left home could request the service of a mobile robot to be placed back in the refrigerator. A missing ingredient for realizing this vision is a mechanism that enables communication and interoperation among such highly heterogeneous entities. In this paper, we propose such a mechanism in the form of a middleware able to integrate robots, tiny devices and augmented everyday objects into one and the same system. The key moves for coping with heterogeneity are the definition of a tiny, compatible version of the middleware that can run on small devices, and the concept of an object proxy, used to make everyday objects accessible within the middleware. We describe the concepts and implementation of our middleware, and present a number of experiments that illustrate its performance.

11.
Positioning technologies for mobile robots remain a bottleneck when targets are uncertain in complex field environments: owing to environmental disturbances, objects are hard to locate precisely with a robot manipulator. To address this positioning problem, a binocular stereo vision system and the positioning principle of a picking manipulator in a virtual environment (VE) are proposed and described. A manipulator positioning model was built in the VE, a manipulator positioning simulation system was developed with Microsoft Visual C++ 6.0, and a binocular stereo vision platform with three-coordinate guideway positioning was constructed for testing. The error sources of the vision positioning system were analyzed, and mathematical models of the camera system error, the experimental error and the camera calibration matching error were established. With the developed positioning simulation software and the vision system hardware, an experimental positioning platform was constructed; using this platform, stereo vision data were mapped to the manipulator to guide accurate positioning in the VE. Finally, a positioning error compensation experiment was carried out. Results of the VE simulation and the experiment showed that the vision positioning method is feasible for positioning in field environments; it can be applied to control robot operation and to correct positioning errors in real time, especially for long-range precision modelling and error compensation of robots.
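The geometric core of binocular positioning is the rectified-pair depth relation Z = f·B/d; the sketch below encodes it with hypothetical numbers, independent of the paper's VC++ simulation system.

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of a matched point from a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in metres, and disparity
    d = x_left - x_right in pixels."""
    d = x_left_px - x_right_px
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / d

z = stereo_depth(f_px=800.0, baseline_m=0.12, x_left_px=412.0, x_right_px=388.0)
print(z)   # 800 * 0.12 / 24 = 4.0 m
```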

12.
In an environment where robots coexist with humans, mobile robots should be human-aware and comply with humans' behavioural norms so as not to disturb humans' personal space and activities. In this work, we propose an inverse reinforcement learning-based, time-dependent A* planner for human-aware robot navigation with local vision. In this method, the planning process of time-dependent A* is regarded as a Markov decision process, and the cost function of the time-dependent A* is learned via inverse reinforcement learning from captured human demonstration trajectories. With this method, a robot can plan a path that complies with both humans' behaviour patterns and the robot's kinematics. When constructing the feature vectors of the cost function, and considering the local vision characteristics, we propose a visual coverage feature that enables robots to learn from how humans move in a limited visual field. The effectiveness of the proposed method has been validated by experiments in real-world scenarios: using this approach, robots can effectively mimic human motion patterns when avoiding pedestrians; furthermore, in a limited visual field, robots learn to choose paths that give them larger visual coverage, which yields better navigation performance.
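In IRL of this kind the planner's cost is typically a weighted sum of state features, with the weights recovered from the human demonstrations; the feature set below (goal progress, personal space, visual coverage) is our illustration of that structure, not the paper's exact feature vector.

```python
import numpy as np

def cell_cost(weights, dist_to_goal, dist_to_nearest_human, visual_coverage):
    """Linear cost c(s) = w . phi(s) for one grid cell in the time-dependent
    A* search; 'weights' is what the IRL stage learns from demonstrations."""
    phi = np.array([
        dist_to_goal,                           # progress toward the goal
        1.0 / (1e-3 + dist_to_nearest_human),   # penalize invading personal space
        -visual_coverage,                       # prefer cells with a wide view
    ])
    return float(np.asarray(weights) @ phi)
```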

13.
This work presents a novel approach to the problem of establishing and maintaining a common co-ordinate system for a group of robots. A camera system mounted on top of a robot, together with vision algorithms, is used to calculate the relative position of each surrounding robot. The observed movement of each robot is compared with the movement it reports over a communication link, and from this comparison a co-ordinate transformation is calculated. The algorithm was tested in simulation and is currently being implemented on a real robot system. Preliminary results of real-world experiments are presented.
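One standard way to compute such a co-ordinate transformation from paired observed/reported positions is a 2D least-squares rigid alignment (a Kabsch-style solve); the sketch below shows that computation under the assumption that the correspondences are already known.

```python
import numpy as np

def fit_rigid_2d(watched, reported):
    """Rotation R and translation t minimizing ||R @ reported_i + t - watched_i||.

    watched, reported -- (N, 2) arrays of corresponding positions, N >= 2."""
    cw, cr = watched.mean(axis=0), reported.mean(axis=0)
    H = (reported - cr).T @ (watched - cw)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])   # enforce a proper rotation
    R = Vt.T @ D @ U.T
    t = cw - R @ cr
    return R, t
```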

14.
Many public facility layouts have been developed with little consideration of the visually impaired, producing difficult and unpleasant wayfinding experiences. Not all wayfinding elements can be applied universally to all environments; several wayfinding elements are specific to the type of industry being considered. No known research has been conducted within healthcare systems to identify wayfinding limitations among visually impaired users during the navigation process. The purpose of this study was to analyze the current issues in a wayfinding task for the visually impaired and the normally sighted in order to identify wayfinding design deficits. Normally sighted participants (m = 25, f = 25) wore one of five different vision-simulator goggles to simulate a specific visual impairment (diabetic retinopathy, glaucoma, cataracts, macular degeneration, or hemianopsia) and were then given directions to a specific series of departments within a hospital campus. Participants then navigated a second time (using a different but similar series of paths) without the vision-simulator goggles (normal vision) so that comparisons could be made. During participant wayfinding, behaviors such as stopping, looking around, touching walls, and becoming lost or confused were recorded, along with the location of each instance on a map. Questionnaires about the surrounding environment were completed after each condition. The results identified several design elements involving signage, paths/target sites, lighting and flooring that created wayfinding issues under both experimental conditions. The effects of these wayfinding issues on participants ranged from tripping to becoming lost in the surrounding environment. Enhancing wayfinding for the most highly visually impacted individuals may also improve wayfinding for those with normal vision via universal design. The hospital design flaws identified by this study provide key areas and elements (not previously investigated) for further research to analyze more comprehensively and ultimately to provide sound design recommendations that enhance effective wayfinding.

Relevance to the industry

This paper offers information relevant to a growing healthcare sector facing an aging population with growing needs. Applying the organizational, architectural and design principles from this paper can lead to improved patient satisfaction, safety and patient flow within the hospital setting, both for the visually impaired and for those without visual impairment.

15.
In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to be manipulated using an external camera (i.e. the robot head). Task-oriented grasping algorithms (Proc of IEEE Int Conf on robotics and automation, pp 1794–1799, 2007) are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach (Int Conf on advanced robotics, 2007), based on external control, is used first to guide the robot hand towards the grasp position and second to perform the task while taking external forces into account. The coupling between these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with respect to the object being manipulated, independently of the camera position. This allows the camera to be moved freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand–eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.

16.
In recent years, with the emergence of ubiquitous computing technology, a new class of networked robots called ubiquitous robots has been introduced. The Ubiquitous Robotic Companion (URC) is our conceptual vision of ubiquitous service robots that provide their users with the services they need, anytime and anywhere, in ubiquitous computing environments. Several requirements must be met to realize the URC vision. One essential requirement is that the robotic systems must support ubiquity of services: a robot service must remain available even when the service environment changes. More specifically, a robotic system needs to interoperate automatically with the sensors and devices in its current service environment, rather than being statically pre-programmed for it. In this paper, the design and implementation of an infrastructure for URC, called the Ubiquitous Robotic Service Framework (URSF), is presented. URSF enables the automated integration of networked robots in a ubiquitous computing environment by the use of Semantic Web Services technologies.

17.
Coordinated multirobot exploration involves autonomously discovering and mapping the features of initially unknown environments using multiple robots. Autonomously exploring mobile robots are usually driven, both in selecting locations to visit and in assigning them to robots, by knowledge of the already explored portions of the environment, often represented in a metric map. In the literature, some works have addressed the use of semantic knowledge in exploration, which, embedded in a semantic map, associates spatial concepts (like 'rooms' and 'corridors') with metric entities, showing its effectiveness in improving the total area explored by robots. In this paper, we build on these results and propose a system that exploits semantic information to push robots to explore relevant areas of initially unknown environments, according to a priori information provided by human users. Discovery of relevant areas is significant in some search and rescue settings, in which human rescuers can instruct robots to search for victims in specific areas, for example in cubicles if a disaster happened in an office building during working hours. We propose to speed up the exploration of specific areas by using semantic information both to select locations to visit and to determine the number of robots to allocate to those locations; in this way, for example, more robots can be assigned to a candidate location in a corridor, so that the attached rooms are explored faster. We tested our semantic-based multirobot exploration system within a reliable robot simulator and evaluated its performance in realistic search and rescue indoor settings against state-of-the-art approaches.
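The number-of-robots allocation described above can be sketched as a proportional split over candidate locations, weighted by the a priori relevance of each location's semantic class; the classes, weights and rounding rule here are illustrative only.

```python
def allocate_robots(candidates, total_robots, relevance):
    """Split the team over candidate locations in proportion to the relevance
    of each location's semantic class (toy version; the proportional rounding
    is not guaranteed to sum exactly to the team size).

    candidates -- list of (location_id, semantic_class) pairs
    relevance  -- semantic class -> positive weight
    """
    weights = [relevance.get(cls, 1.0) for _, cls in candidates]
    total_w = sum(weights)
    return {loc: max(1, round(total_robots * w / total_w))
            for (loc, _), w in zip(candidates, weights)}

plan = allocate_robots([("c1", "corridor"), ("r1", "cubicle"), ("r2", "room")],
                       total_robots=6,
                       relevance={"cubicle": 3.0, "corridor": 2.0, "room": 1.0})
```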

18.
Intelligent autonomous mobile robots must be able to sense and recognize the 3D indoor spaces where they live or work. However, robots are frequently situated in cluttered environments containing various objects that are hard to perceive robustly. Although monocular and binocular vision sensors have been widely used for mobile robots, they suffer from image intensity variations, insufficient feature information and correspondence problems. In this paper, we propose a new 3D sensing system built on structured laser lighting, chosen for its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed active trinocular vision system is composed of a flexible multi-stripe laser projector and two cameras arranged in a triangular configuration. By modeling the laser projector as a virtual camera and using the trinocular epipolar constraints, the matching pairs of line features observed in the two real camera images are established, and 3D information can be extracted from a single image of the patterned scene. For robust feature matching, we propose a new correspondence matching technique based on line grouping and probabilistic voting. Finally, a series of experimental tests shows the simplicity, efficiency and accuracy of the proposed sensor system for 3D environment sensing and recognition.
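Candidate matches between the two camera views (and the projector modelled as a virtual camera) can be screened pairwise with the usual epipolar constraint x2ᵀ F x1 = 0; the helper below computes that residual for one such check, assuming the fundamental matrix F is known from calibration. This is the generic two-view building block, not the paper's full grouping-and-voting scheme.

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Residual of the two-view epipolar constraint x2^T F x1 = 0.

    F      -- 3x3 fundamental matrix between the two views
    x1, x2 -- pixel coordinates (u, v) of a candidate feature pair
    A near-zero residual keeps the pair as a match candidate."""
    h1 = np.array([x1[0], x1[1], 1.0])
    h2 = np.array([x2[0], x2[1], 1.0])
    return float(h2 @ F @ h1)
```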

19.
In this paper we present a solution for merging feature-based maps in a robotic network with limited communication. We consider a team of robots that explore an unknown environment and build local stochastic maps of the explored region. After the exploration has taken place, the robots communicate and build a global map of the environment. This problem has traditionally been addressed using centralized schemes or broadcasting methods. The contribution of this work is the design of a fully distributed approach that is implementable in scenarios with limited communication. Our solution does not rely on a particular communication topology and does not require any central agent, making the system robust to individual failures. Information is exchanged exclusively between neighboring robots in the communication graph. We provide distributed algorithms for solving the three main issues associated with a map-merging scenario: establishing a common reference frame, solving the data association, and merging the maps. We also give worst-case performance bounds for computational complexity, memory usage, and communication load. Simulations and real experiments carried out using various vision sensors validate our results.
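Neighbor-only agreement of the kind this approach relies on is commonly built from averaging-consensus iterations; the generic sketch below shows one synchronous step. It is a standard distributed primitive, not the authors' specific algorithm, and eps must be smaller than the inverse of the maximum node degree for convergence.

```python
import numpy as np

def consensus_step(estimates, neighbors, eps=0.2):
    """One synchronous averaging-consensus iteration over the comm. graph.

    estimates -- robot_id -> current estimate (numpy vector)
    neighbors -- robot_id -> list of neighboring robot ids
    """
    new = {}
    for i, x in estimates.items():
        delta = sum(estimates[j] - x for j in neighbors[i])
        new[i] = x + eps * delta
    return new
```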

20.
We address the problem of propagating a piece of information among robots scattered in an environment. Initially, a single robot has the information; this robot searches for other robots to pass it along. When a robot is discovered, it can participate in the process by searching for other robots. Since our motivation for studying this problem is to form an ad hoc network, we call it the Network Formation Problem. In this paper, we study the case where the environment is a rectangle and the robots' locations are unknown but chosen uniformly at random. We present an efficient network formation algorithm, Stripes, and show that its expected performance is within a logarithmic factor of optimal. We also compare Stripes with an intuitive network formation algorithm in simulations. The feasibility of Stripes is demonstrated with a proof-of-concept implementation.
