Similar Documents
20 similar documents found (search time: 15 ms)
1.
A Visual Navigation Method Based on Goal-Directed Behavior and Spatial Topological Memory   Cited: 1 (self-citations: 0, other citations: 1)
To address navigation in visually rich environments with dynamic factors, and inspired by the landmark-based mechanism of spatial memory, a visual navigation method is proposed that simultaneously learns goal-directed behavior and memorizes spatial structure. First, to learn a control policy directly from raw input, deep reinforcement learning is adopted as the basic navigation framework, with collision prediction added as an auxiliary task of the model; then, while the agent learns to navigate, a temporal-correlation network is used to remove redundant observations and to find navigation nodes, achieving…

2.
In swarm robotics, it is necessary to develop methods and strategies that guide the collective execution of tasks by the robots. Such tasks can be designed as a collection of simpler behaviors, called subtasks. In this paper, the Wave Swarm is presented as a general strategy to manage the sequence of subtasks that compose collective navigation, which is an important task in swarm robotics. The proposed strategy is based mainly on the execution of wave algorithms. The swarm is viewed as a distributed system, wherein communication is achieved by message passing within each robot's neighborhood. Message propagation delimits the start and end of each subtask. Simulations are performed to demonstrate that controlled navigation of robot swarms/clusters is achieved with three subtasks: recruitment, alignment, and movement. The performance of the proposed strategy regarding the time spent in the first two subtasks is analyzed. Furthermore, simulations of navigation in different environments, including ones with obstacles, are presented and discussed.
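The wave-based subtask hand-off described in this abstract can be illustrated with a minimal sketch (the robot graph and all names below are hypothetical, not the paper's implementation): a message wave spreads hop by hop through each robot's neighborhood, and once every robot has been reached, the current subtask is considered started swarm-wide.

```python
from collections import deque

def propagate_wave(neighbors, seed):
    """Breadth-first wave: each robot forwards the message to its
    neighborhood exactly once; the hop count marks when it joins
    the current subtask."""
    hop = {seed: 0}
    queue = deque([seed])
    while queue:
        robot = queue.popleft()
        for nb in neighbors[robot]:
            if nb not in hop:          # not yet reached by the wave
                hop[nb] = hop[robot] + 1
                queue.append(nb)
    return hop

# A small swarm: adjacency lists stand in for local communication range.
swarm = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
hops = propagate_wave(swarm, seed=0)
# A reply wave propagating back to the seed could delimit the end
# of the subtask, triggering the next one (recruitment -> alignment).
```

The hop dictionary doubles as a record of when each robot joined the subtask, which is the kind of timing the paper analyzes for the first two subtasks.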

3.
Visual Programming of Agent Tasks and Compositional Reconfiguration of Work Cells   Cited: 1 (self-citations: 0, other citations: 1)
In this paper, each type of device in a robotic assembly cell is defined as an independent Agent. Using Petri net models, basic model structures for the relevant operations are defined for each Agent's jobs, forming a multi-Agent model library. Through visual composition and reconfiguration of the elements in the Agent model library, not only can the geometric elements of a robot work cell be recombined, but job tasks can also be conveniently recombined.

4.
This paper describes a novel organizational learning model for multiple adaptive robots. In this model, robots acquire their own appropriate functions through local interactions among their neighbors, and get out of deadlock situations without explicit control mechanisms or communication methods. Robots also complete given tasks by forming an organizational structure, and improve their organizational performance. We focus on the emergent processes of collective behaviors in multiple robots, and discuss how to control these behaviors with only local evaluation functions, rather than with a centralized control system. Intensive simulations of truss construction by multiple robots gave the following experimental results: (1) robots in our model acquire their own appropriate functions and get out of deadlock situations without explicit control mechanisms or communication methods; (2) robots form an organizational structure which completes given tasks in fewer steps than are needed with a centralized control mechanism. This work was presented, in part, at the Second International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1997

5.
Assistant robots have received special attention from the research community in recent years. One of the main applications of these robots is to perform care tasks in indoor environments such as houses, nursing homes or hospitals, and therefore they need to be able to navigate robustly for long periods of time. This paper focuses on the navigation system of SIRA, a robotic assistant for elderly and/or blind people, based on a Partially Observable Markov Decision Process (POMDP) to globally localize the robot and to direct its goal-oriented actions. The main novel feature of our approach is that it combines sonar and visual information in a natural way to produce state transitions and observations in the framework of Markov Decision Processes. Besides this multisensorial fusion, a two-level layered planning architecture that combines several planning objectives (such as guiding to a goal room and reducing locational uncertainty) improves the robustness of the navigation system, as is shown in our experiments with SIRA navigating corridors.
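The POMDP machinery this abstract relies on can be sketched as a discrete Bayes-filter belief update; the toy corridor model, matrices, and names below are illustrative assumptions, not SIRA's actual transition or observation model.

```python
import numpy as np

def belief_update(belief, T, O, obs):
    """One step of a discrete POMDP belief update:
    predict with the transition model T, then correct with the
    observation likelihoods O[:, obs], and renormalize."""
    predicted = T.T @ belief           # prediction over next states
    corrected = O[:, obs] * predicted  # weight by observation likelihood
    return corrected / corrected.sum()

# Toy corridor with 3 nodes; moving "forward" usually advances one node.
T = np.array([[0.1, 0.9, 0.0],
              [0.0, 0.1, 0.9],
              [0.0, 0.0, 1.0]])
# Two fused sonar/vision observation classes per node (hypothetical).
O = np.array([[0.8, 0.2],
              [0.3, 0.7],
              [0.8, 0.2]])
belief = np.array([1.0, 0.0, 0.0])    # robot starts at node 0
belief = belief_update(belief, T, O, obs=1)
```

Combining sonar and vision, as in the paper, would amount to multiplying in one likelihood column per sensor before renormalizing.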

6.
The emergence of service robots in our environment raises the need for systems that help the robots manage information from human environments. A semantic model of the environment provides the robot with a representation closer to human perception, and it improves its human-robot communication system. In addition, a semantic model improves the capabilities of the robot to carry out high-level navigation tasks. This paper presents a semantic relational model that includes conceptual and physical representations of objects and places, utilities of the objects, and semantic relations among objects and places. This model allows the robot to manage the environment and to make queries about it in order to plan navigation tasks. In addition, this model has several advantages, such as conceptual simplicity and flexibility of adaptation to different environments. To test the performance of the proposed semantic model, the output of the semantic inference system is associated with the geometric and topological information of objects and places in order to perform the navigation tasks.

7.
In spite of the radical enhancement of Web technologies, many users still continue to experience severe difficulties in navigating Web systems. One way to reduce the navigation difficulties is to provide context information that explains the current situation of Web users. In this study, we empirically examined the effects of 2 types of context information, structural and temporal context. In the experiment, we evaluated the effectiveness of the contextual navigation aids in 2 different types of Web systems, an electronic commerce system that has a well-defined structure and a content dissemination system that has an ill-defined structure. In our experiment, participants answered a set of postquestionnaires after performing several searching and browsing tasks. The results of the experiment reveal that the 2 types of contextual navigation aids significantly improved the performance of the given tasks regardless of different Web systems and different task types. Moreover, context information changed the users' navigation patterns and increased their subjective convenience of navigation. This study concludes with implications for understanding the users' searching and browsing patterns and for developing effective navigation systems.

8.
This paper proposes a fast image sequence-based navigation approach for a flat route represented by sparse waypoints. Instead of purely optimizing the length of the path, this paper aims to speed up navigation by lengthening the distance between consecutive waypoints. When local visual homing at variable velocity is applied for robot navigation between two waypoints, the robot's speed changes according to the distance between waypoints. Because a long distance implies a large scale difference between the robot's view and the waypoint image, the log-polar transform is introduced to find a correspondence between images and infer a coarse motion vector. In order to maintain navigation accuracy, our prior work on local visual homing with SIFT feature matching is adopted when the robot is relatively close to the waypoint. Experiments support the proposed navigation approach on a multiple-waypoint route. Compared to other prior work on visual homing with SIFT feature matching, the proposed approach requires fewer waypoints, and the navigation speed is improved without compromising navigation accuracy.
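To see why the log-polar transform helps with large scale differences between the current view and the waypoint image, here is a minimal nearest-neighbour resampler (function name and parameters are assumptions for illustration): in log-polar coordinates, scaling an image about its centre becomes a translation along the radial axis, so coarse scale matching reduces to a shift search.

```python
import numpy as np

def log_polar(img, n_rho=32, n_theta=64):
    """Nearest-neighbour log-polar resampling about the image centre.
    Rows index log-radius, columns index angle; a scale change of the
    input becomes a shift along the rho axis."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    rhos = np.exp(np.linspace(0.0, np.log(max_r), n_rho))
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    out = np.zeros((n_rho, n_theta), dtype=img.dtype)
    for i, r in enumerate(rhos):
        for j, t in enumerate(thetas):
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            if 0 <= y < h and 0 <= x < w:   # samples outside stay zero
                out[i, j] = img[y, x]
    return out

img = np.zeros((65, 65))
img[20:45, 20:45] = 1.0                     # a centred square "landmark"
lp = log_polar(img)
```

A production system would use an optimized library routine rather than this double loop; the sketch only shows the coordinate mapping.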

9.
Objective: In visual object tracking, the target state at each time step is approximated as a linear combination of template data learned online. Because the target is affected by various complex disturbances from itself or the scene during tracking, the modeling capability of the tracker depends heavily on the generality of the template data and on the estimation accuracy of its error. Many existing algorithms represent sample signals as vectors, which alters the original data structure and severely breaks the natural relations among the elements of the sample data; moreover, this representation raises the dimensionality of the data, incurring computational complexity and wasted resources. This paper studies the data representation and modeling mechanism of visual tracking more deeply from the perspective of multilinear analysis, providing a more compact and effective solution. Method: In the proposed tracking framework, candidate samples and their reconstructed signals are represented as tensors, preserving the original data structure. When the tracker outputs the appearance state of candidate samples, the favorable multilinear properties of tensors are used to organize the modeling task of the tracking system; the tensor nuclear norm and the L1 norm regularize the relevant components of the objective function, and under a multi-task state learning assumption the independence and interdependence of the appearance-representation tasks of the candidate samples are fully exploited. Results: The structured tensor data prototype and its multi-task observation model effectively address the data-representation and computational-complexity difficulties of the tracking system, while providing a simpler and more effective approach to joint multi-task learning of candidate-sample appearance models. Thus, when the tracker encounters strongly destructive noise, the error-estimation mechanism constrained by the tensor nuclear norm mines comprehensive target information more fully within the joint multi-task learning framework, adapting better to visual changes caused by intrinsic or extrinsic factors. Experimental results on widely used benchmark videos show that the proposed algorithm is more robust in representing candidate-sample appearance models. Compared with several state-of-the-art algorithms of the same kind, it achieves an average center-location error of 4.2 and an average overlap rate of 0.82 for the tracked target patches across the test sequences, demonstrating better tracking accuracy. Conclusion: Extensive experiments verify that the tensor-nuclear-norm regression model and its error-estimation mechanism construct optimal sample signals closer to the target state at each time step and strictly probe the true state information of each candidate sample within the multi-task learning framework, thereby alleviating model degradation and tracking drift.

10.
《Advanced Robotics》2013,27(4):463-480
This paper addresses the problem of positioning a robot camera with respect to a fixed object in space by means of visual information. The ultimate goal of positioning is to achieve and/or to maintain a given spatial configuration (position and orientation) with respect to the objects in the environment so as to best execute the task at hand. Positioning involves the control of 6 d.o.f. in space, which are conveniently referred to as the parameters of the transformation between a camera-centered frame and an object-centered frame. In this paper, we will address the positioning problem in terms of these d.o.f., regardless of the specific robot configuration used to move the camera (e.g. eye-in-hand setup, navigation platform with a robot head mounted on it, etc.). The domain of application ranges over navigation tasks (e.g. localization, docking, steering by means of natural landmarks), grasping and manipulation tasks, and autonomous/intelligent tasks based on active visual behaviors, such as reading a book or reaching and commanding a control panel. The solution proposed in this work is to exploit the changes in shape of contours in order to plan and control the positioning process. In order to simplify and speed up the calculations, an affine camera model is used to describe the changes of shape of the contours in the image plane, and an affine visual servoing (AVS) approach is derived. The choice of using two-dimensional (2D) features for control greatly enhances the robustness of the positioning process, in that robot kinematics and camera modeling errors are reduced. Among the possible 2D features, visual contours enable us to achieve robust visual estimates while keeping the dimensionality of the control equations low; the same would not be possible using different features such as points or lines. Finally, a feedforward control strategy complements the feedback loop, thereby enhancing the speed and the overall performance of the algorithm.
Although a stability analysis of the control scheme has not yet been performed, good simulation results with stable behavior, provided that proper tuning of control parameters and gains has been done, suggest that the approach might be successfully applied in real-world cases.

11.

Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning has recently been gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and the Asynchronous Advantage Actor-Critic (A3C). The proposed method, as well as the reinforcement learning methods, employs deep convolutional neural networks and learns directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, while the learning-from-experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.


12.
The design of reliable navigation and control systems for Unmanned Aerial Vehicles (UAVs) based only on visual cues and inertial data has many unsolved challenging problems, ranging from hardware and software development to pure control-theoretical issues. This paper addresses these issues by developing and implementing an adaptive vision-based autopilot for navigation and control of small and mini rotorcraft UAVs. The proposed autopilot includes a Visual Odometer (VO) for navigation in GPS-denied environments and a nonlinear control system for flight control and target tracking. The VO estimates the rotorcraft ego-motion by identifying and tracking visual features in the environment, using a single camera mounted on-board the vehicle. The VO has been augmented by an adaptive mechanism that fuses optic flow and inertial measurements to determine the range and to recover the 3D position and velocity of the vehicle. The adaptive VO pose estimates are then exploited by a nonlinear hierarchical controller for achieving various navigational tasks such as take-off, landing, hovering, trajectory tracking, target tracking, etc. Furthermore, the asymptotic stability of the entire closed-loop system has been established using systems in cascade and adaptive control theories. Experimental flight test data over various ranges of the flight envelope illustrate that the proposed vision-based autopilot performs well and allows a mini rotorcraft UAV to achieve autonomously advanced flight behaviours by using vision.

13.
Task Modelling in Collective Robotics   Cited: 3 (self-citations: 0, other citations: 3)
Does coherent collective behaviour require an explicit mechanism of cooperation? In this paper, we demonstrate that a certain class of cooperative tasks, namely coordinated box manipulation, is possible without explicit communication or cooperation mechanisms. The approach relies on subtask decomposition and sensor preprocessing. A framework is proposed for modelling multi-robot tasks, which are described as a series of steps, with each step possibly consisting of substeps. Finite state automata theory is used to model steps, with state transitions specified as binary sensing predicates called perceptual cues. A perceptual cue (Q), whose computation is disjoint from the operation of the automata, is processed by a 3-level finite state machine called a Q-machine. The model is based on entomological evidence suggesting that local stimulus cues are used to regulate a linear series of building acts in nest construction. The approach is designed for a redundant set of homogeneous mobile robots, and we describe an extension of a previous system of 5 box-pushing robots to 11 identical transport robots. Results are presented for a system of physical robots capable of moving a heavy object collectively to an arbitrarily specified goal position. The contribution is a simple task-programming paradigm for mobile multi-robot systems. It is argued that Q-machines and their perceptual cues offer a new approach to environment-specific task modelling in collective robotics.
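The step/substep sequencing driven by binary perceptual cues can be sketched roughly as a linear state machine whose transitions fire on sensing predicates (a simplified illustration, not the paper's 3-level Q-machine; all task and cue names are hypothetical).

```python
def run_task(steps, cues, readings):
    """Advance through a linear series of steps; each step completes
    when its binary perceptual cue evaluates true on a sensor reading."""
    state = 0
    log = []
    for sensor in readings:
        if state < len(steps) and cues[steps[state]](sensor):
            log.append(steps[state])   # cue fired: step done
            state += 1
    return log

# A toy box-manipulation task: find the box, align with it, push it.
steps = ["find_box", "align", "push"]
cues = {
    "find_box": lambda s: s["box_visible"],
    "align":    lambda s: s["heading_err"] < 0.1,
    "push":     lambda s: s["at_goal"],
}
readings = [
    {"box_visible": False, "heading_err": 1.00, "at_goal": False},
    {"box_visible": True,  "heading_err": 1.00, "at_goal": False},
    {"box_visible": True,  "heading_err": 0.05, "at_goal": False},
    {"box_visible": True,  "heading_err": 0.05, "at_goal": True},
]
done = run_task(steps, cues, readings)
```

Keeping the cue computation outside the automaton, as the paper's Q-machines do, means the same step sequence can be reused with different sensor preprocessing.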

14.
As wearable technologies and computing systems develop further, navigation by experts is expected to enable people to perform a wide range of general tasks. We have proposed and developed a prototype of an advanced behavior navigation system (BNS) using augmented reality technology. Using the BNS, an expert can guide a non-expert through a variety of tasks. BNSs are useful for tasks performed in harsh and hazardous environments, such as factories, construction sites, and areas affected by natural disasters (e.g. earthquakes and tsunamis). In this paper, we present a BNS that is specifically designed to operate in harsh environments with characteristics such as wet or dusty conditions. The implementation, experimental results, and evaluation of the BNS prototypes are presented.

15.
The massive diffusion of smartphones, the growing interest in wearable devices and the Internet of Things, and the exponential rise of location-based services (LBSs) have made localization and navigation inside buildings one of the most important technological challenges of recent years. Indoor positioning systems have a huge market in the retail sector and contextual advertising; in addition, they can be fundamental to increasing the quality of life for citizens if deployed inside public buildings such as hospitals, airports, and museums. Sometimes, in emergency situations, they can make the difference between life and death. Various approaches have been proposed in the literature. Recently, thanks to the high performance of smartphones' cameras, marker-less and marker-based computer vision approaches have been investigated. In a previous paper, we proposed a technique for indoor localization and navigation using both Bluetooth low energy (BLE) and a 2D visual marker system deployed on the floor. In this paper, we present a qualitative performance evaluation of three 2D visual marker systems, Vuforia, ArUco marker, and AprilTag, which are suitable for real-time applications. Our analysis focuses on the specific case of visual markers placed onto the tiles, to improve the efficiency of our indoor localization and navigation approach by choosing the best visual marker system.

16.
Visual navigation aims to provide a basis for navigation from visual information in the environment, and one of its key tasks is object detection. Traditional object detection methods require large amounts of annotation, focus only on the image itself, and do not fully exploit the data similarity inherent in visual navigation tasks. To address these problems, this paper proposes a self-supervised training task based on historical image information. The method aggregates images of the same location taken at multiple times, uses information entropy to distinguish foreground from background, and feeds the augmented images into the SimSiam self-supervised paradigm for training. The MLPs in the SimSiam projection and prediction layers are replaced with a convolutional attention module and a convolutional module, respectively, and the loss function is changed to a loss between multi-dimensional vectors, so as to extract multi-dimensional features from images. Finally, the self-supervised pre-trained model is used to train downstream tasks. Experiments show that on the processed nuScenes dataset, the proposed method effectively improves the accuracy of downstream classification and detection tasks, reaching a Top-5 accuracy of 66.95% on the downstream classification task and an mAP of 40.02% on the detection task.
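The entropy-based foreground/background separation over images of the same location at multiple times can be sketched as follows (toy data and names are illustrative assumptions; the paper's actual pipeline feeds the result into SimSiam training): pixels that stay constant across time have near-zero entropy, while pixels that change score high.

```python
import numpy as np

def pixel_entropy(stack, bins=8):
    """Per-pixel Shannon entropy across a stack of co-located images:
    static background pixels score near zero, changing foreground high."""
    t, h, w = stack.shape
    ent = np.zeros((h, w))
    edges = np.linspace(stack.min(), stack.max() + 1e-9, bins + 1)
    for y in range(h):
        for x in range(w):
            hist, _ = np.histogram(stack[:, y, x], bins=edges)
            p = hist / hist.sum()
            p = p[p > 0]
            ent[y, x] = -(p * np.log2(p)).sum()
    return ent

rng = np.random.default_rng(0)
stack = np.zeros((6, 8, 8))                  # 6 visits to one location
stack[:, :, :4] = 0.2                        # static background half
stack[:, :, 4:] = rng.random((6, 8, 4))      # changing foreground half
ent = pixel_entropy(stack)
mask = ent > ent.mean()                      # rough foreground mask
```

A real system would compute this over aligned camera frames; here the left half of the toy image plays the static background and the right half the dynamic foreground.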

17.
Commonly used positioning systems (such as GPS) often suffer from inaccurate localization and failure to identify regions under occlusion or when performing tasks indoors, so a mobile robot cannot make correct judgments during movement and may fail to reach its destination. To address the problems of inaccurate localization and region identification for mobile robots in unknown environments, a laser-SLAM visual intelligent reconnaissance vehicle based on the ROS system was designed. By combining laser SLAM with a depth camera, the vehicle's data-acquisition capability is improved; combined with the graphical simulation environment of the ROS system, the vehicle's position is estimated and a map is constructed, achieving autonomous localization and navigation. Tests show that, indoors or in occluded environments, the vehicle achieves higher localization accuracy than traditional lidar SLAM or visual SLAM alone, responds quickly, and supports real-time map construction, solving the problems that arise under occlusion or during indoor tasks and enabling the robot to judge accurately and travel to its destination after the map is built.

18.
Computer-Vision-Based Navigation and Preset-Route Tracking Control for an Unmanned Airship   Cited: 2 (self-citations: 0, other citations: 2)
Small unmanned airships have broad application prospects in many areas, including counter-terrorism and riot control, disaster monitoring, atmospheric monitoring, advertising, aerial photography, traffic monitoring, and emergency communication relay platforms. All of these applications require the airship to track a preset route automatically. This paper therefore proposes a strategy based on computer-vision navigation and optimized fuzzy control to achieve automatic tracking of a preset route. The paper first describes in detail the principle of visual navigation based on natural landmarks: the airship's onboard vision system tracks feature points, i.e. natural landmarks that are easy to identify in image processing, such as high-rise buildings in a city, and, combined with a digital map or GIS, obtains their position and geometric feature information; the airship's position and heading are then solved from the geometric relations of pinhole-model imaging. Next, according to the dynamics of the unmanned airship, a fuzzy flight-control system is designed that takes as input the position and heading obtained from visual navigation, and a GA is used to optimize the membership functions of the fuzzy controller. Finally, verification and analysis are carried out.

19.
We have developed a technology for a robot that uses an indoor navigation system based on visual methods to provide the required autonomy. For robots to run autonomously, it is extremely important that they are able to recognize the surrounding environment and their current location. Because it was not necessary to use multiple external sensors, we built a navigation system in our test environment that reduced the burden of information processing, mainly by using sight information from a monocular camera. In addition, we used only natural landmarks such as walls, because we assumed that the environment was a human one. In this article we discuss and explain two modules: a self-position recognition system and an obstacle recognition system. In both systems, the recognition is based on image processing of the sight information provided by the robot's camera. In addition, in order to provide autonomy for the robot, we use an encoder and information from a two-dimensional space map given beforehand. Here, we explain the navigation system that integrates these two modules. We applied this system to a robot in an indoor environment and evaluated its performance, and in a discussion of our experimental results we consider the resulting problems.

20.
Autonomous navigation system using Event Driven-Fuzzy Cognitive Maps   Cited: 2 (self-citations: 2, other citations: 0)
This study developed an autonomous navigation system using Fuzzy Cognitive Maps (FCM). A Fuzzy Cognitive Map is a tool that can model qualitative knowledge in a structured way through concepts and causal relationships; its mathematical representation is based on graph theory. A new variant of FCM, named Event Driven-Fuzzy Cognitive Maps (ED-FCM), is proposed to model decision tasks and/or make inferences in autonomous navigation. The FCM's arcs are updated upon the occurrence of special events such as dynamic obstacle detection. As a result, the developed model is able to represent the robot's dynamic behavior in the presence of environment changes. This modeling ability is achieved by adapting the FCM relationships among concepts. A reinforcement learning algorithm is also used to finely adjust the robot's behavior. Some simulation results are discussed, highlighting the ability of the autonomous robot to navigate among obstacles (navigation in an unknown environment). A fuzzy-based navigation system is used as a reference to evaluate the performance of the proposed autonomous navigation system.
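To make the concept/causal-arc machinery of the abstract concrete, here is a textbook FCM inference step with a sigmoid squashing function (the concept names and weights are hypothetical, not the paper's ED-FCM model):

```python
import numpy as np

def fcm_step(activations, weights):
    """One FCM inference step: each concept aggregates the causal
    influence of the others through the weight matrix, squashed by
    a sigmoid to stay in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(weights.T @ activations)))

# Concepts: [obstacle_near, turn_left, go_forward] (hypothetical).
W = np.array([[0.0,  0.9, -0.8],   # obstacle promotes turning, inhibits forward
              [0.0,  0.0,  0.0],
              [0.0,  0.0,  0.0]])
state = np.array([1.0, 0.0, 0.0])  # obstacle detected
state = fcm_step(state, W)
# An event-driven variant (ED-FCM) would rewrite entries of W when a
# special event, e.g. dynamic obstacle detection, occurs, and a
# reinforcement signal could fine-tune the weights over time.
```

After the step, the "turn_left" concept is activated above the "go_forward" concept, reflecting the causal weights set for an obstacle event.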


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号