Similar Documents (20 results)
1.
Kim, Minkyu; Sentis, Luis. Applied Intelligence, 2022, 52(12): 14041-14052

When performing visual servoing or object tracking tasks, active sensor planning is essential to keep targets in sight or to relocate them when missing. In particular, when dealing with a known target missing from the sensor’s field of view, we propose using prior knowledge related to contextual information to estimate its possible location. To this end, this study proposes a Dynamic Bayesian Network that uses contextual information to effectively search for targets. Monte Carlo particle filtering is employed to approximate the posterior probability of the target’s state, from which uncertainty is defined. We define the robot’s utility function in information-theoretic terms as the reduction in task uncertainty achieved by an action, prompting the robot to investigate the location where the target is most likely to be. Using a context state model, we design the agent’s high-level decision framework as a Partially Observable Markov Decision Process. Based on the estimated belief state of the context via sequential observations, the robot’s navigation actions are determined to conduct exploratory and detection tasks. By using this multi-modal context model, our agent can effectively handle basic dynamic events, such as obstruction of targets or their absence from the field of view. We implement and demonstrate these capabilities on a mobile robot in real time.

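The pipeline in this abstract hinges on two ingredients: a particle approximation of the target's posterior and an uncertainty measure derived from it. The Python sketch below illustrates those ingredients only in outline; the grid of candidate cells, the detection and false-alarm rates, and the reweighting rule are assumptions for the example, not the authors' implementation.

```python
import numpy as np

def update_belief(particles, weights, observed_cell, detected, p_detect=0.8, p_false=0.05):
    """Reweight particles over candidate target cells given one camera observation.

    particles: (N,) array of integer cell indices hypothesising the target location.
    observed_cell: cell the camera just inspected; detected: whether the detector fired.
    The detection/false-alarm rates are illustrative, not taken from the paper.
    """
    in_cell = (particles == observed_cell)
    if detected:
        likelihood = np.where(in_cell, p_detect, p_false)
    else:
        likelihood = np.where(in_cell, 1.0 - p_detect, 1.0 - p_false)
    weights = weights * likelihood
    return weights / weights.sum()

def belief_entropy(particles, weights, num_cells):
    """Shannon entropy of the cell-marginal belief; the planner seeks actions that reduce it."""
    marginal = np.bincount(particles, weights=weights, minlength=num_cells)
    marginal = marginal[marginal > 0]
    return -np.sum(marginal * np.log(marginal))

# Minimal usage: uniform belief over 25 cells, one negative observation of cell 12.
rng = np.random.default_rng(0)
particles = rng.integers(0, 25, size=2000)
weights = np.full(2000, 1.0 / 2000)
weights = update_belief(particles, weights, observed_cell=12, detected=False)
print(belief_entropy(particles, weights, num_cells=25))
```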

2.
A Complexity-Level Analysis of the Sensor Planning Task for Object Search
Object search is the task of searching for a given 3D object in a given 3D environment with a controllable camera. Sensor planning for object search refers to the task of selecting the sensing parameters of the camera so as to bring the target into the camera's field of view and to make the image of the target easy to recognize with the available recognition algorithms. In this paper, we study the task of sensor planning for object search from a theoretical point of view. We formulate the task and point out many of its important properties. We then analyze the task at the complexity level and prove that it is NP-complete.

3.
Advanced Robotics, 2013, 27(8): 751-771
We propose a new method of sensor planning for mobile robot localization using Bayesian network inference. Since causal relations between situations of the robot's behavior and sensing events can be modeled as nodes of a Bayesian network, inference over the network can handle the uncertainty in sensor planning and thus yield appropriate sensing actions. In this system we employ a multi-layered behavior architecture for navigation and localization. This architecture effectively combines mapping of local sensor information and inference via a Bayesian network for sensor planning. The mobile robot recognizes local sensor patterns for localization and navigation using a learned regression function. Since the environment may change during navigation and sensing capability is limited in the real world, the mobile robot actively gathers sensor information to construct and reconstruct a Bayesian network, and then derives an appropriate sensing action that maximizes a utility function based on inference over the reconstructed network. The utility function takes into account both the localization belief and the sensing cost. Simulation and real-robot experiments validate the sensor planning system.

4.
In this paper, we propose a hierarchical approach to sensor planning for the global localization of a mobile robot. Our system consists of two subsystems: a lower layer and a higher layer. The lower layer uses a particle filter to evaluate the posterior probability of the localization. When the particles converge into clusters, the higher layer starts particle clustering and sensor planning to generate an optimal sensing action sequence for localization. The higher layer uses a Bayesian network for probabilistic inference. The sensor planning takes into account both the localization belief and the sensing cost. We conducted simulations and real robot experiments to validate the proposed approach.
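A minimal illustration of the hand-off between the two layers, under assumed details: once the lower layer's particles converge, the higher layer needs discrete localization hypotheses to plan over. The sketch below groups particle positions with a generic k-means step (via scikit-learn) and keeps tight clusters as hypotheses; the cluster count and spread threshold are placeholders, not the paper's clustering routine.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_particles(positions, weights, k=3, spread_threshold=0.5):
    """Group converged particles into localization hypotheses.

    positions: (N, 2) particle x/y positions; weights: (N,) normalized weights.
    Returns a list of (centroid, total_weight) pairs for clusters tight enough to be
    treated as distinct hypotheses by the higher planning layer.
    """
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(positions, sample_weight=weights)
    hypotheses = []
    for c in range(k):
        members = km.labels_ == c
        if not members.any():
            continue
        spread = positions[members].std(axis=0).mean()
        if spread < spread_threshold:        # cluster is tight enough to act on
            hypotheses.append((km.cluster_centers_[c], weights[members].sum()))
    return hypotheses

# Example: particles split between two rooms plus uniform noise.
rng = np.random.default_rng(1)
pos = np.vstack([rng.normal([2, 2], 0.2, (400, 2)),
                 rng.normal([8, 5], 0.2, (400, 2)),
                 rng.uniform(0, 10, (200, 2))])
w = np.full(len(pos), 1.0 / len(pos))
for centroid, mass in cluster_particles(pos, w):
    print(centroid, mass)
```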

5.

In this paper, we consider the problem of autonomous search using single or multiple Unmanned Aerial Vehicles (UAVs) equipped with downward-facing cameras. A model of the effectiveness of the search sensor (a camera, in this case) is essential for developing optimal deployment and path-planning strategies for efficient UAV search. The probability of detecting a target of interest as a function of its distance from the point directly below the camera is used to model the search effectiveness. We carried out experiments and obtained a search-effectiveness model for a camera in a laboratory environment, using ArUco markers and triangular shapes as targets.

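The search-effectiveness model described above reduces to a curve of detection probability versus ground distance from the camera's nadir point. As an illustration only, the snippet below fits a logistic fall-off to hypothetical per-distance detection rates with SciPy; both the functional form and the numbers are assumptions, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def detection_model(d, d50, k):
    """Probability of detecting the target at ground distance d from the nadir point.
    d50 is the distance at which detection probability drops to 0.5, k the fall-off rate."""
    return 1.0 / (1.0 + np.exp(k * (d - d50)))

# Hypothetical trial data: distance bins (m) and fraction of targets detected per bin.
distances = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
detect_rate = np.array([0.98, 0.96, 0.90, 0.70, 0.40, 0.15, 0.05])

params, _ = curve_fit(detection_model, distances, detect_rate, p0=[1.8, 3.0])
d50, k = params
print(f"fitted d50={d50:.2f} m, k={k:.2f}")
print("P(detect) at 1.2 m:", detection_model(1.2, d50, k))
```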

6.
We present a robust target tracking algorithm for a mobile robot. It is assumed that the mobile robot carries a sensor with a fan-shaped field of view and finite sensing range. The goal of the proposed tracking algorithm is to minimize the probability of losing the target. If the distribution of the next position of the moving target is available as a Gaussian distribution from a motion prediction algorithm, the proposed algorithm can guarantee the tracking success probability. In addition, the proposed method minimizes the moving distance of the mobile robot based on the chosen bound on the tracking success probability. Although the considered problem is non-convex, we derive a closed-form solution when the heading is fixed and develop a real-time algorithm for solving the target tracking problem. We also present a robust target tracking algorithm for aerial robots in 3D. The performance of the proposed method is evaluated extensively in simulation. The proposed algorithm has been successfully applied in field experiments, using a Pioneer mobile robot with a Microsoft Kinect sensor to follow a pedestrian.

7.
We consider the problem of planning sensor control strategies that enable a sensor to be automatically configured for robot tasks. In this paper we present robust and efficient algorithms for computing the regions from which a sensor has unobstructed or partially obstructed views of a target in a goal. We apply these algorithms to the Error Detection and Recovery problem of recognizing whether a goal or failure region has been achieved. Based on these methods and strategies for visually cued camera control, we have built a robot surveillance system in which one mobile robot navigates to a viewing position from which it has an unobstructed view of a goal region, and then uses visual recognition to detect when a specific target has entered the region.

8.
Robot intelligence requires a real-time connection between sensing and action. A new computational principle of robotics that efficiently implements such a connection is of utmost importance for the next generation of robotics. In this paper, a perception–action network is presented as a means of efficiently integrating sensing, knowledge, and action for sensor fusion and planning. The network consists of a number of heterogeneous computational units, representing feature transformation and decision-making for action, which are interconnected as a dynamic system. New input stimuli to the network invoke the evolution of network states to a new equilibrium, through which a real-time integration of sensing, knowledge, and action can be accomplished. The network provides a formal, yet general and efficient, method of achieving sensor fusion and planning, because the uncertainties of signals propagated in the network can be controlled by modifying sensing parameters and robot actions. Algorithms for sensor planning based on the proposed network are established and applied to robot self-localization. Simulation and experimental results are shown.

9.
A Survey of Sensor Planning in Computer Vision
In computer vision applications such as photogrammetry and object recognition, sensor planning, that is, planning camera and illumination parameters in advance, is necessary for improving 3D localization accuracy, because it allows vision tasks to be accomplished more effectively. To give an overview of sensor planning in computer vision, this paper first analyzes the factors that influence sensor planning, including sensor and illumination parameters, feature detection constraints, and sensor and target models. It then surveys recent research progress in sensor planning, classifies sensor planning problems according to the methods they employ, and characterizes sensor planning as a combinatorial optimization process. Finally, several possible directions for future research on sensor planning are pointed out.

10.
A high-level intelligent robot is required to perceive its environment independently and to reason correctly about its actions. In situation-calculus action theories, representing action reasoning with sensing actions and knowledge requires an external designer to write background axioms, sensing results, and the corresponding knowledge changes for the agent, making the action reasoning dependent on the designer. This work suitably extends the situation-calculus action theory: a representation of sensors is added to the formal language of the action theory, and the generation of the agent's new knowledge is grounded in the results of applying the sensors. The extended system can formally represent the robot's perception of the environment and convert sensing results into knowledge, can perform action reasoning independently of the designer, and makes the "black-box" process of sensing actions explicit.

11.
Autonomous environment mapping is an essential part of efficiently carrying out complex missions in unknown indoor environments. In this paper, a low-cost mapping system composed of a web camera with structured light and sonar sensors is presented. We propose a novel exploration strategy based on the frontier concept using this low-cost mapping system. Exploiting the complementary characteristics of the structured-light camera and the sonar sensors, the two sensor types are fused so that a mobile robot can explore an unknown environment while mapping it efficiently. Sonar sensors are used to roughly find obstacles, and the structured-light vision system is used to increase the occupancy probability of obstacles or walls detected by the sonar sensors. To overcome the inaccuracy of frontier-based exploration, we propose an exploration strategy that both delineates obstacles and reveals new regions using the mapping system. Since the processing cost of the vision module is high, we solve a vision-sensing placement problem that minimizes the number of vision sensing operations by analyzing the geometry of the proposed sonar and vision probability models. Through simulations and indoor experiments, the efficiency of the proposed exploration strategy is demonstrated and compared with other exploration strategies.

12.
By virtue of their high accuracy and fast response, intensity-based fibre optic proximity sensors have found applications in robot end-effector position control. Unfortunately, their characteristics show that a single such sensor is incapable of orientation sensing. To detect robot orientation errors, a multi-sensor approach is adopted and a detection strategy developed. The problems of real-time orientation control are studied in the context of general robot Cartesian-space position control. The performance of the proposed strategy has been tested on a PUMA 560 robot and the results are presented in this paper. Successful applications of the strategy developed are demonstrated in target tracking and surface following.

13.
In this paper, ongoing work on an approach for planning sensing actions and controlling intelligent, purposive robotic systems is presented. The method uses Bayesian decision analysis (BDA) to decide which sensing actions should be performed. This offers a probabilistic framework that provides more dynamic and modular behaviour than traditional rule-based planners. Experiments show that the Bayesian sensor planning strategy is capable of controlling an autonomous mobile robot operating in partly known environments.

14.
Computer Communications, 2007, 30(14-15): 2721-2734
One practical goal of sensor deployment in the design of distributed sensor systems is to achieve optimal monitoring and surveillance of a target region. The optimality of a sensor deployment scheme is a tradeoff between implementation cost and coverage quality. In this paper, we consider a probabilistic sensing model that provides different sensing capabilities, in terms of coverage range and detection quality, at different costs. A sensor deployment problem for a planar grid region is formulated as a combinatorial optimization problem with the objective of maximizing the overall detection probability within a given deployment cost. This problem is shown to be NP-complete, and an approximate solution based on a two-dimensional genetic algorithm is proposed. The solution relies on problem-specific choices of genetic encoding, fitness function, and genetic operators such as crossover, mutation, and translocation. Simulation results for various problem sizes are presented to show the benefits of this method as well as its performance relative to a greedy sensor placement method.
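A rough sketch of the kind of two-dimensional genetic algorithm described above, under simplifying assumptions: a single sensor type, an exponentially decaying per-sensor detection probability, block-swap crossover, bit-flip mutation, and truncation selection. The paper's actual encoding, translocation operator, and cost structure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
GRID, RANGE, SENSOR_COST, BUDGET = 10, 2.5, 1.0, 12.0   # all values are illustrative

def coverage(layout):
    """Mean detection probability over the grid: each cell is detected with
    probability 1 - prod(1 - p_i), with p_i decaying exponentially with distance."""
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    miss = np.ones((GRID, GRID))
    for sy, sx in zip(*np.nonzero(layout)):
        p = np.exp(-np.hypot(ys - sy, xs - sx) / RANGE)
        miss *= (1.0 - p)
    return (1.0 - miss).mean()

def fitness(layout):
    """Coverage, penalized when the deployment cost exceeds the budget."""
    cost = layout.sum() * SENSOR_COST
    return coverage(layout) - max(0.0, cost - BUDGET)

def crossover(a, b):
    """Two-dimensional crossover: copy a random rectangular block from b into a."""
    y0, y1 = sorted(rng.integers(0, GRID, 2))
    x0, x1 = sorted(rng.integers(0, GRID, 2))
    child = a.copy()
    child[y0:y1 + 1, x0:x1 + 1] = b[y0:y1 + 1, x0:x1 + 1]
    return child

def mutate(layout, rate=0.02):
    """Bit-flip mutation over grid cells."""
    return np.logical_xor(layout, rng.random((GRID, GRID)) < rate).astype(int)

pop = [(rng.random((GRID, GRID)) < 0.1).astype(int) for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                   # truncation selection
    children = []
    for _ in range(30):
        i, j = rng.choice(10, size=2, replace=False)
        children.append(mutate(crossover(parents[i], parents[j])))
    pop = parents + children

best = max(pop, key=fitness)
print("sensors deployed:", int(best.sum()), "mean detection probability:", round(coverage(best), 3))
```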

15.
Consider the problem of visually finding an object in a mostly unknown space with a mobile robot. It is clear that all possible views and images cannot be examined in a practical system. Visual attention is a complex phenomenon; we view it as a mechanism that optimizes the search processes inherent in vision (Tsotsos, 2001; Tsotsos et al., 2008) [1], [2]. Here, we describe a particular example of a practical robotic vision system that employs some of these attentive processes. We cast this as an optimization problem: maximizing the probability of finding the target given a fixed cost limit on the total number of robotic actions required to find the visual target. Due to the inherent intractability of this problem, we present an approximate solution and investigate its performance and properties. We conclude that our approach is sufficient to solve this problem and has additional desirable empirical characteristics.

16.
To meet the real-time and robustness requirements of target tracking for a monocular-vision mobile robot, an improved Camshift algorithm based on a Kalman filter is proposed for detecting and locating the target. The Kalman prediction is used as the initial target position, compensating for the shift of the target in the image caused by the relative motion of the camera and the target. When the system loses the target, the cause of the loss is determined, and the search window is adaptively enlarged accordingly to serve as the initial search window of the Camshift algorithm in the next frame. To verify the effectiveness of the improved algorithm, a tracked-robot real-time target tracking system applying the algorithm was developed in-house. Experimental results show that the system has good robustness and real-time performance.
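A minimal single-target rendering of the Kalman-guided Camshift idea using OpenCV's built-in KalmanFilter and CamShift. The loss diagnosis from the paper is reduced here to a simple back-projection mass check, and the window-expansion factor, noise covariances, and threshold are assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def roi_histogram(frame, window):
    """Hue histogram of the initial target region, used for back-projection."""
    x, y, w, h = window
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def make_kalman(x0, y0):
    """Constant-velocity Kalman filter over the target's image-plane position."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # assumed noise levels
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    kf.statePost = np.array([[x0], [y0], [0], [0]], np.float32)
    return kf

def track(video_path, init_window, min_mass=2000.0):
    """CamShift guided by the Kalman prediction; on apparent loss, expand the window."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = init_window                     # (x, y, w, h) of the initial target box
    hist = roi_histogram(frame, init_window)
    kf = make_kalman(x + w / 2.0, y + h / 2.0)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = init_window
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)

        # Centre the search window on the Kalman prediction to compensate for
        # camera/target relative motion between frames.
        px, py = kf.predict()[:2].flatten()
        x, y, w, h = window
        window = (max(0, int(px - w / 2)), max(0, int(py - h / 2)), w, h)

        _, window = cv2.CamShift(backproj, window, term)
        x, y, w, h = window
        mass = float(backproj[y:y + h, x:x + w].sum())
        if mass < min_mass:
            # Target presumed lost: grow the search window (assumed factor of 2).
            fh, fw = backproj.shape
            window = (max(0, x - w // 2), max(0, y - h // 2),
                      min(w * 2, fw), min(h * 2, fh))
        else:
            cx, cy = x + w / 2.0, y + h / 2.0
            kf.correct(np.array([[cx], [cy]], np.float32))
    cap.release()
```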

17.
To improve detection capability in post-disaster rescue, solve the problem of information acquisition, and provide rescuers with more, and more specific, rescue information for formulating scientific and efficient rescue plans, a spherical rescue and detection robot with an STM32 microcontroller as its control core was designed with reference to the general technical requirements for ground search-and-rescue robots operating in rubble. The spherical robot combines adaptive search with human-assisted search; a camera and a 5.8 GHz video transmission module send image information to the user terminal for human-robot interaction, and a joystick allows the operator to steer the robot through difficult maneuvers. An onboard TCRT5000 infrared probe can automatically follow black track lines, and when the robot's ultrasonic sensor detects an obstacle, a buzzer and warning light are triggered to raise an alarm and prompt an adjustment of the detection direction, completing the rescue detection task.

18.
Localization is a key issue for a mobile robot, in particular in environments where a globally accurate positioning system, such as GPS, is not available. In these environments, accurate and efficient robot localization is not a trivial task, as an increase in accuracy usually leads to an impoverishment in efficiency and vice versa. Active perception appears as an appealing way to improve the localization process by increasing the richness of the information acquired from the environment. In this paper, we present an active perception strategy for a mobile robot equipped with a visual sensor mounted on a pan-tilt mechanism. The visual sensor has a limited field of view, so the goal of the active perception strategy is to use the pan-tilt unit to direct the sensor to informative parts of the environment. To achieve this goal, we use a topological map of the environment and a Bayesian non-parametric estimation of the robot position based on a particle filter. We slightly modify the regular implementation of this filter by including an additional step that selects the best perceptual action using Monte Carlo estimations. We understand the best perceptual action as the one that produces the greatest reduction in uncertainty about the robot position. We also include in our optimization function a cost term that favors efficient perceptual actions. Previous works have proposed active perception strategies for robot localization, but mainly in the context of range sensors, grid representations of the environment, and parametric techniques such as the extended Kalman filter. Accordingly, the main contributions of this work are: i) development of a sound strategy for active selection of perceptual actions in the context of a visual sensor and a topological map; ii) real-time operation using a modified version of the particle filter and Monte Carlo based estimations; iii) implementation and testing of these ideas using simulations and a real case scenario. Our results indicate that, in terms of accuracy of robot localization, the proposed approach decreases mean average error and standard deviation with respect to a passive perception scheme. Furthermore, in terms of efficiency, the active scheme is able to operate in real time without adding a relevant overhead to the regular robot operation.
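The heart of the active scheme is picking the pan-tilt action whose observation is expected to shrink the localization belief the most, net of its movement cost. The fragment below is a schematic Monte Carlo version of that selection; the observation model is left as a user-supplied callable and the cost weighting is an assumption, so this is not the paper's implementation.

```python
import numpy as np

def entropy(weights):
    """Shannon entropy of a normalized particle weight vector."""
    w = weights[weights > 0]
    return -np.sum(w * np.log(w))

def expected_entropy_after(particles, weights, pan_angle, observation_model, rng, n_samples=20):
    """Monte Carlo estimate of belief entropy after looking toward pan_angle.

    observation_model(particles, pan_angle, true_idx) must return a simulated
    observation for the sampled 'true' particle and the per-particle likelihood
    of that observation. weights must be normalized (sum to 1).
    """
    total = 0.0
    for _ in range(n_samples):
        true_idx = rng.choice(len(particles), p=weights)   # sample a possible true pose
        _, likelihood = observation_model(particles, pan_angle, true_idx)
        w = weights * likelihood
        s = w.sum()
        total += entropy(w / s) if s > 0 else entropy(weights)
    return total / n_samples

def best_pan_action(particles, weights, pan_angles, current_pan, observation_model,
                    rng, cost_weight=0.05):
    """Pick the pan angle maximizing expected uncertainty reduction minus movement cost."""
    h_now = entropy(weights)
    scores = []
    for a in pan_angles:
        gain = h_now - expected_entropy_after(particles, weights, a, observation_model, rng)
        cost = cost_weight * abs(a - current_pan)
        scores.append(gain - cost)
    return pan_angles[int(np.argmax(scores))]
```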

19.
In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from object features to the effect categories. After learning, the robot can make plans to achieve desired goals, emulate end states of demonstrated actions, monitor the plan execution, and take corrective actions using the perceptual structures employed or discovered during learning. We argue that the proposed learning system shares crucial elements with the development of infants of 7–10 months of age, who explore the environment and learn the dynamics of objects through goal-free exploration. In addition, we discuss goal emulation and planning in relation to older infants with no symbolic inference capability and to non-linguistic animals that utilize object affordances to make action plans.

20.
This paper presents a sensor-based robotic system, called Plan-N-Scan, for collision-free, autonomous exploration and workspace mapping using a wrist-mounted laser range camera. The system involves gaze planning with collision-free sensor positioning in a static environment, resulting in a 3-D map suitable for real-time collision detection. This work was initially motivated by the great demand for autonomous exploration systems in the remediation of buried but leaking tanks containing hazardous nuclear waste. Plan-N-Scan uses two types of representations: a spherical model of the manipulator and a weighted voxel map of the workspace. In addition to providing efficient collision detection, the voxel map allows the incorporation of different types of spatial occupancy information. The mapping of unknown sections of the workspace is achieved by either target or volume scanning. Target scanning incorporates a powerful A*-based search, along with a viewing-position selection strategy, to incrementally acquire scans of the scene and use them to capture targets, even if they are not immediately viewable by the range camera. Volume scanning is implemented as an iterative process which automatically selects scan targets, then employs the target scanning process to scan these targets and explore the selected workspace volume. The performance and reliability of the system were demonstrated through simulation and a number of experiments involving a real robot system. The ability of the Plan-N-Scan system to incrementally acquire range information and successfully scan both targets and workspace volumes was demonstrated.
