Visually Guided Cooperative Robot Actions Based on Information Quality
Authors: Vivek A. Sujan, Steven Dubowsky
Affiliation: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Abstract: In field environments it is usually not possible to provide robots in advance with valid geometric models of their environment and the locations of task elements. Robots or robot teams must build and use such models themselves, locating critical task elements through appropriate sensor-based actions. This paper presents a multi-agent algorithm for a manipulator guidance task based on cooperative visual feedback in an unknown environment. First, an information-based iterative algorithm plans the robot's visual exploration strategy, enabling it to efficiently build 3D models of its environment and task elements. The algorithm uses the measured scene information to select the next camera position based on the expected new information content of that pose, using a metric derived from Shannon's information theory to determine optimal sensing poses for the agent(s) mapping a highly unstructured environment. Second, once an adequate environment model has been built, the quality of the information in the model is used to determine the constraint-based optimum view for task execution. The algorithm applies to an individual agent as well as to multiple cooperating agents. Simulation and experimental demonstrations on a cooperative robot platform performing a two-component insertion/mating task in the field show the effectiveness of the algorithm.
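The entropy-driven next-best-view selection the abstract describes can be illustrated with a short sketch. Below is a minimal Python illustration, assuming a 2D occupancy grid with per-cell Bernoulli occupancy probabilities and a toy circular visibility model; the function names (cell_entropy, visible_cells, next_best_view) and the visibility model are illustrative assumptions, not taken from the paper, which works with 3D models and ray-based sensing.

import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of a Bernoulli occupancy probability."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def visible_cells(grid, pose, radius=3):
    """Toy visibility model: all cells within `radius` of the pose.

    A real system would ray-cast against the current 3D model and
    account for occlusions; this stand-in only illustrates the interface.
    """
    h, w = grid.shape
    ys, xs = np.mgrid[0:h, 0:w]
    return (ys - pose[0]) ** 2 + (xs - pose[1]) ** 2 <= radius ** 2

def next_best_view(grid, candidate_poses):
    """Choose the pose whose view covers the most total uncertainty,
    i.e. maximizes the summed entropy of the cells it would observe."""
    gains = [cell_entropy(grid[visible_cells(grid, p)]).sum()
             for p in candidate_poses]
    return candidate_poses[int(np.argmax(gains))]

# Example: a 10x10 grid, mostly confidently "free", one unexplored patch.
grid = np.full((10, 10), 0.05)   # low-entropy, well-mapped cells
grid[6:9, 6:9] = 0.5             # unexplored region: maximum entropy
poses = [(2, 2), (7, 7), (5, 0)]
print(next_best_view(grid, poses))   # -> (7, 7), facing the unknown patch

The same greedy loop (sense, update the model, re-score candidate poses, move) iterates until the model's information content is judged sufficient for the task, which is the second stage the abstract outlines.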
Keywords: cooperative robots, information theory, unstructured environments