Digital twin-enabled grasp outcomes assessment for unknown objects using visual-tactile fusion perception
Affiliation: 1. School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; 2. Department of Production Engineering, KTH Royal Institute of Technology, Stockholm 10044, Sweden; 3. Beijing Institute of Control Engineering, Beijing 100191, China
Abstract: Humans can instinctively predict whether a given grasp will succeed from visual and rich haptic feedback. Toward the next generation of smart robotic manufacturing, robots must be equipped with similar capabilities to grasp unknown objects in unstructured environments. However, most existing data-driven methods take global visual images and tactile readings from a real-world system as input, which makes them unable to predict grasp outcomes for cluttered objects or to generate large-scale datasets. First, this paper proposes a visual-tactile fusion method to predict the outcome of grasping cluttered objects, the most common scenario in grasping applications. Concretely, the multimodal fusion network (MMFN) takes the local point cloud within the gripper as the visual input and the images provided by two high-resolution tactile sensors as the tactile input. Second, because collecting data in the real world is costly and time-consuming, this paper proposes a digital twin-enabled robotic grasping system to collect large-scale multimodal datasets and investigates how domain randomization and domain adaptation can bridge the sim-to-real gap. Finally, extensive validation experiments are conducted in both physical and virtual environments. The results demonstrate that the proposed method effectively assesses grasp stability for cluttered objects and achieves zero-shot sim-to-real policy transfer on a real robot with the aid of the proposed transfer strategy.
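The abstract gives no implementation details for the MMFN. As a rough illustration of the late-fusion design it describes (a point-cloud encoder for the local view inside the gripper plus an image encoder shared by the two tactile sensors), here is a minimal PyTorch sketch. All layer sizes, encoder choices, and class names are assumptions for illustration, not the paper's architecture.

    # Minimal sketch of a visual-tactile fusion classifier in the spirit of
    # the MMFN described in the abstract. Every layer size, encoder choice,
    # and name below is an assumption, not the paper's actual design.
    import torch
    import torch.nn as nn

    class PointCloudEncoder(nn.Module):
        """PointNet-style encoder for the local point cloud inside the
        gripper (assumed input shape: B x 3 x N points)."""
        def __init__(self, out_dim=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Conv1d(3, 64, 1), nn.ReLU(),
                nn.Conv1d(64, 128, 1), nn.ReLU(),
                nn.Conv1d(128, out_dim, 1),
            )

        def forward(self, pts):
            feat = self.mlp(pts)              # B x out_dim x N
            return feat.max(dim=2).values     # symmetric max-pool over points

    class TactileEncoder(nn.Module):
        """Small CNN for one tactile image (assumed input: B x 3 x H x W)."""
        def __init__(self, out_dim=128):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, out_dim)

        def forward(self, img):
            return self.fc(self.cnn(img).flatten(1))

    class MMFN(nn.Module):
        """Late fusion: concatenate the point-cloud embedding with the two
        tactile embeddings, then predict grasp success probability."""
        def __init__(self):
            super().__init__()
            self.visual = PointCloudEncoder(256)
            self.tactile = TactileEncoder(128)  # shared weights, both fingers
            self.head = nn.Sequential(
                nn.Linear(256 + 2 * 128, 256), nn.ReLU(),
                nn.Linear(256, 1),
            )

        def forward(self, pts, tac_left, tac_right):
            z = torch.cat(
                [self.visual(pts),
                 self.tactile(tac_left),
                 self.tactile(tac_right)], dim=1)
            return torch.sigmoid(self.head(z))  # grasp success probability

    # Example forward pass with dummy tensors
    # (batch of 4, 1024 points, two 64x64 tactile images):
    net = MMFN()
    p = net(torch.randn(4, 3, 1024),
            torch.randn(4, 3, 64, 64),
            torch.randn(4, 3, 64, 64))

Likewise, the abstract mentions domain randomization during simulated data collection but does not list the randomized parameters. The sketch below shows only the generic pattern of resampling rendering, physics, and sensor-noise parameters before each grasp trial; every parameter name and range is hypothetical.

    # Generic domain-randomization sketch for digital twin data collection.
    # The simulator, parameters, and ranges used in the paper are not given;
    # everything here is a hypothetical illustration of the pattern.
    import random

    def randomize_episode():
        """Sample fresh simulation parameters before each grasp trial so the
        learned predictor does not overfit to one rendering/physics setup."""
        return {
            # visual randomization: lighting, textures, camera pose jitter
            "light_intensity": random.uniform(0.5, 1.5),
            "texture_id": random.randrange(100),
            "camera_jitter_deg": random.uniform(-2.0, 2.0),
            # physics randomization: object and contact properties
            "object_mass_kg": random.uniform(0.05, 0.5),
            "friction_coeff": random.uniform(0.3, 1.0),
            # sensor noise injected into the simulated tactile images
            "tactile_noise_std": random.uniform(0.0, 0.02),
        }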
This article has been indexed in ScienceDirect and other databases.