Classifier aided training for semantic segmentation
Affiliation:
1. Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, PR China
2. Institute of Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, PR China
3. Huakong TsingJiao Information Science (Beijing) Limited (TsingJiao), Beijing 100084, PR China
4. Big Data Center of State Grid Corporation of China, Beijing 100031, PR China
1. College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
2. National Engineering Laboratory for Robot Vision Perception and Control, Hunan Key Laboratory of Intelligent Robot Technology in Electronic Manufacturing, Hunan University, Changsha 410082, China
3. Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G2E1, Canada
4. Department of Mechanical Engineering, York University, Toronto, ON M3J1P3, Canada
Abstract: Semantic segmentation is a prominent problem in scene understanding, expressed as a dense labeling task, and deep learning models are among the main methods for solving it. Traditional training algorithms for semantic segmentation models produce less-than-satisfactory results unless combined with post-processing techniques such as conditional random fields (CRFs). In this paper, we propose a method that incorporates classification information into the training of the segmentation network. Our method employs a classification network that detects the presence of classes in the segmented output; these class scores are then used to train the segmentation model. The approach is motivated by the observation that conditioning the training of the segmentation model on these scores allows higher-order features to be captured. Our experiments show significantly improved performance of the segmentation model on the CamVid and CityScapes datasets with no additional post-processing.
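The abstract describes combining a per-pixel segmentation loss with an image-level classification term derived from the segmented output. A minimal NumPy sketch of that idea follows; the global max-pooling, the sigmoid presence scores, and the weighting factor `alpha` are illustrative assumptions and not the paper's exact formulation.

```python
import numpy as np

def class_presence(seg_logits):
    """Image-level class-presence scores derived from a segmentation map.

    seg_logits: array of shape (C, H, W) with per-pixel class scores.
    Global max-pooling over spatial positions gives one score per class,
    squashed to (0, 1) with a sigmoid (an assumed design choice).
    """
    pooled = seg_logits.reshape(seg_logits.shape[0], -1).max(axis=1)
    return 1.0 / (1.0 + np.exp(-pooled))

def combined_loss(seg_logits, pixel_labels, image_classes, alpha=0.5):
    """Pixel-wise cross-entropy plus a classifier-aided presence term.

    pixel_labels: (H, W) integer class indices.
    image_classes: (C,) binary vector of classes present in the image.
    alpha: assumed weight balancing the two terms.
    """
    C, H, W = seg_logits.shape
    # Softmax over the class axis, numerically stabilized.
    e = np.exp(seg_logits - seg_logits.max(axis=0, keepdims=True))
    probs = e / e.sum(axis=0, keepdims=True)
    # Mean per-pixel negative log-likelihood of the true class.
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    pix = -np.log(probs[pixel_labels, rows, cols] + 1e-9).mean()
    # Binary cross-entropy between presence scores and image-level labels.
    p = class_presence(seg_logits)
    cls = -(image_classes * np.log(p + 1e-9)
            + (1.0 - image_classes) * np.log(1.0 - p + 1e-9)).mean()
    return pix + alpha * cls
```

In this sketch the segmentation network receives gradient signal both from the dense labels and from the image-level presence term, which is the mechanism the abstract attributes the improvement to.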
Keywords: Scene understanding; Semantic segmentation; Computer vision; Deep learning
This article is indexed in ScienceDirect and other databases.