Global contextual guided residual attention network for salient object detection
Authors: Jun Wang, Zhengyun Zhao, Shangqin Yang, Xiuli Chai, Wanjun Zhang, Miaohui Zhang
Affiliations: 1. School of Artificial Intelligence, Henan University, Kaifeng 475004, China; 2. Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng 475004, China
Abstract:

Both high-level semantic features and low-level detail features matter for salient object detection in fully convolutional networks (FCNs), and integrating the two more thoroughly improves a network's ability to represent salient objects. In addition, different channels of the same feature map do not contribute equally to saliency detection. In this paper, we propose a residual attention learning strategy and a multistage refinement mechanism that gradually refine the coarse prediction scale by scale. First, a global information complementary (GIC) module is designed to integrate low-level detail features with high-level semantic features. Second, a multiscale parallel convolutional (MPC) module is employed to extract multiscale features within the same layer. We then present a residual attention mechanism module (RAM) that receives the feature maps of adjacent stages from the hybrid feature cascaded aggregation (HFCA) module; the HFCA enhances these feature maps, reducing the loss of spatial detail and the impact of variations in the object's shape, scale, and position. Finally, we adopt a multiscale cross-entropy loss to guide the network in learning salient features. Experimental results on six benchmark datasets demonstrate that the proposed method significantly outperforms 15 state-of-the-art methods under various evaluation metrics.
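Two of the ideas above — per-channel attention applied residually, and a cross-entropy loss summed over side outputs at multiple scales — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the learned layers of the RAM are replaced by a simple squeeze-and-gate, and the mask resizing uses nearest-neighbour subsampling; all function names here are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    """Per-channel weights for a (C, H, W) feature map: global average
    pooling followed by a sigmoid gate. (A learned attention block would
    insert fully connected layers between the pool and the gate.)"""
    squeezed = feat.mean(axis=(1, 2))   # (C,) global average pool
    weights = sigmoid(squeezed)         # per-channel gate in (0, 1)
    return weights[:, None, None]       # broadcastable back to (C, H, W)

def residual_attention(feat):
    """Residual attention: reweight channels, then add the identity path,
    so the block refines the input features rather than replacing them."""
    return feat + feat * channel_attention(feat)

def multiscale_bce(preds, target, eps=1e-7):
    """Multiscale binary cross-entropy: average the BCE of each side
    output against the ground-truth mask subsampled (nearest-neighbour)
    to that prediction's resolution."""
    total = 0.0
    for p in preds:
        h, w = p.shape
        ys = np.arange(h) * target.shape[0] // h   # row indices into mask
        xs = np.arange(w) * target.shape[1] // w   # column indices
        t = target[ys][:, xs]                      # downsampled mask
        p = np.clip(p, eps, 1 - eps)               # avoid log(0)
        total += -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()
    return total / len(preds)
```

The residual form `x + x * attention(x)` is what lets the attention branch act as a correction on top of the identity mapping, which is generally easier to optimize than learning the reweighted features from scratch.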

Keywords:
This article is indexed in SpringerLink and other databases.