Saliency-based deep convolutional neural network for no-reference image quality assessment
Authors: Sen Jia, Yang Zhang
Affiliation: 1. Intelligent Systems Laboratory, University of Bristol, Bristol, UK; 2. Bristol Vision Institute, University of Bristol, Bristol, UK
Abstract: In this paper, we propose a novel method for No-Reference Image Quality Assessment (NR-IQA) that combines a deep Convolutional Neural Network (CNN) with saliency maps. We first investigate the effect of CNN depth on NR-IQA by comparing our proposed ten-layer Deep CNN (DCNN) with the state-of-the-art CNN architecture proposed by Kang et al. (2014). Our results show that the DCNN architecture delivers higher accuracy on the LIVE dataset. To mimic human vision, we then combine saliency maps with the CNN to form a Saliency-based DCNN (SDCNN) framework for NR-IQA. We compute a saliency map for each image and split both the map and the image into small patches. Each image patch is assigned an importance value based on its corresponding saliency patch. A set of Salient Image Patches (SIPs) is selected according to saliency, and the model is applied only to those SIPs to predict the quality score for the whole image. Our experimental results show that the SDCNN framework is superior to other state-of-the-art approaches on the widely used LIVE dataset. The TID2008 and CSIQ image quality datasets are used to report cross-dataset results, which indicate that the proposed SDCNN generalises well to other datasets.
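The SIP selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size, the number of selected patches, and the `patch_model` callable (standing in for the trained DCNN patch predictor) are all assumptions; the saliency map is taken as a precomputed array of the same size as the image.

```python
import numpy as np

def split_into_patches(arr, patch=32):
    """Split an HxW array into non-overlapping patch x patch blocks (row-major order)."""
    h, w = arr.shape[:2]
    return [arr[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

def select_salient_patches(image, saliency, n_select, patch=32):
    """Assign each image patch an importance value (mean of the co-located
    saliency patch) and return the n_select most salient patches (the SIPs)."""
    img_patches = split_into_patches(image, patch)
    sal_patches = split_into_patches(saliency, patch)
    importance = [p.mean() for p in sal_patches]      # patch importance value
    order = np.argsort(importance)[::-1][:n_select]   # most salient first
    return [img_patches[i] for i in order]

def predict_image_quality(image, saliency, patch_model, n_select=16, patch=32):
    """Image-level score = mean of per-patch predictions over the SIPs only
    (patch_model is a placeholder for the trained patch-quality predictor)."""
    sips = select_salient_patches(image, saliency, n_select, patch)
    return float(np.mean([patch_model(p) for p in sips]))
```

Restricting prediction to the SIPs means the pooled image score is driven by the regions a human observer is most likely to attend to, which is the stated motivation for the saliency weighting.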
This article is indexed in SpringerLink and other databases.