Abstract: | No-reference image quality assessment (NR-IQA) based on deep learning has attracted great research attention recently. However, its performance in terms of accuracy and efficiency remains underexplored. To address these issues, in this paper we propose a quality-distinguishing and patch-comparing NR-IQA approach based on a convolutional neural network (QDPC-CNN). We improve prediction accuracy through two proposed mechanisms: quality-distinguishing adaptation and patch-comparing regression. The former trains multiple models on different subsets of a dataset and adaptively selects one to predict the quality score of a test image according to its quality level; the latter generates patch pairs for regression under different combination strategies, making better use of reference images in network training while enlarging the training data. We further improve the efficiency of network training with a new patch sampling scheme based on the visual importance of each patch. We conduct extensive experiments on several public databases and compare the proposed QDPC-CNN with existing state-of-the-art methods. The experimental results demonstrate that our method outperforms the others in terms of both accuracy and efficiency. |