Perceptual-based distributed video coding

Authors: Yu-Chen Sun, Chun-Jen Tsai

Affiliation: Dept. of Computer Science, National Chiao Tung University, Hsinchu 30010, Taiwan

Abstract: In this paper, we propose a perceptual-based distributed video coding (DVC) technique. Unlike traditional video codecs, DVC performs the video prediction process at the decoder side using previously received frames. The predicted video frames (i.e., the side information) contain prediction errors, and the encoder transmits error-correcting parity bits so that the decoder can reconstruct the video frames from the side information. However, channel codes based on i.i.d. noise models are not always efficient at correcting video prediction errors. In addition, some of the prediction errors do not cause perceptible visual distortions; from a perceptual coding point of view, there is no need to correct such errors. This paper proposes a scheme in which the decoder performs perceptual quality analysis on the predicted side information and requests parity bits only for visually sensitive errors. More importantly, with the proposed technique, key frames can be encoded at higher rates while still maintaining consistent visual quality across the video sequence. As a result, the objective PSNR of the decoded video sequence also increases. Experimental results show that the proposed technique improves the R-D performance of a transform-domain DVC codec both subjectively and objectively. Comparisons with a well-known DVC codec show that the proposed perceptual-based coding scheme is very promising for the distributed video coding framework.
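As a rough illustration only, the Python sketch below mimics the decoder-side behaviour described in the abstract: side-information blocks whose prediction errors are judged visually sensitive trigger a parity-bit request over the feedback channel, while imperceptibly distorted blocks are left uncorrected. The block size, the threshold `SENSITIVITY_T`, and the helpers `estimate_visual_sensitivity`, `request_parity_bits`, and `correct_block` are hypothetical placeholders; the paper's actual analysis relies on region-of-interest, motion-consistency, and texture-consistency measures rather than the simple texture-masking proxy used here.

```python
# Conceptual sketch of a perceptual-based DVC decoder loop (not the authors'
# implementation). Blocks of the side information are compared against a
# previously decoded reference frame; only blocks whose estimated visual
# sensitivity exceeds a threshold are corrected with requested parity bits.
import numpy as np

BLOCK = 16            # block size in pixels (assumed)
SENSITIVITY_T = 4.0   # visual-sensitivity threshold (assumed, arbitrary units)


def blocks(frame, size=BLOCK):
    """Yield (row, col, block) tuples tiling the frame."""
    h, w = frame.shape
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, frame[r:r + size, c:c + size]


def estimate_visual_sensitivity(si_block, ref_block):
    """Crude stand-in for the paper's perceptual analysis: prediction-error
    energy weighted by a texture-masking proxy (errors in flat regions are
    masked less, hence more sensitive)."""
    err = si_block.astype(np.float64) - ref_block.astype(np.float64)
    masking = 1.0 + ref_block.astype(np.float64).std()
    return np.abs(err).mean() / masking


def decode_wz_frame(side_info, reference, request_parity_bits, correct_block):
    """Reconstruct a Wyner-Ziv frame from side information, requesting parity
    bits only for blocks with visually sensitive prediction errors."""
    decoded = side_info.copy()
    for r, c, si_blk in blocks(side_info):
        ref_blk = reference[r:r + BLOCK, c:c + BLOCK]
        if estimate_visual_sensitivity(si_blk, ref_blk) > SENSITIVITY_T:
            parity = request_parity_bits(r, c)                  # feedback channel
            decoded[r:r + BLOCK, c:c + BLOCK] = correct_block(si_blk, parity)
        # otherwise keep the (imperceptibly distorted) side information as-is
    return decoded


if __name__ == "__main__":
    # Toy demonstration with dummy feedback-channel callbacks.
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (64, 64), dtype=np.int16)
    si = ref.copy()
    si[:BLOCK, :BLOCK] += 20                                    # visible error in one block
    out = decode_wz_frame(
        si, ref,
        request_parity_bits=lambda r, c: None,                  # dummy request
        correct_block=lambda blk, parity: blk,                  # dummy correction
    )
```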

Keywords: Distributed video coding; Perceptual-based coding; Wyner-Ziv coding; Region-of-interest analysis; Motion consistency analysis; Texture consistency analysis; Visual distortion estimation; Side-information error classification
|