A scalable deep neural network to detect low quality images without a reference
2021
Online streaming services have been growing at a fast pace. To provide the best user experience, it is necessary to detect low-quality images in videos so that we can repair or improve them before showing them to customers. For example, for movie and TV-show streaming services, it is important to check whether an original (master) video produced by a studio contains low-quality images with artifacts such as up-scaling or interlacing; for live streaming services, it is important to detect whether a streamed video has hits due to encoding such as H.264 or MPEG2. Impairment detection is usually measured by no-reference (NR) metrics, because it is often difficult, and sometimes impossible, to obtain the original (master) videos. On the other hand, today's research in the image quality area, such as super-resolution, is mainly focused on full-reference (FR) metrics like PSNR or VMAF. In this paper, we present an algorithm that is able to reliably compute five types of spatial NR metrics commonly used in the online streaming industry. The algorithm consists of two components: a pre-processing step that spatially de-correlates pixel intensity values, and a novel deep neural network (DNN) that quantifies the NR metrics at the image-region level. We show that our algorithm achieves better performance than state-of-the-art algorithms in this area.
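The abstract does not specify the exact de-correlation transform. A common choice in no-reference image quality assessment is the mean-subtracted contrast-normalized (MSCN) coefficients popularized by BRISQUE, so the sketch below illustrates that step under this assumption; the function name `mscn` and its parameters are hypothetical, not the authors' implementation.

```python
# Illustrative sketch (assumption): spatial de-correlation via MSCN
# coefficients, a standard pre-processing step in NR-IQA. This is not
# necessarily the transform used in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image: np.ndarray, sigma: float = 7 / 6, eps: float = 1.0) -> np.ndarray:
    """De-correlate pixel intensities by local mean subtraction and
    local contrast normalization."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))           # local standard deviation
    return (image - mu) / (sigma_map + eps)             # normalized coefficients
```

The local normalization removes much of the spatial correlation between neighboring pixel intensities, which tends to expose distortion statistics (blockiness, ringing, blur) that a downstream network can learn more easily than from raw intensities.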