Patch-VQ: ‘Patching Up’ the Video Quality Problem

Zhenqiang Ying1*, Maniratnam Mandal1*, Deepti Ghadiyaram2+, Alan Bovik1+

*, + Equal contribution
1 University of Texas at Austin 2 Facebook AI
This project was supported and funded by Facebook AI.

ABSTRACT

No-reference (NR) perceptual video quality assessment (VQA) is a complex, unsolved problem that is important to social and streaming media applications. Efficient and accurate video quality predictors are needed to monitor and guide the processing of billions of shared, often imperfect, user-generated content (UGC) videos. Unfortunately, current NR models are limited in their prediction capabilities on real-world, "in-the-wild" UGC video data. To advance progress on this problem, we created the largest (by far) subjective video quality dataset, containing 39,000 real-world distorted videos, 117,000 space-time localized video patches ("v-patches"), and 5.5M human perceptual quality annotations. Using this dataset, we created two unique NR-VQA models: (a) a local-to-global region-based NR-VQA architecture (called PVQ) that learns to predict global video quality and achieves state-of-the-art performance on 3 UGC datasets, and (b) a first-of-a-kind space-time video quality mapping engine (called PVQ Mapper) that helps localize and visualize perceptual distortions in space and time.
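To make the local-to-global idea concrete, the following is a minimal PyTorch sketch of the general approach: score each space-time v-patch locally, then pool patch features into a single global quality prediction. The backbone, layer sizes, and pooling here are placeholder assumptions for illustration, not the released PVQ architecture.

    import torch
    import torch.nn as nn

    class LocalToGlobalVQA(nn.Module):
        """Toy local-to-global NR-VQA model: scores space-time patches,
        then pools their features into one global video-quality score."""

        def __init__(self, feat_dim=64):
            super().__init__()
            # Placeholder 3D-conv feature extractor applied to each v-patch.
            self.backbone = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
                nn.Flatten(),
                nn.Linear(16, feat_dim),
                nn.ReLU(),
            )
            self.patch_head = nn.Linear(feat_dim, 1)   # local (per-patch) quality
            self.global_head = nn.Linear(feat_dim, 1)  # global (whole-video) quality

        def forward(self, patches):
            # patches: (batch, num_patches, 3, T, H, W) space-time v-patches
            b, p = patches.shape[:2]
            feats = self.backbone(patches.flatten(0, 1)).view(b, p, -1)
            patch_scores = self.patch_head(feats).squeeze(-1)               # (b, p)
            global_score = self.global_head(feats.mean(dim=1)).squeeze(-1)  # (b,)
            return patch_scores, global_score

    # Example: 2 videos, each cut into 4 v-patches of 8 frames at 32x32 pixels.
    model = LocalToGlobalVQA()
    patches = torch.rand(2, 4, 3, 8, 32, 32)
    local_q, global_q = model(patches)
    print(local_q.shape, global_q.shape)  # torch.Size([2, 4]) torch.Size([2])

Sharing features between the local and global heads is what allows one model to support both whole-video quality prediction and space-time quality mapping.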


Sample video frames from our database, each resized to fit; the actual videos span highly diverse sizes and resolutions.

A first-of-its-kind video quality map predictor: space-time quality maps generated on a video by our PVQ Mapper, and sampled in time for display. Four video frames are shown at top, with spatial quality maps (blended with the original frames using the magma colormap) immediately below, while the bottom plots show the evolving quality of the video.
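As an illustration of how such map overlays can be rendered, the sketch below (a hypothetical helper, not part of the PVQ Mapper release) normalizes a per-pixel quality map, colors it with matplotlib's magma colormap, and alpha-blends it with the frame:

    import numpy as np
    import matplotlib.cm as cm

    def blend_quality_map(frame, qmap, alpha=0.5):
        """Overlay a quality map on a frame using the magma colormap.

        frame: (H, W, 3) float RGB image in [0, 1]
        qmap:  (H, W) per-pixel quality scores (any range)
        """
        q = (qmap - qmap.min()) / (np.ptp(qmap) + 1e-8)  # normalize to [0, 1]
        heat = cm.magma(q)[..., :3]                      # map scores to RGB
        return (1 - alpha) * frame + alpha * heat        # alpha-blend overlay

    # Example with random arrays standing in for a frame and a predicted map.
    frame = np.random.rand(64, 64, 3)
    qmap = np.random.rand(64, 64)
    overlay = blend_quality_map(frame, qmap)
    print(overlay.shape)  # (64, 64, 3)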

EXAMPLES

Generated Space-time Quality Maps


DOWNLOAD

Please submit an issue here if you encounter any difficulties.

Database
  • 5.5M ratings
  • 6.3k subjects
  • 39k videos
  • 117k patches

Download
Code
  • Pretrained Models
  • Reproducible Results
  • Training/Testing Utilities

Download
Paper
  • Available on arXiv
  • Accepted to CVPR 2021

Download