From Patches to Pictures (PaQ-2-PiQ):
Mapping the Perceptual Space of Picture Quality

Zhenqiang Ying1*, Haoran Niu1*, Praful Gupta1, Dhruv Mahajan2, Deepti Ghadiyaram2+, Alan Bovik1+

* and + denote equal contribution
1 University of Texas at Austin 2 Facebook AI
This project was supported and funded by Facebook AI.

ABSTRACT

Blind or no-reference (NR) perceptual picture quality prediction is a difficult, unsolved problem of great consequence to the social and streaming media industries, affecting billions of viewers daily. Unfortunately, popular NR prediction models perform poorly on real-world distorted pictures. To advance progress on this problem, we introduce the largest (by far) subjective picture quality database, containing about 40,000 real-world distorted pictures and 120,000 patches, on which we collected about 4 million human judgments of picture quality. Using these picture and patch quality labels, we built deep region-based architectures that learn to produce state-of-the-art global picture quality predictions as well as useful local picture quality maps. Our innovations include picture quality prediction architectures that produce global-to-local inferences as well as local-to-global inferences (via feedback).
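
To make the region-based design concrete, here is a minimal PyTorch sketch of the general idea, not the paper's actual architecture: one shared backbone feeds both a global head (whole-picture score) and an RoI-pooled head (per-patch scores), which is one way to realize global-to-local inference. The class name, ResNet-18 backbone, and shared linear head are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision.ops import roi_pool

class RegionQualityNet(nn.Module):
    """Illustrative stand-in, not the paper's released model: a shared
    backbone predicts a global picture score plus RoI-pooled patch scores."""

    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)  # torchvision >= 0.13 API
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop pool/fc
        self.head = nn.Linear(512, 1)  # shared regressor for picture and patch scores

    def forward(self, x, patch_boxes):
        f = self.features(x)                               # (N, 512, H/32, W/32)
        g = f.mean(dim=(2, 3))                             # global average pooling
        picture_score = self.head(g)                       # (N, 1) global prediction
        # patch_boxes: (K, 5) rows of [batch_idx, x1, y1, x2, y2] in input pixels
        r = roi_pool(f, patch_boxes, output_size=(1, 1), spatial_scale=1 / 32)
        patch_scores = self.head(r.flatten(1))             # (K, 1) local quality samples
        return picture_score, patch_scores

net = RegionQualityNet()
img = torch.randn(1, 3, 224, 224)
boxes = torch.tensor([[0.0, 0.0, 0.0, 112.0, 112.0],       # two example patches
                      [0.0, 112.0, 112.0, 224.0, 224.0]])
pic, patches = net(img, boxes)                             # shapes: (1, 1) and (2, 1)
```

The paper's local-to-global variant additionally feeds patch-level predictions back to refine the picture-level score; that feedback path is omitted here for brevity.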







APPLICATIONS

The model can be applied to videos frame by frame, as sketched below.
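
A minimal sketch of frame-by-frame scoring with OpenCV; `predict_quality` is a placeholder for whatever single-image PaQ-2-PiQ inference call you use, not a function provided by the repository.

```python
import cv2
import numpy as np

def score_video(path, predict_quality):
    """Run a single-image quality predictor on every frame of a video.

    `predict_quality` is a placeholder: it should map one RGB frame
    (H x W x 3 uint8 array) to a scalar quality score.
    """
    cap = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:                                        # end of stream
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)      # OpenCV decodes as BGR
        scores.append(predict_quality(rgb))
    cap.release()
    return np.asarray(scores)

# Example: per-frame scores, plus a simple mean-pooled video score.
# scores = score_video("clip.mp4", predict_quality)
# video_score = scores.mean()
```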

DEMO

Powered by Google App Engine




Click ► to run the demo (predicted quality scale: poor quality → high quality).


DOWNLOAD

Please submit an issue here if you encounter any difficulties.

Database
  • 4M ratings

  • 8k subjects

  • 40k images

  • 120k patches

Download
Code
  • Pretrained Models (loading sketch below)

  • Reproducible Results

  • Training/Testing Utils

  • Open Demo in Colab

Download
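
For orientation, loading released weights for inference might look roughly like the following, reusing the illustrative RegionQualityNet from the abstract section. The checkpoint filename is a placeholder, and the real checkpoint's keys match the repository's own model class rather than this sketch (hence strict=False).

```python
import torch

# "RoIPoolModel.pth" is a placeholder filename; substitute the checkpoint
# shipped with the repository. RegionQualityNet is the illustrative
# stand-in defined earlier, so strict=False merely tolerates key
# mismatches for the purposes of this sketch.
model = RegionQualityNet()
state = torch.load("RoIPoolModel.pth", map_location="cpu")
model.load_state_dict(state, strict=False)
model.eval()

with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)                      # stand-in preprocessed picture
    boxes = torch.tensor([[0.0, 0.0, 0.0, 112.0, 112.0]])  # one example patch
    picture_score, patch_scores = model(img, boxes)
```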
Paper
  • Available on arXiv

  • Accepted to CVPR 2020

Download