UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content

Zhengzhong Tu Yilin Wang Neil Birkbeck Balu Adsumilli Alan C. Bovik

Abstract

Recent years have witnessed an explosion of user-generated content (UGC) videos shared and streamed over the Internet, thanks to the evolution of affordable and reliable consumer capture devices, and the tremendous popularity of social media platforms. Accordingly, there is a great need for accurate video quality assessment (VQA) models for UGC/consumer videos to monitor, control, and optimize this vast content. Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of UGC content are unpredictable, complicated, and often commingled. Here we contribute to advancing the UGC-VQA problem by conducting a comprehensive evaluation of leading no-reference/blind VQA (BVQA) features and models on a fixed evaluation architecture, yielding new empirical insights on both subjective video quality studies and VQA model design. By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models to create a new fusion-based BVQA model, which we dub the VIDeo quality EVALuator (VIDEVAL), that effectively balances the trade-off between VQA performance and efficiency. Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models. Our study protocol also defines a reliable benchmark for the UGC-VQA problem, which we believe will facilitate further research on deep learning-based VQA modeling, as well as perceptually-optimized efficient UGC video processing, transcoding, and streaming. To promote reproducible research and public evaluation, an implementation of VIDEVAL has been made available online: https://github.com/tu184044109/VIDEVAL_release.
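
The abstract describes a select-then-fuse recipe: pool statistical features from existing BVQA models, keep a compact subset (60 of 763), and regress the subset onto subjective quality scores. The sketch below illustrates that general recipe only; it is not the exact VIDEVAL selection procedure (the official code linked above implements that). It assumes precomputed feature vectors and uses scikit-learn's SelectKBest with an SVR regressor as stand-in components, with random placeholder data.

```python
# Illustrative sketch of a select-then-fuse BVQA pipeline (not the official VIDEVAL code).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 763))    # placeholder: 763 precomputed statistical features per video
y = rng.uniform(1, 5, size=200)    # placeholder: mean opinion scores (MOS)

model = Pipeline([
    ("scale", MinMaxScaler()),                          # normalize each feature to [0, 1]
    ("select", SelectKBest(f_regression, k=60)),        # keep a 60-feature subset (stand-in selector)
    ("fuse", SVR(kernel="rbf", C=1.0, gamma="scale")),  # fusion regressor mapping features to quality
])
model.fit(X, y)
print(model.predict(X[:5]))  # predicted quality scores for the first five videos
```

In practice the selector and regressor would be trained with cross-validation on a labeled UGC-VQA database rather than on random placeholders, but the two-stage structure is the point of the illustration.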

Code Repositories

vztu/VIDEVAL
tu184044109/BVQA_Benchmark (PyTorch)
vztu/BVQA_Benchmark (PyTorch)
vztu/VIDEVAL_release
tu184044109/VIDEVAL_release (official)

Benchmarks

Benchmark         | Methodology | Metrics
KoNViD-1k         | VIDEVAL     | PLCC: 0.7803
LIVE-FB LSVQ      | VIDEVAL     | PLCC: 0.783
LIVE-VQC          | VIDEVAL     | PLCC: 0.7514
MSU Video Quality | VIDEVAL     | PLCC: 0.7717, SRCC: 0.7286, KLCC: 0.5414, Type: NR
YouTube-UGC       | VIDEVAL     | PLCC: 0.7733
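
The metrics above are standard correlation measures between predicted scores and subjective mean opinion scores: PLCC (Pearson linear correlation), SRCC (Spearman rank correlation), and KLCC (Kendall rank correlation). A minimal sketch of how they are computed is shown below, using hypothetical prediction and MOS arrays; note that reported PLCC values are often computed after a nonlinear (e.g., logistic) mapping of predictions to MOS, which is omitted here for simplicity.

```python
# Minimal sketch of the correlation metrics above, on hypothetical data.
import numpy as np
from scipy import stats

mos = np.array([3.2, 4.1, 2.5, 3.8, 4.6, 1.9])   # hypothetical subjective scores (MOS)
pred = np.array([3.0, 4.3, 2.8, 3.5, 4.4, 2.2])  # hypothetical model predictions

plcc, _ = stats.pearsonr(pred, mos)    # Pearson linear correlation coefficient (PLCC)
srcc, _ = stats.spearmanr(pred, mos)   # Spearman rank-order correlation (SRCC)
klcc, _ = stats.kendalltau(pred, mos)  # Kendall rank correlation (KLCC)
print(f"PLCC={plcc:.4f}  SRCC={srcc:.4f}  KLCC={klcc:.4f}")
```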
