No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency

S. Alireza Golestaneh, Saba Dadsetan, Kris M. Kitani
Abstract

The goal of No-Reference Image Quality Assessment (NR-IQA) is to estimate perceptual image quality in accordance with subjective evaluations; it is a complex and unsolved problem due to the absence of a pristine reference image. In this paper, we propose a novel model to address the NR-IQA task by leveraging a hybrid approach that benefits from Convolutional Neural Networks (CNNs) and the self-attention mechanism in Transformers to extract both local and non-local features from the input image. We capture local structure information of the image via CNNs; then, to circumvent the locality bias among the extracted CNN features and obtain a non-local representation of the image, we apply Transformers to the extracted features, modeling them as a sequential input to the Transformer model. Furthermore, to improve the monotonicity correlation between the subjective and objective scores, we utilize the relative distance information among the images within each batch and enforce the relative ranking among them. Last but not least, we observe that the performance of NR-IQA models degrades when equivariant transformations (e.g., horizontal flipping) are applied to the inputs. Therefore, we propose a method that leverages self-consistency as a source of self-supervision to improve the robustness of NR-IQA models. Specifically, we enforce self-consistency between the outputs of our quality assessment model for each image and its transformation (horizontally flipped) to exploit the rich self-supervisory information and reduce the uncertainty of the model. To demonstrate the effectiveness of our work, we evaluate it on seven standard IQA datasets (both synthetic and authentic) and show that our model achieves state-of-the-art results on various datasets.
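
The following is a minimal, hypothetical PyTorch sketch of the three ideas described above: a CNN backbone for local features whose spatial feature map is fed as a sequence to a Transformer encoder for non-local context, a relative-ranking loss over the predicted and subjective scores within a batch, and a self-consistency loss between each image and its horizontally flipped version. The class and function names (HybridIQAModel, relative_ranking_loss, self_consistency_loss) and all hyperparameters are illustrative assumptions, not the authors' released implementation (see isalirezag/tres for that).

```python
# Minimal, hypothetical sketch (PyTorch) of the ideas in the abstract; names and
# hyperparameters are illustrative, not the authors' released code.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms.functional as TF


class HybridIQAModel(nn.Module):
    """CNN backbone for local structure + Transformer encoder for non-local context."""

    def __init__(self, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        # ResNet-50 up to its last convolutional stage; pass
        # weights=models.ResNet50_Weights.DEFAULT for an ImageNet-pretrained backbone.
        backbone = models.resnet50(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])   # B x 2048 x H x W
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        # The spatial positions of the CNN feature map are treated as a sequence,
        # letting self-attention model non-local relations among the local features.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)                            # scalar quality score

    def forward(self, x):
        feats = self.proj(self.cnn(x))                     # B x d_model x H x W
        tokens = feats.flatten(2).transpose(1, 2)          # B x (H*W) x d_model
        tokens = self.transformer(tokens)                  # non-local representation
        return self.head(tokens.mean(dim=1)).squeeze(-1)   # B


def relative_ranking_loss(pred, mos, margin=0.1):
    """Hinge-style loss encouraging predictions to preserve the pairwise ordering
    of the subjective (MOS) scores within the batch."""
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)      # B x B pairwise differences
    diff_mos = mos.unsqueeze(0) - mos.unsqueeze(1)
    sign = torch.sign(diff_mos)                            # desired ordering of each pair
    return torch.relu(margin - sign * diff_pred).mean()


def self_consistency_loss(pred, model, x):
    """Penalize disagreement between the score of an image and of its horizontal flip."""
    return (pred - model(TF.hflip(x))).abs().mean()


if __name__ == "__main__":
    model = HybridIQAModel()
    images = torch.randn(4, 3, 224, 224)                   # a toy batch
    mos = torch.rand(4)                                     # subjective scores in [0, 1]
    pred = model(images)
    loss = (nn.functional.l1_loss(pred, mos)                # supervised regression term
            + relative_ranking_loss(pred, mos)              # relative ranking term
            + self_consistency_loss(pred, model, images))   # self-consistency term
    loss.backward()
```

In training, the ranking and self-consistency terms would typically be added to a standard supervised regression loss, as in the example above; the exact weighting of the terms is a design choice not shown here.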

Code Repositories

isalirezag/tres (Official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
no-reference-image-quality-assessment-on | TReS | PLCC: 0.883, SRCC: 0.863
no-reference-image-quality-assessment-on-1 | TReS | PLCC: 0.858, SRCC: 0.859
no-reference-image-quality-assessment-on-csiq | TReS | PLCC: 0.942, SRCC: 0.922
video-quality-assessment-on-msu-sr-qa-dataset | TReS trained on FLIVE | KLCC: 0.39398, PLCC: 0.50005, SROCC: 0.48882, Type: NR
video-quality-assessment-on-msu-sr-qa-dataset | TReS trained on KONIQ | KLCC: 0.49004, PLCC: 0.56226, SROCC: 0.62578, Type: NR
video-quality-assessment-on-msu-sr-qa-dataset | TReS | KLCC: 0.48901, PLCC: 0.56277, SROCC: 0.62496, Type: NR
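
For reference, PLCC measures linear correlation between predicted and subjective scores, while SRCC/SROCC and KLCC measure rank correlation. A minimal sketch of how these metrics are commonly computed, assuming SciPy is available; the score arrays below are illustrative values, not numbers from the benchmark:

```python
# Hypothetical example of computing the correlation metrics reported above
# between predicted quality scores and subjective (MOS) scores.
from scipy.stats import pearsonr, spearmanr, kendalltau

predicted = [0.71, 0.42, 0.88, 0.55, 0.63]   # model outputs (illustrative)
subjective = [0.70, 0.40, 0.90, 0.60, 0.58]  # ground-truth MOS (illustrative)

plcc, _ = pearsonr(predicted, subjective)    # linear correlation (PLCC)
srcc, _ = spearmanr(predicted, subjective)   # rank correlation (SRCC / SROCC)
klcc, _ = kendalltau(predicted, subjective)  # Kendall rank correlation (KLCC)

print(f"PLCC={plcc:.3f}  SRCC={srcc:.3f}  KLCC={klcc:.3f}")
```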
