
LYT-NET: Lightweight YUV Transformer-based Network for Low-light Image Enhancement

Brateanu A., Balmez R., Avram A., Orhei C., Ancuti C.

Abstract

This letter introduces LYT-Net, a novel lightweight transformer-based model for low-light image enhancement (LLIE). LYT-Net consists of several layers and detachable blocks, including our novel blocks, the Channel-Wise Denoiser (CWD) and the Multi-Stage Squeeze & Excite Fusion (MSEF), along with the traditional Transformer block, Multi-Headed Self-Attention (MHSA). In our method we adopt a dual-path approach, treating the chrominance channels U and V and the luminance channel Y as separate entities to help the model better handle illumination adjustment and corruption restoration. Our comprehensive evaluation on established LLIE datasets demonstrates that, despite its low complexity, our model outperforms recent LLIE methods. The source code and pre-trained models are available at https://github.com/albrateanu/LYT-Net
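
The dual-path idea from the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (the actual network with its CWD, MSEF, and MHSA blocks lives in the linked repository); it only shows, in TensorFlow (the framework tag of the official repo), how an RGB input could be split into the Y and UV paths. The helper name `rgb_to_yuv_paths` is a placeholder.

```python
import tensorflow as tf

def rgb_to_yuv_paths(rgb):
    """Split an RGB batch into the luminance (Y) and chrominance (UV) inputs
    of the dual-path design described in the abstract (illustrative only)."""
    yuv = tf.image.rgb_to_yuv(rgb)   # built-in RGB -> YUV colour conversion
    y = yuv[..., :1]                 # Y channel: illumination-adjustment path
    uv = yuv[..., 1:]                # U and V channels: corruption-restoration path
    return y, uv

# Usage on a dummy low-light batch (values assumed to be in [0, 1]).
rgb = tf.random.uniform((1, 256, 256, 3))
y, uv = rgb_to_yuv_paths(rgb)
print(y.shape, uv.shape)  # (1, 256, 256, 1) (1, 256, 256, 2)
```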

Code Repositories

albrateanu/lyt-net (Official, TensorFlow)

Benchmarks

Benchmark                               | Methodology | Metrics
low-light-image-enhancement-on-lol      | LYT-Net     | Average PSNR: 27.23, SSIM: 0.853, LPIPS: 0.071, Params (M): 0.045, FLOPS (G): 3.49
low-light-image-enhancement-on-lolv2    | LYT-Net     | Average PSNR: 27.80, SSIM: 0.873, LPIPS: 0.078
low-light-image-enhancement-on-lolv2-1  | LYT-Net     | Average PSNR: 29.38, SSIM: 0.939, LPIPS: 0.037
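
For reference, the PSNR and SSIM figures in the table are standard full-reference metrics; a minimal sketch of how such numbers are typically computed is shown below, assuming TensorFlow, images scaled to [0, 1], and the hypothetical helper name `evaluate_pair`. LPIPS is omitted because it requires a pretrained perceptual network.

```python
import tensorflow as tf

def evaluate_pair(enhanced, reference, max_val=1.0):
    """Compute average PSNR and SSIM between an enhanced image batch and its
    ground-truth reference (LPIPS needs a learned perceptual model, not shown)."""
    psnr = tf.image.psnr(enhanced, reference, max_val=max_val)
    ssim = tf.image.ssim(enhanced, reference, max_val=max_val)
    return float(tf.reduce_mean(psnr)), float(tf.reduce_mean(ssim))

# Dummy check: a lightly degraded copy of a random reference image.
ref = tf.random.uniform((1, 256, 256, 3))
noisy = tf.clip_by_value(ref + tf.random.normal(tf.shape(ref), stddev=0.05), 0.0, 1.0)
print(evaluate_pair(noisy, ref))
```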
