LYT-NET: Lightweight YUV Transformer-based Network for Low-light Image Enhancement
Brateanu A.; Balmez R.; Avram A.; Orhei C.; Ancuti C.

Abstract
This letter introduces LYT-Net, a novel lightweight transformer-based model for low-light image enhancement (LLIE). LYT-Net consists of several layers and detachable blocks, including our two novel blocks, the Channel-Wise Denoiser (CWD) and the Multi-Stage Squeeze & Excite Fusion (MSEF), along with the traditional Transformer block, Multi-Headed Self-Attention (MHSA). In our method we adopt a dual-path approach, treating the chrominance channels U and V and the luminance channel Y as separate entities to help the model better handle illumination adjustment and corruption restoration. Our comprehensive evaluation on established LLIE datasets demonstrates that, despite its low complexity, our model outperforms recent LLIE methods. The source code and pre-trained models are available at https://github.com/albrateanu/LYT-Net
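To make the dual-path design concrete, the sketch below splits an RGB input into YUV, processes the luminance channel (Y) and the chrominance channels (U, V) in separate branches, and fuses the two feature streams back into an enhanced RGB estimate. This is a minimal illustrative sketch, not the authors' implementation: the CWD, MSEF, and MHSA blocks are replaced by placeholder convolutions, and the branch widths and module names (`DualPathSketch`, `rgb_to_yuv`) are arbitrary choices for this example; refer to the linked repository for the actual LYT-Net code.

```python
import torch
import torch.nn as nn


def rgb_to_yuv(rgb: torch.Tensor) -> torch.Tensor:
    """Convert an (N, 3, H, W) RGB tensor in [0, 1] to YUV (BT.601 coefficients)."""
    r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return torch.cat([y, u, v], dim=1)


class DualPathSketch(nn.Module):
    """Illustrative dual-path enhancer: luminance (Y) and chrominance (U, V)
    are processed by separate branches and fused at the end. The real LYT-Net
    blocks (CWD, MSEF, MHSA) are stubbed out with plain convolutions here."""

    def __init__(self, width: int = 16):
        super().__init__()
        # Luminance branch: placeholder for the illumination-adjustment path.
        self.lum_branch = nn.Sequential(
            nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        # Chrominance branch: placeholder for the denoising path on U and V.
        self.chroma_branch = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        # Fusion head: placeholder for the fusion stage back to 3 channels.
        self.fuse = nn.Conv2d(2 * width, 3, 3, padding=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        yuv = rgb_to_yuv(rgb)
        y, uv = yuv[:, 0:1], yuv[:, 1:3]
        feats = torch.cat([self.lum_branch(y), self.chroma_branch(uv)], dim=1)
        return torch.sigmoid(self.fuse(feats))  # enhanced RGB estimate in [0, 1]


if __name__ == "__main__":
    model = DualPathSketch()
    out = model(torch.rand(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The point of the split is that illumination lives almost entirely in Y, while color noise concentrates in U and V, so each branch can specialize before fusion.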
Code Repositories
https://github.com/albrateanu/LYT-Net
Benchmarks
| Benchmark | Methodology | Average PSNR | SSIM | LPIPS | Params (M) | FLOPS (G) |
|---|---|---|---|---|---|---|
| low-light-image-enhancement-on-lol | LYT-Net | 27.23 | 0.853 | 0.071 | 0.045 | 3.49 |
| low-light-image-enhancement-on-lolv2 | LYT-Net | 27.80 | 0.873 | 0.078 | | |
| low-light-image-enhancement-on-lolv2-1 | LYT-Net | 29.38 | 0.939 | 0.037 | | |
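For context, the per-image quality metrics reported above can be computed with standard tooling. The sketch below uses scikit-image for PSNR and SSIM (assuming version 0.19 or newer, which provides the `channel_axis` argument); LPIPS is usually computed with the separate `lpips` package and is omitted here. The `evaluate_pair` helper and the random test arrays are purely illustrative, not part of the benchmark protocol.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Compute PSNR and SSIM for one enhanced / ground-truth image pair.
    Both arrays are expected as (H, W, 3) floats in [0, 1]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0, channel_axis=-1)
    return {"PSNR": psnr, "SSIM": ssim}


if __name__ == "__main__":
    gt = np.random.rand(256, 256, 3).astype(np.float32)
    pred = np.clip(gt + 0.01 * np.random.randn(*gt.shape), 0, 1).astype(np.float32)
    print(evaluate_pair(pred, gt))
```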