
Attention-based View Selection Networks for Light-field Disparity Estimation

Yung-Yu Chuang, Yu-Lun Liu, Yu-Ju Tsai, Ming Ouhyoung

Abstract

This paper introduces a novel deep network for estimating depth maps from a light field image. To utilize the views more effectively and reduce redundancy among them, we propose a view selection module that generates an attention map indicating the importance of each view and its potential contribution to accurate depth estimation. By exploring the symmetric property of light field views, we enforce symmetry in the attention map and further improve accuracy. With the attention map, our architecture utilizes all views more effectively and efficiently. Experiments show that the proposed method achieves state-of-the-art accuracy and ranks first on a popular benchmark for disparity estimation from light field images.
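
As a rough illustration of the idea described above, the sketch below shows one way an attention map over light-field views could be predicted, symmetrized by averaging with its flipped copies, and used to re-weight each view's features before cost aggregation. This is not the authors' released LFattNet implementation; the module and parameter names (e.g. ViewSelectionAttention, score) are invented here for illustration, and a PyTorch-style design is assumed.

```python
import torch
import torch.nn as nn


class ViewSelectionAttention(nn.Module):
    """Sketch of an attention-based view selection module (hypothetical, not LFattNet).

    Given per-view feature maps from an n x n light-field grid, it predicts one
    weight per view (the attention map) and re-weights each view's contribution.
    Symmetry of the view grid is encouraged by averaging the map with its flips.
    """

    def __init__(self, num_views: int, channels: int):
        super().__init__()
        self.grid = int(num_views ** 0.5)        # e.g. 9 for a 9x9 light field
        self.score = nn.Sequential(              # tiny MLP on pooled per-view features
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (B, V, C, H, W) -- one feature map per light-field view
        b, v, c, h, w = view_feats.shape
        pooled = view_feats.mean(dim=(3, 4))           # (B, V, C) global average pool
        logits = self.score(pooled).squeeze(-1)        # (B, V) raw per-view scores
        attn = torch.sigmoid(logits).view(b, self.grid, self.grid)

        # Enforce symmetry of the view grid: average the attention map with its
        # horizontal, vertical, and 180-degree flips.
        attn = (attn + attn.flip([1]) + attn.flip([2]) + attn.flip([1, 2])) / 4.0
        attn = attn.view(b, v, 1, 1, 1)

        return view_feats * attn                       # re-weighted view features


if __name__ == "__main__":
    # Usage: 9x9 views, 32-channel features on a 64x64 crop.
    module = ViewSelectionAttention(num_views=81, channels=32)
    feats = torch.randn(2, 81, 32, 64, 64)
    print(module(feats).shape)  # torch.Size([2, 81, 32, 64, 64])
```

The re-weighted view features would then feed the rest of the disparity network (cost volume construction and aggregation); the symmetrization step is one simple way to realize the symmetry constraint mentioned in the abstract.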

Benchmarks

Benchmark: depth-estimation-on-4d-light-field-dataset
Methodology: LFattNet
Metrics:
  BadPix(0.01): 17.226
  BadPix(0.03): 6.823
  BadPix(0.07): 3.756
  MSE: 1.904
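
For reference, the metrics above can be computed as follows: BadPix(t) is the percentage of pixels whose absolute disparity error exceeds the threshold t, and the MSE on this benchmark is conventionally reported scaled by 100. The helper functions below are an illustrative sketch, not part of any benchmark toolkit.

```python
import numpy as np


def badpix(est, gt, t, mask=None):
    """Percentage of pixels whose absolute disparity error exceeds threshold t."""
    err = np.abs(est - gt)
    if mask is not None:
        err = err[mask]
    return 100.0 * np.mean(err > t)


def mse_x100(est, gt, mask=None):
    """Mean squared disparity error, scaled by 100 as in the benchmark tables."""
    err = (est - gt) ** 2
    if mask is not None:
        err = err[mask]
    return 100.0 * np.mean(err)


if __name__ == "__main__":
    # Toy example with a synthetic ground-truth disparity map and a noisy estimate.
    rng = np.random.default_rng(0)
    gt = rng.uniform(-2, 2, size=(512, 512))
    est = gt + rng.normal(scale=0.05, size=gt.shape)
    print(badpix(est, gt, 0.07), badpix(est, gt, 0.03), badpix(est, gt, 0.01))
    print(mse_x100(est, gt))
```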
