F3Net: Fusion, Feedback and Focus for Salient Object Detection

Jun Wei; Shuhui Wang; Qingming Huang

Abstract

Most existing salient object detection models have achieved great progress by aggregating multi-level features extracted from convolutional neural networks. However, because different convolutional layers have different receptive fields, the features they generate differ substantially. Common feature fusion strategies (addition or concatenation) ignore these differences and may lead to suboptimal solutions. In this paper, we propose F3Net to address this problem. It mainly consists of a cross feature module (CFM) and a cascaded feedback decoder (CFD), trained by minimizing a new pixel position aware loss (PPA). Specifically, CFM aims to selectively aggregate multi-level features. Unlike addition and concatenation, CFM adaptively selects complementary components from the input features before fusion, which effectively avoids introducing redundant information that may corrupt the original features. Besides, CFD adopts a multi-stage feedback mechanism, in which features close to the supervision are fed back to the outputs of earlier layers to supplement them and eliminate the differences between features. These refined features go through multiple similar iterations before the final saliency maps are generated. Furthermore, unlike binary cross entropy, the proposed PPA loss does not treat all pixels equally; it synthesizes the local structure information around a pixel to guide the network to focus more on local details. Hard pixels from boundaries or error-prone regions receive more attention to emphasize their importance. F3Net is able to segment salient object regions accurately and provide clear local details. Comprehensive experiments on five benchmark datasets demonstrate that F3Net outperforms state-of-the-art approaches on six evaluation metrics.
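The official implementation (weijun88/F3Net, linked below) is in PyTorch. As a rough illustration of the two ideas described above, here is a minimal, hypothetical PyTorch sketch of a CFM-style cross fusion step and a PPA-style pixel-weighted loss. The class and function names, channel sizes, the 31x31 local window, and the specific weighting 1 + 5·|local mean - mask| are assumptions made for this sketch, not necessarily the paper's exact formulation.

```python
# Hypothetical sketch, not the authors' code: illustrates CFM-style cross
# fusion and a PPA-style weighted loss, assuming PyTorch and logit predictions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossFeatureFusion(nn.Module):
    """Toy CFM-style fusion: instead of plain addition/concatenation, each
    branch is modulated by the other branch via element-wise multiplication,
    so only mutually consistent components are strengthened before the
    residual addition. Channel sizes are illustrative assumptions."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv_low = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_high = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, low, high):
        # Upsample the coarser (high-level) feature to the finer resolution.
        high = F.interpolate(high, size=low.shape[2:], mode='bilinear',
                             align_corners=False)
        low_t, high_t = self.conv_low(low), self.conv_high(high)
        cross = low_t * high_t            # keep components both levels agree on
        return low + cross, high + cross  # refined low- and high-level features


def pixel_position_aware_loss(pred_logits, mask, k=31):
    """PPA-style loss sketch: a per-pixel weight, large near boundaries and
    other locally inhomogeneous regions, re-weights a BCE term and an IoU
    term so hard pixels contribute more than easy interior pixels."""
    # Local context: mean of the ground truth in a k x k window around each pixel.
    local_mean = F.avg_pool2d(mask, kernel_size=k, stride=1, padding=k // 2)
    weight = 1 + 5 * torch.abs(local_mean - mask)  # larger weight along boundaries

    # Weighted binary cross entropy.
    bce = F.binary_cross_entropy_with_logits(pred_logits, mask, reduction='none')
    wbce = (weight * bce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))

    # Weighted IoU on probabilities (region-level, structure-aware term).
    prob = torch.sigmoid(pred_logits)
    inter = (prob * mask * weight).sum(dim=(2, 3))
    union = ((prob + mask) * weight).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()
```

The weight map peaks where a pixel's label differs from the average of its local neighborhood, i.e. along object boundaries, which is how such a loss steers the network toward local details; the IoU term adds a region-level signal on top of the per-pixel BCE.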

Code Repositories

weijun88/F3Net (official, PyTorch, mentioned in GitHub)
PanoAsh/SHD360 (PyTorch, mentioned in GitHub)
PanoAsh/ASOD60K (PyTorch, mentioned in GitHub)

Benchmarks

Benchmark / Methodology / Metrics (methodology is F3Net for every entry below)

camouflaged-object-segmentation-on-pcod-1200
  S-Measure: 0.885

dichotomous-image-segmentation-on-dis-te1
  E-measure: 0.783
  HCE: 244
  MAE: 0.095
  S-Measure: 0.721
  max F-Measure: 0.640
  weighted F-measure: 0.549

dichotomous-image-segmentation-on-dis-te2
  E-measure: 0.820
  HCE: 542
  MAE: 0.097
  S-Measure: 0.755
  max F-Measure: 0.712
  weighted F-measure: 0.620

dichotomous-image-segmentation-on-dis-te3
  E-measure: 0.848
  HCE: 1059
  MAE: 0.092
  S-Measure: 0.773
  max F-Measure: 0.743
  weighted F-measure: 0.656

dichotomous-image-segmentation-on-dis-te4
  E-measure: 0.825
  HCE: 3760
  MAE: 0.107
  S-Measure: 0.752
  max F-Measure: 0.721
  weighted F-measure: 0.633

dichotomous-image-segmentation-on-dis-vd
  E-measure: 0.800
  HCE: 1567
  MAE: 0.107
  S-Measure: 0.733
  max F-Measure: 0.685
  weighted F-measure: 0.595

salient-object-detection-on-dut-omron-2
  E-measure: 0.869
  MAE: 0.052
  S-measure: 0.838
  max_F1: 0.813

salient-object-detection-on-duts-te-1
  E-measure: 0.901
  MAE: 0.035
  S-measure: 0.888
  max_F1: 0.891

salient-object-detection-on-ecssd-1
  E-measure: 0.927
  MAE: 0.033
  S-measure: 0.924
  max_F1: 0.945

salient-object-detection-on-hku-is-1
  E-measure: 0.952
  MAE: 0.028
  S-measure: 0.917
  max_F1: 0.936

salient-object-detection-on-pascal-s-1
  E-measure: 0.858
  MAE: 0.061
  S-measure: 0.854
  max_F1: 0.871
