GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector
Peng Zheng, Huazhu Fu, Deng-Ping Fan†, Qi Fan, Jie Qin, Yu-Wing Tai, Chi-Keung Tang, and Luc Van Gool
Abstract
In this paper, we present a novel end-to-end group collaborative learning network, termed GCoNet+, which can effectively and efficiently (250 fps) identify co-salient objects in natural scenes. The proposed GCoNet+ achieves new state-of-the-art performance for co-salient object detection (CoSOD) by mining consensus representations based on two essential criteria: 1) intra-group compactness, which better formulates the consistency among co-salient objects by capturing their inherent shared attributes with our novel group affinity module (GAM); and 2) inter-group separability, which effectively suppresses the influence of noisy objects on the output by introducing our new group collaborating module (GCM), conditioned on the inconsistent consensus. To further improve accuracy, we design a series of simple yet effective components: i) a recurrent auxiliary classification module (RACM) that promotes model learning at the semantic level; ii) a confidence enhancement module (CEM) that helps the model improve the quality of its final predictions; and iii) a group-based symmetric triplet (GST) loss that guides the model to learn more discriminative features. Extensive experiments on three challenging benchmarks, i.e., CoCA, CoSOD3k, and CoSal2015, demonstrate that our GCoNet+ outperforms 12 existing cutting-edge models. Code has been released at https://github.com/ZhengPeng7/GCoNet_plus.
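As background for the group-based symmetric triplet (GST) loss mentioned above, the sketch below shows the generic triplet-margin objective that such losses build on: an anchor embedding is pulled toward a positive (here, a feature from the same co-salient group) and pushed away from a negative (a feature from a different group) by at least a margin. The function name, the margin value, and the plain-list representation are illustrative assumptions; this is not the paper's GST implementation.

```python
import math

def triplet_margin_loss(anchor, positive, negative, margin=0.3):
    """Generic triplet margin loss on embedding vectors (plain lists).

    In a group-based setting, `anchor` and `positive` would come from
    co-salient regions within the same image group and `negative` from
    another group; this generic form is a sketch only, not the GST loss.
    """
    def dist(a, b):
        # Euclidean distance between two equal-length vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Loss is zero once the negative is farther than the positive by `margin`.
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)
```

A well-separated triple (positive coincides with the anchor, negative at distance 1) already satisfies the margin and incurs zero loss, while equidistant positives and negatives are penalized by exactly the margin.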