Revisiting Saliency Metrics: Farthest-Neighbor Area Under Curve
    Sen Jia, Neil D. B. Bruce
    [pdf] [supp] [bibtex]

    In this paper, we propose a new metric to address the long-standing center-bias problem in saliency evaluation. We first show that distribution-based metrics cannot measure saliency performance across datasets because the choice of standard deviation is ambiguous, especially for convolutional neural networks. Our proposed metric is therefore AUC-based, since the ROC curve is relatively robust to the standard-deviation problem. However, this requires the saliency prediction to contain enough unique values to compute an AUC score. Second, to address the problem of too few unique values in predicted saliency outputs, we propose a global smoothing function. Compared with random noise, our smoothing function creates unique values without destroying the existing relative saliency relationships. Finally, we show that our proposed AUC-based metric can generate a more directional negative set for evaluation, referred to as Farthest-Neighbor AUC (FN-AUC). Our experiments show that FN-AUC measures central and peripheral spatial biases more effectively than S-AUC, without affecting the fixation locations.

    A metric-improvement paper, concerned with saliency evaluation on eye-fixation data.
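
    A minimal sketch of the two mechanisms described in the abstract: break ties in the saliency map so it has enough unique values for an ROC curve, then compute the AUC of fixations against a chosen negative set. This is an illustrative reading, not the authors' code; the rank-based tie-breaking and the coordinate-array interface are assumptions, and for FN-AUC the negatives would be drawn from the fixations of the farthest-neighbor image:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def smooth_unique(sal_map, eps=1e-6):
            """Break ties with a tiny rank-based offset so the map has enough unique
            values for an ROC curve, while keeping the existing relative order
            (assuming eps is smaller than the smallest gap between distinct values)."""
            flat = sal_map.ravel().astype(np.float64)
            order = np.argsort(flat, kind="stable")
            offsets = np.empty_like(flat)
            offsets[order] = np.linspace(0.0, eps, num=flat.size)
            return (flat + offsets).reshape(sal_map.shape)

        def auc_score(sal_map, positives, negatives):
            """positives / negatives: (N, 2) arrays of (row, col) pixel coordinates;
            for FN-AUC the negative points come from the farthest-neighbor image."""
            sal = smooth_unique(sal_map)
            pos = sal[positives[:, 0], positives[:, 1]]
            neg = sal[negatives[:, 0], negatives[:, 1]]
            labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
            return roc_auc_score(labels, np.concatenate([pos, neg]))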
    Select, Supplement and Focus for RGB-D Saliency Detection
    Miao Zhang, Weisong Ren, Yongri Piao, Zhengkun Rong, Huchuan Lu
    [pdf] [bibtex]

    Depth data containing a preponderance of discriminative power in location have been proven beneficial for accurate saliency prediction. However, RGB-D saliency detection methods are also negatively influenced by randomly distributed erroneous or missing regions on the depth map or along the object boundaries. This offers the possibility of achieving more effective inference by well-designed models. In this paper, we propose a new framework for accurate RGB-D saliency detection taking account of global location and local detail complementarities from two modalities. This is achieved by designing a complementary interaction module (CIM) to discriminatively select useful representations from the RGB and depth data, and effectively integrate cross-modal features. Benefiting from the proposed CIM, the fused features can accurately locate salient objects with fine edge details. Moreover, we propose a compensation-aware loss to improve the network's confidence in detecting hard samples. Comprehensive experiments on six public datasets demonstrate that our method outperforms 18 state-of-the-art methods.

    Problems with existing RGB-D SOD: depth maps captured in real-life scenarios pose huge challenges to accurate RGB-D saliency detection in two respects.

    • First, randomly distributed erroneous or missing regions are introduced on the depth map [Bi-stream pose guided region ensemble network for fingertip localization from stereo images]. These are usually produced by the sensors, absorption, or poor reflection.
    • Second, erroneous depth measurements occur predominantly near object boundaries [Contrast prior and fluid pyramid integration for rgbd salient object detection]. This is usually caused by the imaging principles.

    The paper's solution: in this work, we strive to embrace the challenges towards accurate RGB-D saliency detection.

    • The primary challenge towards this goal is in the design of a model that is discriminative enough to simultaneously reason about useful representations from the RGB-D data for cross-modal complementation.
    • The second challenge is in the design of a loss that keeps high confidence on hard samples caused by unreliable depth maps, which otherwise lead to inaccurate and blurry predictions.

    Different from the aforementioned methods (i.e., existing RGB-D SOD methods), our work takes the negative impacts caused by unreliable depth maps into account, and strives to exploit useful and precise information for cross-modal fusion.
    In short, everything here is about how to obtain better predictions with the aid of depth data that may be of rather poor quality, and the overall aim of the paper is fairly clear. An interesting point is the use of multiple binary masks, obtained by thresholding the depth image at different levels, to weight both the features and the loss; to some extent this accounts for the ambiguity of the depth information (see the sketch below).
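
    A rough sketch of that multi-threshold idea, assuming depth normalized to [0, 1], a simple additive fusion, and a mask-weighted BCE loss; the function names and the weighting scheme are illustrative stand-ins, not the paper's CIM or compensation-aware loss:

        import torch
        import torch.nn.functional as F

        def depth_masks(depth, thresholds=(0.25, 0.5, 0.75)):
            """depth: (B, 1, H, W) tensor normalized to [0, 1]; each threshold
            yields one binary mask, giving several coarse views of the depth layout."""
            return [(depth > t).float() for t in thresholds]

        def fuse_features(rgb_feat, depth_feat, masks):
            """Re-weight the depth stream by the binary masks before adding it
            to the RGB stream (a stand-in for a learned cross-modal fusion)."""
            fused = rgb_feat
            for m in masks:
                m = F.interpolate(m, size=rgb_feat.shape[-2:], mode="nearest")
                fused = fused + depth_feat * m
            return fused

        def weighted_bce_loss(pred_logits, gt, masks):
            """Up-weight pixels covered by the depth masks, a crude analogue of
            letting depth-derived regions modulate the loss."""
            weight = torch.ones_like(gt)
            for m in masks:
                m = F.interpolate(m, size=gt.shape[-2:], mode="nearest")
                weight = weight + m
            return F.binary_cross_entropy_with_logits(pred_logits, gt, weight=weight)
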
    How Much Time Do You Have? Modeling Multi-Duration Saliency
    Camilo Fosco, Anelise Newman, Pat Sukhum, Yun Bin Zhang, Nanxuan Zhao, Aude Oliva, Zoya Bylinskii
    [pdf] [supp] [bibtex]

    STAViS: Spatio-Temporal AudioVisual Saliency Network
    Antigoni Tsiami, Petros Koutras, Petros Maragos
    [pdf] [supp] [bibtex]

    UC-Net: Uncertainty Inspired RGB-D Saliency Detection via Conditional Variational Autoencoders
    Jing Zhang, Deng-Ping Fan, Yuchao Dai, Saeed Anwar, Fatemeh Sadat Saleh, Tong Zhang, Nick Barnes
    [pdf] [bibtex]

    There and Back Again: Revisiting Backpropagation Saliency Methods
    Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, Andrea Vedaldi
    [pdf] [supp] [bibtex]

    Adaptive Graph Convolutional Network With Attention Graph Clustering for Co-Saliency Detection
    Kaihua Zhang, Tengpeng Li, Shiwen Shen, Bo Liu, Jin Chen, Qingshan Liu
    [pdf] [bibtex]

    Interactive Two-Stream Decoder for Accurate and Fast Saliency Detection
    Huajun Zhou, Xiaohua Xie, Jian-Huang Lai, Zixuan Chen, Lingxiao Yang
    [pdf] [bibtex]

    Learning Saliency Propagation for Semi-Supervised Instance Segmentation
    Yanzhao Zhou, Xin Wang, Jianbin Jiao, Trevor Darrell, Fisher Yu
    [pdf] [bibtex]

    Inferring Attention Shift Ranks of Objects for Image Saliency
    Avishek Siris, Jianbo Jiao, Gary K.L. Tam, Xianghua Xie, Rynson W.H. Lau
    [pdf] [supp] [bibtex]

    Learning Selective Self-Mutual Attention for RGB-D Saliency Detection
    Nian Liu, Ni Zhang, Junwei Han
    [pdf] [bibtex]

    Taking a Deeper Look at Co-Salient Object Detection
    Deng-Ping Fan, Zheng Lin, Ge-Peng Ji, Dingwen Zhang, Huazhu Fu, Ming-Ming Cheng
    [pdf] [supp] [bibtex]

    JL-DCF: Joint Learning and Densely-Cooperative Fusion Framework for RGB-D Salient Object Detection
    Keren Fu, Deng-Ping Fan, Ge-Peng Ji, Qijun Zhao
    [pdf] [bibtex]

    BachGAN: High-Resolution Image Synthesis From Salient Object Layout
    Yandong Li, Yu Cheng, Zhe Gan, Licheng Yu, Liqiang Wang, Jingjing Liu
    [pdf] [bibtex]

    A2dele: Adaptive and Attentive Depth Distiller for Efficient RGB-D Salient Object Detection
    Yongri Piao, Zhengkun Rong, Miao Zhang, Weisong Ren, Huchuan Lu
    [pdf] [bibtex]

    Multi-Scale Interactive Network for Salient Object Detection
    Youwei Pang, Xiaoqi Zhao, Lihe Zhang, Huchuan Lu
    [pdf] [bibtex]

    Weakly-Supervised Salient Object Detection via Scribble Annotations
    Jing Zhang, Xin Yu, Aixuan Li, Peipei Song, Bowen Liu, Yuchao Dai
    [pdf] [bibtex]

    Label Decoupling Framework for Salient Object Detection
    Jun Wei, Shuhui Wang, Zhe Wu, Chi Su, Qingming Huang, Qi Tian
    [pdf] [bibtex]