Overview

  • Only two oral presentations at CVPR 2019:
    • Video saliency: Shifting More Attention to Video Salient Object Detection
    • Co-saliency: Deep Instance Co-segmentation by Co-peak Search and Co-saliency Detection
  • CVPR 2019 accepted 19 saliency-related papers in total, 17 of them directly relevant:
    • object-level / image-based saliency: 10 papers
    • instance-level saliency: 2 papers
    • co-saliency: 2 papers
    • video saliency: 1 paper
    • RGB-D saliency: 1 paper
    • fixation saliency: 1 paper
    • stereoscopic video saliency: 1 paper
  • Author statistics:
    • Ming-Ming Cheng: 5 papers
    • Huchuan Lu: 4 papers
    • Jianbing Shen: 3 papers
    • Wenguan Wang: 3 papers
    • Deng-Ping Fan: 2 papers
    • Ali Borji: 2 papers
    • Jianmin Jiang: 2 papers
    • Mengyang Feng: 2 papers (PhD student of Huchuan Lu)

Attentive Feedback Network for Boundary-Aware Salient Object Detection

Mengyang Feng (Dalian University of Technology); Huchuan Lu (Dalian University of Technology)*; Errui Ding (Baidu Inc.)

  • Clearly defined problem: focuses on resolving blurry saliency boundaries
  • Implements morphological dilation and erosion with max pooling, yielding a boundary attention map
  • Boundary-Enhanced Loss
  • A new global perception module that obtains a global receptive field via convolutions over block-stacked feature maps
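The max-pooling trick in the bullets above can be sketched in a few lines: dilating a binary mask is a sliding max, eroding it is the inverted sliding max of the inverted mask, and their difference gives a boundary band usable as an attention map. A minimal NumPy sketch, not the authors' code; the 3×3 kernel size is an assumption:

```python
import numpy as np

def max_pool2d(x, k=3):
    """Sliding-window max with same-size output (stride 1, zero padding)."""
    p = k // 2
    xp = np.pad(x, p, mode="constant", constant_values=0)
    out = np.zeros_like(x)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def boundary_attention(mask, k=3):
    """Dilation minus erosion of a {0,1} mask yields a boundary band."""
    dilated = max_pool2d(mask, k)                # dilation = max-pool
    eroded = 1.0 - max_pool2d(1.0 - mask, k)     # erosion = inverted max-pool of inverted mask
    return dilated - eroded

mask = np.zeros((7, 7))
mask[2:5, 2:5] = 1.0   # a 3x3 salient square
att = boundary_attention(mask)   # 1 on the boundary ring, 0 elsewhere
```

A CNN realizes the same operation with stride-1 max-pooling layers, so the boundary map stays differentiable-friendly and cheap to compute on GPU.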

CapSal: Leveraging Captioning to Boost Semantics for Salient Object Detection

Lu Zhang (Dalian University of Technology); Huchuan Lu (Dalian University of Technology)*; Zhe Lin (Adobe Research); Jianming Zhang (Adobe Research); You He (Naval Aviation University)

Multi-source weak supervision for saliency detection

Yu Zeng (Dalian University of Technology)*; Huchuan Lu (Dalian University of Technology); Lihe Zhang (Dalian University of Technology); Yunzhi Zhuge (Dalian University of Technology); Mingyang Qian (Dalian University of Technology); Yizhou Yu (Deepwise AI Lab)

A Mutual Learning Method for Salient Object Detection with intertwined Multi-Supervision

Runmin Wu (Dalian University of Technology); Mengyang Feng (Dalian University of Technology); Wenlong Guan (Dalian University of Technology); Dong Wang (Dalian University of Technology); Huchuan Lu (Dalian University of Technology)*; Errui Ding (Baidu Inc.)

  • Multi-task learning with three supervision signals: saliency, foreground contour, and edge
  • The saliency detection and edge detection tasks guide and reinforce each other
  • A mutual-learning strategy helps the network parameters converge to better local minima, improving performance
  • Alternating supervision between salient object detection and foreground contour detection produces more uniformly highlighted saliency maps with sharper edges
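The three supervision signals can be combined as a weighted sum of per-task losses. A minimal NumPy sketch with plain BCE per task; the equal weights are an illustrative assumption, not values from the paper:

```python
import numpy as np

def bce(pred, gt, eps=1e-7):
    """Mean binary cross-entropy between a predicted map and its ground truth."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(gt * np.log(pred) + (1 - gt) * np.log(1 - pred)).mean())

def multi_supervision_loss(preds, gts, weights=(1.0, 1.0, 1.0)):
    """Weighted sum of BCE losses for the saliency, foreground-contour,
    and edge predictions."""
    return sum(w * bce(p, g) for w, p, g in zip(weights, preds, gts))

rng = np.random.default_rng(0)
gt = (rng.random((4, 4)) > 0.5).astype(float)
# perfect predictions on all three tasks -> near-zero loss
loss = multi_supervision_loss([gt, gt, gt], [gt, gt, gt])
```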

A Simple Pooling-Based Design for Real-Time Salient Object Detection

Jiang-Jiang Liu (Nankai University)*; Qibin Hou (Nankai University); Ming-Ming Cheng (Nankai University); Jiashi Feng (NUS); Jianmin Jiang (Shenzhen University)

  • Pooling is used throughout to obtain multi-scale receptive fields:
    • a pyramid pooling module captures multi-scale information
    • pyramid pooling embedded along the decoder path aggregates features
  • Joint training on an edge dataset and saliency datasets strengthens boundaries
  • Very strong overall performance
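The pyramid pooling idea can be sketched without a deep-learning framework: pool the feature map onto several coarse grids, upsample each back to the input size, and stack the results. A minimal NumPy sketch; the bin sizes (1, 2, 4) are an illustrative assumption:

```python
import numpy as np

def adaptive_avg_pool(x, out_size):
    """Average-pool a (H, W) map down to (out_size, out_size) bins."""
    h, w = x.shape
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            hs, he = i * h // out_size, (i + 1) * h // out_size
            ws, we = j * w // out_size, (j + 1) * w // out_size
            out[i, j] = x[hs:he, ws:we].mean()
    return out

def upsample_nearest(x, size):
    """Nearest-neighbour upsampling back to (size, size)."""
    rep = size // x.shape[0]
    return np.repeat(np.repeat(x, rep, axis=0), rep, axis=1)

def pyramid_pool(feat, bins=(1, 2, 4)):
    """Stack the input with pooled-and-upsampled copies at several grid sizes."""
    h = feat.shape[0]
    branches = [feat] + [upsample_nearest(adaptive_avg_pool(feat, b), h)
                         for b in bins]
    return np.stack(branches, axis=0)   # (1 + len(bins), H, W)

feat = np.arange(64, dtype=float).reshape(8, 8)
out = pyramid_pool(feat)
```

In the real model a 1×1 convolution follows each branch before concatenation; the sketch only shows how coarser bins inject progressively larger receptive fields.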

Contrast Prior and Fluid Pyramid Integration for RGBD Salient Object Detection

Jiaxing Zhao (Nankai University); Yang Cao (Nankai University); Deng-Ping Fan (Nankai University); Xuan-Yi Li (Nankai University); Le Zhang (Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR)); Ming-Ming Cheng (Nankai University)

Shifting More Attention to Video Salient Object Detection

Deng-Ping Fan (Nankai University); Wenguan Wang (Inception Institute of Artificial Intelligence); Ming-Ming Cheng (Nankai University)*; Jianbing Shen (Beijing Institute of Technology)

An Iterative and Cooperative Top-down and Bottom-up Inference Network for Salient Object Detection

Wenguan Wang (Inception Institute of Artificial Intelligence); Jianbing Shen (Beijing Institute of Technology)*; Ming-Ming Cheng (Nankai University); Ling Shao (Inception Institute of Artificial Intelligence)

  • Iterative top-down and bottom-up saliency inference.
  • Each step uses convolutional RNN, LSTM, or GRU cells to build the inference layers and produce side outputs, trained with a deep-supervision strategy.
  • The whole model can be viewed as a general framework; many FCN-based models can be regarded as variants of it.
  • Several parameter-sharing and weight-sharing variants reduce the parameter count at a modest cost in accuracy.
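The repeated-update, per-step side-output pattern described above can be illustrated with a toy loop. The paper uses convolutional RNN/LSTM/GRU cells; the NumPy sketch below replaces them with a box filter and a shared pointwise update purely to show the structure — the filter, step count, and update rule are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth3x3(x):
    """3x3 box filter (a stand-in for a learned convolution)."""
    p = np.pad(x, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def iterative_inference(feat, steps=3):
    """Apply one shared update repeatedly; each step emits a side output,
    which would each receive a loss under deep supervision."""
    state = np.zeros_like(feat)
    side_outputs = []
    for _ in range(steps):
        state = sigmoid(smooth3x3(feat + state))  # simplified recurrent update
        side_outputs.append(state)
    return side_outputs

feat = np.zeros((6, 6))
feat[2:4, 2:4] = 4.0   # strong evidence in the center
outs = iterative_inference(feat)
```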

S4Net: Single Stage Salient-Instance Segmentation

Ruochen Fan (Tsinghua University); Ming-Ming Cheng (Nankai University)*; Qibin Hou (Nankai University); Tai-Jiang Mu (Tsinghua University); Jingdong Wang (Microsoft Research); Shimin Hu (Tsinghua University)

PAGE-Net: Salient Object Detection with Pyramid Attention and Salient Edge

Wenguan Wang (Inception Institute of Artificial Intelligence); Shuyang Zhao (Beijing Institute of Technology); Jianbing Shen (Beijing Institute of Technology)*; Steven Hoi (SMU); Ali Borji (University of Central Florida)

  • Pyramid attention module
  • The salient-edge detection module is supervised with edges derived by dilating the saliency ground truth
  • Multi-level predictions are fused to produce the final output
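Deriving the edge supervision from the saliency ground truth, as the second bullet describes, amounts to dilating the mask and keeping the added ring. A minimal NumPy sketch; the 3×3 structuring element is an assumption:

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation via a stride-1 sliding max (zero padding)."""
    p = k // 2
    xp = np.pad(mask, p)
    h, w = mask.shape
    return np.array([[xp[i:i + k, j:j + k].max() for j in range(w)]
                     for i in range(h)])

def edge_label(mask, k=3):
    """Edge supervision: the ring added by dilating the saliency ground truth."""
    return dilate(mask, k) - mask

gt = np.zeros((6, 6))
gt[2:4, 2:4] = 1.0   # a 2x2 salient square
edges = edge_label(gt)   # 1 on the ring just outside the square
```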

Understanding and Visualizing Deep Visual Saliency Models

Sen He (University of Exeter)*; Hamed Rezazadegan Tavakoli (Aalto University); Ali Borji (University of Central Florida); Yang Mi (University of Exeter); Nicolas Pugeault (Exeter)

  • fixation saliency

Pyramid Feature Selective Network for Saliency detection

Ting Zhao (Harbin Institute of Technology, China)*; Xiangqian Wu (Harbin Institute of Technology, China)

  • ASPP-like structure
  • Spatial (position) attention on early features; channel attention on deeper features
  • A gradient operator extracts boundaries, paired with a boundary loss
  • Data augmentation
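The channel attention mentioned above follows the usual squeeze-and-excitation recipe: global-average-pool each channel, pass the vector through a small bottleneck, and rescale the channels by the resulting gate. A generic NumPy sketch — the exact module in this paper differs, and the bottleneck ratio and weights here are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """SE-style gating: pool each channel to a scalar, run a two-layer
    bottleneck (ReLU then sigmoid), and rescale channels by the gate."""
    squeeze = feat.mean(axis=(1, 2))                     # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,), values in (0, 1)
    return feat * gate[:, None, None]

rng = np.random.default_rng(0)
C = 4
feat = rng.random((C, 5, 5))
w1 = rng.standard_normal((C // 2, C))   # bottleneck: C -> C/2
w2 = rng.standard_normal((C, C // 2))   # expand back: C/2 -> C
out = channel_attention(feat, w1, w2)
```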

Co-saliency Detection via Mask-guided Fully Convolutional Networks with Multi-scale Label Smoothing

Kaihua Zhang (NUIST)*; Tengpeng Li (NUIST); Bo Liu (Rutgers University); Qingshan Liu (Nanjing University of Information Science & Technology)

Cascaded Partial Decoder for Fast and Accurate Salient Object Detection

Zhe Wu (University of Chinese Academy of Sciences)*; Li Su (University of Chinese Academy of Sciences); Qingming Huang (University of Chinese Academy of Sciences)

  • Bifurcated backbone structure
  • Two decoder branches: one produces attention, the other the prediction
  • Fast, with competitive accuracy

BASNet: Boundary Aware Salient Object Detection

Xuebin Qin (University of Alberta)*; Zichen Zhang (University of Alberta); Chenyang Huang (University of Alberta); Chao Gao (University of Alberta); Masood Dehghan (University of Alberta); Martin Jagersand (University of Alberta)

  • The final loss is the sum of BCE, SSIM (structural similarity), and IoU losses, giving pixel-level, region-level, and image-level supervision (works very well)
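The hybrid loss can be sketched directly. A minimal NumPy version; note the SSIM term below is computed over the whole map for brevity, whereas the paper computes it over local windows:

```python
import numpy as np

def bce_loss(p, g, eps=1e-7):
    """Pixel-level term: mean binary cross-entropy."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(g * np.log(p) + (1 - g) * np.log(1 - p)).mean())

def iou_loss(p, g, eps=1e-7):
    """Map-level term: one minus soft intersection-over-union."""
    inter = (p * g).sum()
    union = (p + g - p * g).sum()
    return float(1.0 - inter / (union + eps))

def ssim_loss(p, g, c1=0.01 ** 2, c2=0.03 ** 2):
    """Region-level term: one minus SSIM (single global window for brevity)."""
    mp, mg = p.mean(), g.mean()
    vp, vg = p.var(), g.var()
    cov = ((p - mp) * (g - mg)).mean()
    ssim = ((2 * mp * mg + c1) * (2 * cov + c2)) / \
           ((mp ** 2 + mg ** 2 + c1) * (vp + vg + c2))
    return float(1.0 - ssim)

def hybrid_loss(p, g):
    """BCE + SSIM + IoU: pixel-, region-, and map-level supervision combined."""
    return bce_loss(p, g) + ssim_loss(p, g) + iou_loss(p, g)

g = np.zeros((8, 8))
g[2:6, 2:6] = 1.0
perfect = hybrid_loss(g, g)   # near zero for a perfect prediction
```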

Deep Instance Co-segmentation by Co-peak Search and Co-saliency Detection

Kuang-Jui Hsu (Academia Sinica)*; Yen-Yu Lin (Academia Sinica); Yung-Yu Chuang (National Taiwan University)

Learning to Explore Intrinsic Saliency for Stereoscopic Video

Qiudan Zhang (City University of Hong Kong); Xu Wang (Shenzhen University)*; Shiqi Wang (City University of Hong Kong); Shikai Li (Shenzhen University); Sam Kwong (City University of Hong Kong); Jianmin Jiang (Shenzhen University)

Scene Categorization from Contours: Medial Axis Based Salience Measures

Morteza Rezanejad (McGill University)*; Gabriel Downs (McGill University); John Wilder (University of Toronto); Dirk Bernhardt-Walther (University of Toronto); Sven Dickinson (University of Toronto); Allan Jepson (Samsung); Kaleem Siddiqi (McGill University)

  • scene categorization of line drawings derived from popular databases

The computer vision community has witnessed recent advances in scene categorization from images, with state-of-the-art systems now achieving impressive recognition rates on challenging benchmarks such as the Places365 dataset. Such systems have been trained on photographs which include color, texture and shading cues. The geometry of shapes and surfaces, as conveyed by scene contours, is not explicitly considered for this task. Remarkably, humans can accurately recognize natural scenes from line drawings, which consist solely of contour-based shape cues.

Here we report the first computer vision study on scene categorization of line drawings derived from popular databases including an artist scene database, MIT67 and Places365.

Specifically, we use off-the-shelf pre-trained CNNs to perform scene classification given only contour information as input, and find performance levels well above chance. We also show that medial-axis based contour salience methods can be used to select more informative subsets of contour pixels, and that the variation in CNN classification performance on various choices for these subsets is qualitatively similar to that observed in human performance. Moreover, when the salience measures are used to weight the contours, as opposed to pruning them, we find that these weights boost our CNN performance above that for unweighted contour input. That is, the medial axis based salience weights appear to add useful information that is not available when CNNs are trained to use contours alone.

Keywords: Scene Categorization, Line Drawings, Perceptual Grouping, Medial Axis, Contour Salience, Contour Symmetry, Contour Separation

Few-Shot Learning via Saliency-guided Hallucination of Samples

Hongguang Zhang (Australian National University)*; Jing Zhang (Australian National University); Piotr Koniusz (Data61/CSIRO, ANU)

Reference links