- (ICCV 2019) Learning Lightweight Lane Detection CNNs by Self Attention Distillation
- (CVPR 2022) Cross-Image Relational Knowledge Distillation for Semantic Segmentation
- (TIP 2022) Spot-adaptive Knowledge Distillation
- (TIP 2021) Double Similarity Distillation for Semantic Image Segmentation
- (CVPR 2019) Structured Knowledge Distillation for Semantic Segmentation
- (CVPR 2019) Relational Knowledge Distillation
- (CVPR 2019) Knowledge Adaptation for Efficient Semantic Segmentation
- (ICCV 2019) Similarity-Preserving Knowledge Distillation
- (ICCV 2019) Distilling Knowledge From a Deep Pose Regressor Network
- (BMVC 2019) Graph-based Knowledge Distillation by Multi-head Attention Network
- Brief Summary
- (NeurIPS 2018) Paraphrasing Complex Network: Network Compression via Factor Transfer
- (CVPR 2018) Deep Mutual Learning
- (NIPS 2017) Learning Efficient Object Detection Models with Knowledge Distillation
- (arXiv 2017) Data Distillation: Towards Omni-Supervised Learning
- (arXiv 2017) Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
- (CVPR 2017) A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
- (CVPR 2017) Mimicking Very Efficient Network for Object Detection
- (ICLR 2017) Paying More Attention to Attention
- (ICLR 2015) FitNets: Hints for Thin Deep Nets