Deep Learning Model Compression
Related Resources
A Survey of Model Compression and Acceleration for Deep Neural Networks
Knowledge Distillation
Brief summary (a minimal distillation-loss sketch follows the paper list below)
(ICLR 2015) FitNets: Hints for Thin Deep Nets
(ICLR 2017) Paying More Attention to Attention
(CVPR 2017) Mimicking Very Efficient Network for Object Detection
(CVPR 2017) A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning
(arXiv 2017) Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
(arXiv 2017) Data Distillation: Towards Omni-Supervised Learning
(NIPS 2017) Learning Efficient Object Detection Models with Knowledge Distillation
(CVPR 2018) Deep Mutual Learning
(NeurIPS 2018) Paraphrasing Complex Network: Network Compression via Factor Transfer
(ICCV 2019) Learning Lightweight Lane Detection CNNs by Self Attention Distillation
(BMVC 2019) Graph-based Knowledge Distillation by Multi-head Attention Network
(ICCV 2019) Distilling Knowledge From a Deep Pose Regressor Network
(ICCV 2019) Similarity-Preserving Knowledge Distillation
(CVPR 2019) Knowledge Adaptation for Efficient Semantic Segmentation
(CVPR 2019) Relational Knowledge Distillation
(CVPR 2019) Structured Knowledge Distillation for Semantic Segmentation
(TIP 2021) Double Similarity Distillation for Semantic Image Segmentation
(TIP 2022) Spot-adaptive Knowledge Distillation
(CVPR 2022) Cross-Image Relational Knowledge Distillation for Semantic Segmentation
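Most of the papers above build on the soft-target distillation loss of Hinton et al. (2015): the student matches the teacher's temperature-softened output distribution in addition to the ground-truth labels. Below is a minimal PyTorch sketch of that vanilla loss; the temperature `T`, weight `alpha`, and the toy logits are illustrative assumptions, not values taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Vanilla soft-target distillation loss (Hinton et al., 2015 style).

    T and alpha are illustrative hyperparameters, not values from the
    papers listed above.
    """
    # Soften both distributions with temperature T; KL divergence between them.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitude is comparable to the hard loss
    # Ordinary cross-entropy against the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy usage with random logits for a 10-class problem.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = kd_loss(student_logits, teacher_logits, targets)
loss.backward()
```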
Pruning
CondenseNet: An Efficient DenseNet using Learned Group Convolutions
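CondenseNet learns group structure during training rather than pruning a finished model. As a point of comparison, here is a minimal sketch of plain post-training magnitude (L1) pruning using PyTorch's `torch.nn.utils.prune` utilities; the layer shape and the 50% sparsity target are arbitrary assumptions for illustration, not CondenseNet's method.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy convolutional layer; shape and the 50% sparsity target are arbitrary.
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)

# Zero out the 50% of weights with the smallest absolute value
# (unstructured L1 magnitude pruning).
prune.l1_unstructured(conv, name="weight", amount=0.5)

# The mask is applied on every forward pass; fold it into the weight tensor
# and drop the reparametrization to make the pruning permanent.
prune.remove(conv, "weight")

sparsity = (conv.weight == 0).float().mean().item()
print(f"weight sparsity after pruning: {sparsity:.0%}")
```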
Compact Model
ENet
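ENet keeps its early layers cheap with an "initial block" that concatenates a strided 3x3 convolution (13 filters) with a 2x2 max-pooling of the 3-channel input, giving 16 feature maps at half resolution. The sketch below reconstructs just that block from the paper's description; the BatchNorm/PReLU placement and the input size are assumptions written from memory, so treat it as an approximation rather than the reference implementation.

```python
import torch
import torch.nn as nn

class ENetInitialBlock(nn.Module):
    """ENet-style initial block: a strided 3x3 conv (13 filters) concatenated
    with a 2x2 max-pool of the 3-channel input -> 16 maps at half resolution.
    BatchNorm/PReLU placement is reconstructed from memory (approximation)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 13, kernel_size=3, stride=2, padding=1, bias=False)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.bn = nn.BatchNorm2d(16)
        self.act = nn.PReLU(16)

    def forward(self, x):
        # Concatenate the 13 conv maps with the 3 pooled input channels -> 16 channels.
        out = torch.cat([self.conv(x), self.pool(x)], dim=1)
        return self.act(self.bn(out))

# A 512x512 RGB input comes out as 16 x 256 x 256.
x = torch.randn(1, 3, 512, 512)
print(ENetInitialBlock()(x).shape)  # torch.Size([1, 16, 256, 256])
```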