Deep Learning Notes
Model Optimization - NPAS: A Compiler-aware Framework of Unified Network Pruning and Architecture Search for Beyond Real-Time Mobile Acceleration
Published: 2022-07-12 23:12:14
Other notes in this notebook:
Paper Notes - HALP: hardware-aware latency pruning
Model Optimization - NVIT: VISION TRANSFORMER COMPRESSION AND PARAMETER REDISTRIBUTION
Model Optimization - LITE TRANSFORMER WITH LONG-SHORT RANGE ATTENTION
Transformer - MOBILEVIT: LIGHT-WEIGHT, GENERAL-PURPOSE, AND MOBILE-FRIENDLY VISION TRANSFORMER
Model Optimization - Chasing Sparsity in Vision Transformers: An End-to-End Exploration
Domain Adaptation - CDTRANS: CROSS-DOMAIN TRANSFORMER FOR UNSUPERVISED DOMAIN ADAPTATION
Domain Adaptation - Deep Transfer Network: Unsupervised Domain Adaptation
Model Optimization - Multi-objective Magnitude-Based Pruning for Latency-Aware Deep Neural Network Compression
Model Optimization - Latency-aware automatic CNN channel pruning with GPU runtime analysis
Model Compression Notes
Monocular Depth Estimation - Domain Decluttering: Simplifying Images to Mitigate Synthetic-Real Domain Shift and Improve Depth Estimation
Monocular Depth Estimation - T2Net: Synthetic-to-Realistic Translation for Solving Single-Image Depth Estimation Tasks
Attention is All You Need
Style Transfer - MUNIT
Model Optimization - transformer
Model Optimization - Manifold Regularized Dynamic Network Pruning
Image Matching - Learnable Motion Coherence for Correspondence Pruning
Monocular Depth Estimation - NeW CRFs: Neural Window Fully-connected CRFs for Monocular Depth Estimation
Model Optimization - Network Pruning via Performance Maximization
Paper Notes - Ps and Qs: quantization-aware pruning for efficient low latency neural network inference