This article walks through pruning YOLOv5 with the advanced pruning mode. Compared with pruning an ordinary network, YOLO pruning has a few peculiarities:
- Operators such as SiLU in YOLO cannot be converted to ONNX directly with PyTorch's built-in export functions: use YOLO's own export tool (which performs operator replacement and related steps) to produce the ONNX, then pass it to the pruning function.
- YOLO's ONNX export tool fuses BN automatically: fusion must be disabled, because the pruning tool relies on the BN layers.
- YOLO updates gradients through a GradScaler, which scales the loss by a large factor (65536 by default): set the flag of the GradDecay sparsity function update_layer_grad_decay that indicates whether sparsity regularization runs under a scaler to True; the pruning tool then handles the scaler case automatically.
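Since the unfused BN layers are central to the points above, here is a small illustrative sketch (plain PyTorch, not YOLO or easypruner code) of what BN fusion does and why the pruning tool needs it disabled: BN-based pruning ranks channels by the magnitude of the BN gamma (bn.weight), and after fusion the gammas are folded into the conv weights and disappear.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv, bn):
    # fold an eval-mode BN into the preceding conv: y = scale * (Wx + b) + shift
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, bias=True)
    scale = bn.weight.detach() / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.detach() * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias.detach() if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.detach()
    return fused

conv, bn = nn.Conv2d(3, 8, 3, padding=1).eval(), nn.BatchNorm2d(8).eval()
# give the BN non-trivial statistics so the fusion is actually exercised
bn.running_mean.uniform_(-1, 1); bn.running_var.uniform_(0.5, 1.5)
bn.weight.data.uniform_(0.5, 1.5); bn.bias.data.uniform_(-1, 1)

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    # same output, but the fused module has no BN layer (and no gammas) left
    print(torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5))
```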
Step 1. Regular training.
Step 2. Export an ONNX model to serve as the basis for the network topology analysis during pruning.
Step 2.1 Modify the model-loading code to remove the BN-fusion step.
In models/experimental.py, copy the attempt_load function to attempt_load_without_fuse and remove the fuse() call made while loading the model:
```python
def attempt_load_without_fuse(weights, map_location=None):
    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
        attempt_download(w)
        model.append(torch.load(w, map_location=map_location)['model'].float().eval())  # load FP32 model, no fuse() here!

    # Compatibility updates
    for m in model.modules():
        if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
            m.inplace = True  # pytorch 1.7.0 compatibility
        elif type(m) is Conv:
            m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility

    if len(model) == 1:
        return model[-1]  # return model
    else:
        print('Ensemble created with %s\n' % weights)
        for k in ['names', 'stride']:
            setattr(model, k, getattr(model[-1], k))
        return model  # return ensemble
```
Step 2.2 Export an ONNX that keeps the BN layers, for the network topology analysis during pruning.
Copy export.py to export_with_bn.py and change

```python
from models.experimental import attempt_load
```

to

```python
from models.experimental import attempt_load_without_fuse as attempt_load
```

Then convert the regularly trained .pt to ONNX; remember to add --train:

```shell
python export_with_bn.py --train --weights runs/train/exp/weights/best.pt
```
If your YOLO version is old, it may report that the --train option does not exist. In that case, edit the export function in export_with_bn.py directly and add the training and do_constant_folding arguments:
```python
torch.onnx.export(model, img, f, verbose=False, opset_version=11, input_names=['images'],
                  output_names=['classes', 'boxes'] if y is None else ['output'],
                  dynamic_axes={'images': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
                  training=torch.onnx.TrainingMode.TRAINING,
                  do_constant_folding=False)
```
Step 3. Copy train.py to train_sparsity.py and add/modify the code below.
(Optional) Disable the EMA update:

```python
# Optimize
if ni % accumulate == 0:
    scaler.step(optimizer)  # optimizer.step
    scaler.update()
    optimizer.zero_grad()
    if ema:
        pass  # ema.update(model)
```
Specify the layers to be pruned; they can be obtained automatically with getprunelayer:

```python
# after building the network object net, run:
norm_layer_names = getprunelayer(net)
```

For YOLOv5s, norm_layer_names = ['model.0.conv.bn', 'model.1.bn', 'model.2.m.0.cv1.bn', 'model.2.cv3.bn', 'model.3.bn', 'model.4.m.0.cv1.bn', 'model.4.m.1.cv1.bn', 'model.4.m.2.cv1.bn', 'model.5.bn', 'model.6.m.0.cv1.bn', 'model.6.m.1.cv1.bn', 'model.6.m.2.cv1.bn', 'model.7.bn', 'model.8.cv2.bn', 'model.9.cv1.bn', 'model.9.m.0.cv1.bn', 'model.9.cv3.bn', 'model.13.cv1.bn', 'model.13.m.0.cv1.bn', 'model.13.cv3.bn', 'model.17.cv1.bn', 'model.17.m.0.cv1.bn', 'model.17.cv3.bn', 'model.20.cv1.bn', 'model.20.m.0.cv1.bn', 'model.20.cv3.bn', 'model.23.cv1.bn', 'model.23.m.0.cv1.bn', 'model.23.cv3.bn']
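To show where names like 'model.0.conv.bn' come from, here is a hypothetical stand-in for easypruner's getprunelayer: the names are simply the qualified module names of the BatchNorm2d layers. (The real getprunelayer also uses the ONNX topology analysis to drop layers that are unsafe to prune, so this sketch over-approximates.)

```python
import torch.nn as nn

def get_bn_layer_names(model):
    # qualified names of every BatchNorm2d submodule, e.g. 'model.0.conv.bn'
    return [name for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)]

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
                    nn.Conv2d(8, 16, 3), nn.BatchNorm2d(16))
print(get_bn_layer_names(net))  # → ['1', '4']
```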
- Between scaler.scale(loss).backward() and scaler.step(optimizer), add the sparsity-constraint call:

```python
from easypruner.regularize.sparsity import update_layer_grad_decay, display_layer
from easypruner.fastpruner import getprunelayer
# ...

# Backward
scaler.scale(loss).backward()

################ apply the sparsity constraint and monitor the sparsity ################
update_layer_grad_decay(model, norm_layer_names,
                        lr=[optimizer.param_groups[0]['lr'], optimizer.param_groups[2]['lr']],
                        scaler=True, mask_dict=0.5, epoch=epoch,
                        epoch_decay=int(0.75 * epochs), iters=len(pbar))
if i % 100 == 0:
    display_layer(model, norm_layer_names)
########################################################################################

# Optimize
if ni % accumulate == 0:
    scaler.step(optimizer)  # optimizer.step
    scaler.update()
    optimizer.zero_grad()
    if ema:
        pass  # ema.update(model)
```
Note: unlike in the advanced-mode pruning document, this shows an alternative way to call update_layer_grad_decay: pass the learning rates in directly rather than passing the optimizer and letting the function fetch the current learning rate each iteration. Both forms are correct; passing the learning rate directly is somewhat more efficient.
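The idea behind the call above can be sketched in a few lines (a simplified, hypothetical version, not easypruner's implementation, which additionally handles epoch decay, masks, and per-group learning rates): after backward(), add an L1 subgradient to each BN gamma so that small gammas are driven toward zero during training.

```python
import torch
import torch.nn as nn

def add_bn_l1_grad(model, layer_names, penalty=1e-3, scale=1.0):
    # penalty is a hypothetical sparsity strength; when the gradients went
    # through torch.cuda.amp.GradScaler, multiply the extra term by the same
    # scale factor so it survives the later unscaling (the scaler=True case)
    modules = dict(model.named_modules())
    for name in layer_names:
        bn = modules[name]
        bn.weight.grad.add_(scale * penalty * torch.sign(bn.weight.detach()))

net = nn.Sequential(nn.Conv2d(3, 4, 3), nn.BatchNorm2d(4))
net(torch.randn(2, 3, 8, 8)).sum().backward()
before = net[1].weight.grad.clone()
add_bn_l1_grad(net, ['1'], penalty=1e-3)
print(torch.allclose(net[1].weight.grad, before + 1e-3))  # gammas start at 1.0, so sign == +1
```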
Step 4. Run the pruning on the model from Step 3; no finetuning is needed.
After loading the network, run Order pruning (the threshold-based pruning method):
```python
from easypruner import fastpruner

model.cpu()
fastpruner.fastpruner(model, prune_factor=0.01, method="Order", input_dim=[3, 416, 416],
                      onnx_file="runs/***/my_yolo.onnx")  # prune_factor is the pruning threshold; onnx_file is the ONNX exported earlier
model.to(device)

save_path = '/your_path/model_pruned.pt'   # optional
torch.save(model.state_dict(), save_path)  # optional
```
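What the Order method selects can be illustrated on a single Conv+BN pair (a conceptual sketch only; easypruner applies the threshold across the whole topology and rebuilds the downstream convolutions as well): keep only the channels whose BN gamma magnitude exceeds prune_factor.

```python
import torch
import torch.nn as nn

conv, bn = nn.Conv2d(3, 6, 3), nn.BatchNorm2d(6)
# hypothetical gammas after sparsity training: three are near zero
bn.weight.data = torch.tensor([0.5, 0.005, 0.3, 0.002, 0.8, 0.009])
prune_factor = 0.01

# indices of the channels that survive the threshold
keep = (bn.weight.detach().abs() > prune_factor).nonzero().flatten()
pruned = nn.Conv2d(3, len(keep), 3)
pruned.weight.data = conv.weight.data[keep]
pruned.bias.data = conv.bias.data[keep]
print(len(keep), 'of', bn.num_features, 'channels kept')  # 3 of 6 channels kept
```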