1. Converting an ONNX model to a Caffe model

Tool: https://github.com/MTlab/onnx2caffe
Using MobileNetV2.onnx as an example, run:

  python convertCaffe.py ./model/MobileNetV2.onnx ./model/MobileNetV2.prototxt ./model/MobileNetV2.caffemodel

This produces the following error:

  F0620 09:23:46.248489 198559 cudnn_conv_layer.cpp:53] Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR

This is caused by MobileNet's depthwise layers: in Caffe, the prototxt entry for a depthwise layer needs engine: CAFFE, otherwise cuDNN raises the error above. onnx2caffe does not add engine: CAFFE during the automatic conversion, so you have to add the parameter yourself to every Convolution layer with group > 1, and likewise to any depthwise Deconvolution layers (a script to automate this is sketched below the example). For instance:

  layer {
    name: "362"
    type: "Convolution"
    bottom: "361"
    top: "362"
    convolution_param {
      num_output: 32
      bias_term: false
      group: 32
      pad_h: 1
      pad_w: 1
      kernel_h: 3
      kernel_w: 3
      stride_h: 1
      stride_w: 1
      dilation: 1
      engine: CAFFE
    }
  }
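Rather than editing each layer by hand, a minimal sketch like the following can patch the prototxt automatically. It assumes pycaffe (and therefore caffe.proto.caffe_pb2) is importable and the protobuf package is installed; the paths are illustrative.

  from caffe.proto import caffe_pb2
  from google.protobuf import text_format

  def add_engine_caffe(prototxt_in, prototxt_out):
      # Parse the prototxt into a NetParameter message
      net = caffe_pb2.NetParameter()
      with open(prototxt_in) as f:
          text_format.Merge(f.read(), net)
      # Force the CAFFE engine on every grouped (de)convolution layer
      for layer in net.layer:
          p = layer.convolution_param
          if layer.type in ("Convolution", "Deconvolution") and p.group > 1:
              p.engine = caffe_pb2.ConvolutionParameter.CAFFE
      with open(prototxt_out, "w") as f:
          f.write(text_format.MessageToString(net))

  add_engine_caffe("./model/MobileNetV2.prototxt", "./model/MobileNetV2.prototxt")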

Then comment out lines 70 and 71 of convertCaffe.py, so that re-running the conversion does not overwrite the hand-edited prototxt:

  # with open(prototxt_save_path, 'w') as f:
  #     print(net, file=f)

Re-run the script above and the conversion will go through.
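Once the conversion succeeds, it is worth a quick sanity check that the Caffe model matches the ONNX one. A rough sketch, assuming pycaffe and onnxruntime are both installed and the input shape is the usual 1x3x224x224 for MobileNetV2:

  import numpy as np
  import caffe
  import onnxruntime as ort

  x = np.random.rand(1, 3, 224, 224).astype(np.float32)

  # Run the original ONNX model
  sess = ort.InferenceSession("./model/MobileNetV2.onnx")
  onnx_out = sess.run(None, {sess.get_inputs()[0].name: x})[0]

  # Run the converted Caffe model on the same input
  net = caffe.Net("./model/MobileNetV2.prototxt", "./model/MobileNetV2.caffemodel", caffe.TEST)
  net.blobs[net.inputs[0]].data[...] = x
  caffe_out = list(net.forward().values())[0]

  print("max abs diff:", np.abs(onnx_out - caffe_out).max())  # should be near zero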

ps: the following error does not always occur. If the network contains Deconvolution layers, you may hit:

  F0620 09:41:51.688936 205946 base_conv_layer.cpp:123] Check failed: num_output_ % group_ == 0 (1 vs. 0) Number of output should be multiples of group.

This happens because onnx2caffe derives num_output incorrectly: it emits 1, and 1 is not a multiple of group (here 64), so Caffe's check fails. Fix it by hand:

  layer {
    name: "508"
    type: "Deconvolution"
    bottom: "507"
    top: "508"
    convolution_param {
      num_output: 64 # change to 64; onnx2caffe generated 1
      bias_term: false
      group: 64
      pad_h: 0
      pad_w: 0
      kernel_h: 2
      kernel_w: 2
      stride_h: 2
      stride_w: 2
      engine: CAFFE # add this
    }
  }
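To locate every layer that trips this check without reading the prototxt line by line, a small scan along the lines of the patch script above works (same pycaffe/protobuf assumptions; the path is illustrative):

  from caffe.proto import caffe_pb2
  from google.protobuf import text_format

  net = caffe_pb2.NetParameter()
  with open("./model/MobileNetV2.prototxt") as f:
      text_format.Merge(f.read(), net)
  for layer in net.layer:
      p = layer.convolution_param
      # mirrors the num_output_ % group_ == 0 check in base_conv_layer.cpp
      if layer.type in ("Convolution", "Deconvolution") and p.num_output % p.group != 0:
          print(layer.name, "num_output:", p.num_output, "group:", p.group)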

2. Fusing BN and convolution layers in ONNX

  import onnx
  from onnx import optimizer

  ori_model = onnx.load("resnet18.onnx")  # load the original model
  # all_passes = optimizer.get_available_passes()  # list all available optimization passes
  # passes = ['fuse_add_bias_into_conv', 'fuse_bn_into_conv']
  passes = ['fuse_bn_into_conv']  # only fuse BN into conv
  optim_model = optimizer.optimize(ori_model, passes)
  onnx.save(optim_model, "resnet18-sim.onnx")
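To confirm the fusion did not change the network's behavior, a quick numerical check can compare the two models on the same random input. A sketch, assuming onnxruntime is installed and a standard 1x3x224x224 ResNet-18 input:

  import numpy as np
  import onnxruntime as ort

  x = np.random.rand(1, 3, 224, 224).astype(np.float32)
  ref = ort.InferenceSession("resnet18.onnx")
  opt = ort.InferenceSession("resnet18-sim.onnx")
  name = ref.get_inputs()[0].name
  y_ref = ref.run(None, {name: x})[0]
  y_opt = opt.run(None, {name: x})[0]
  # differences should be float rounding only
  print("max abs diff:", np.abs(y_ref - y_opt).max())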

ps: exporting from different PyTorch versions can break the code above; the recommended export calls are:

  # pytorch <= 1.1
  torch.onnx.export(model, input, "resnet18.onnx", verbose=False, input_names=input_names, output_names=output_names)
  # pytorch >= 1.2
  torch.onnx.export(model, input, "resnet18.onnx", verbose=False, input_names=input_names, output_names=output_names, keep_initializers_as_inputs=True)
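For reference, a self-contained export sketch; the model, input size, and blob names here are illustrative, and torchvision is assumed to be available:

  import torch
  import torchvision

  model = torchvision.models.resnet18(pretrained=True).eval()
  dummy_input = torch.randn(1, 3, 224, 224)
  input_names = ["input"]
  output_names = ["output"]

  # keep_initializers_as_inputs only applies on PyTorch >= 1.2
  torch.onnx.export(model, dummy_input, "resnet18.onnx", verbose=False,
                    input_names=input_names, output_names=output_names,
                    keep_initializers_as_inputs=True)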
