Title

RepVGG: Making VGG-style ConvNets Great Again
Code: https://github.com/DingXiaoH/RepVGG

PS: Structural re-parameterization is a growing trend for painless accuracy gains, and it has many neat applications; for example, in one of my projects I used a Sobel + conv combination.
ACNet is another paper in the same spirit as RepVGG.

Notes

The author's own blog post explains the idea very well:
https://zhuanlan.zhihu.com/p/344324470

In summary:

  • The training-time architecture and the inference-time architecture are not the same (the two are decoupled).
  • Training uses a multi-branch model (similar to Inception: for an input x, kernels of different sizes capture information at different receptive fields, and the results are summed).
  • At inference time, the training-time 3×3 conv, 1×1 conv, and identity branches, together with their BN layers, are folded into a single 3×3 convolution.

(figure: RepVGG architecture — training-time multi-branch blocks vs. the inference-time plain 3×3 stack)

Implementation details of re-parameterization

The figure below is taken directly from the ACNet paper; I think it illustrates the procedure very clearly.
(figure: re-parameterization illustration from the ACNet paper)

Code

Network structure

At training time, as shown in figure (B) above, the output feature is the sum of the features from the three branches.
https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py line33

    # In __init__: the three training-time branches (identity BN, 3x3 conv+BN, 1x1 conv+BN)
    self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) if out_channels == in_channels and stride == 1 else None
    self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride, padding=padding, groups=groups)
    self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=stride, padding=padding_11, groups=groups)

    # In forward: sum the three branch outputs (id_out is the identity branch's output, or 0 when that branch is absent)
    id_out = 0 if self.rbr_identity is None else self.rbr_identity(inputs)
    return nn.ReLU()(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)
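For context, `conv_bn` in the repo is a small helper that packs a bias-free Conv2d and a BatchNorm2d into an `nn.Sequential` whose children are named `conv` and `bn` (these names are what `_fuse_bn_tensor` relies on later). Roughly, it looks like this (paraphrased sketch; details may differ slightly from the repo):

    import torch.nn as nn

    def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
        # a bias-free conv followed by BN; the child names 'conv' and 'bn'
        # are what _fuse_bn_tensor later accesses as branch.conv / branch.bn
        result = nn.Sequential()
        result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                            kernel_size=kernel_size, stride=stride, padding=padding,
                                            groups=groups, bias=False))
        result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
        return result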

At inference time, the three branches are merged into a single convolution:

    self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
                                 padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)

    return nn.ReLU()(self.rbr_reparam(inputs))

The key question, then, is how the parameters of the three branches are merged into a single convolution.

Parameter fusion

The 1×1 kernel is turned into a 3×3 kernel by zero-padding.

    # Zero-pad the 1x1 kernel into a 3x3 kernel
    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])
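A quick sanity check of why the zero-padding trick is valid (my own sketch with made-up shapes, not repo code): a 3×3 kernel whose only non-zero entries sit at the center position behaves exactly like the original 1×1 kernel, once the padding is bumped from 0 to 1:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 8, 16, 16)           # made-up input
    k1x1 = torch.randn(4, 8, 1, 1)           # made-up 1x1 kernel

    y_1x1 = F.conv2d(x, k1x1, padding=0)
    k3x3 = F.pad(k1x1, [1, 1, 1, 1])         # put the 1x1 weight at the center of a 3x3 kernel
    y_3x3 = F.conv2d(x, k3x3, padding=1)     # padding=1 keeps the same output size as the 1x1 conv

    print(torch.allclose(y_1x1, y_3x3, atol=1e-6))   # True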

The conv + BN parameters are extracted and folded into the weight and bias of a single 3×3 convolution.

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

First take the conv weight (if the branch is the identity branch with no conv, an identity 3×3 kernel is constructed instead; if the branch does not exist, 0 is returned), then take the BN parameters.
New kernel weight:
    W_new = W * (gamma / std)
New kernel bias:
    b_new = beta - running_mean * (gamma / std)
where std = sqrt(running_var + eps).

(Both formulas are easy to derive: just write out the BN inference formula once.)
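To make the derivation concrete: at inference, BN computes y = gamma * (x - running_mean) / std + beta with std = sqrt(running_var + eps); substituting x = conv(input) and absorbing gamma/std into the conv weight gives exactly the two formulas above. A small self-contained check (my own sketch, not code from the repo):

    import torch
    import torch.nn as nn

    # Fold one Conv2d + BatchNorm2d pair into a single biased Conv2d and
    # verify that the outputs match in eval mode.
    conv = nn.Conv2d(8, 4, kernel_size=3, padding=1, bias=False)
    bn = nn.BatchNorm2d(4).eval()
    # give BN non-trivial statistics / affine parameters so the test is meaningful
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 2.0)
    bn.weight.data.uniform_(0.5, 1.5)
    bn.bias.data.uniform_(-1, 1)

    fused = nn.Conv2d(8, 4, kernel_size=3, padding=1, bias=True)
    with torch.no_grad():
        std = (bn.running_var + bn.eps).sqrt()
        fused.weight.copy_(conv.weight * (bn.weight / std).reshape(-1, 1, 1, 1))  # W * (gamma / std)
        fused.bias.copy_(bn.bias - bn.running_mean * bn.weight / std)             # beta - mean * (gamma / std)

        x = torch.randn(2, 8, 16, 16)
        print(torch.allclose(bn(conv(x)), fused(x), atol=1e-5))   # True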

Merging the three branches into one: re-parameterization

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def switch_to_deploy(self):
        if hasattr(self, 'rbr_reparam'):
            return
        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels, out_channels=self.rbr_dense.conv.out_channels,
                                     kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
                                     padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation, groups=self.rbr_dense.conv.groups, bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_dense')
        self.__delattr__('rbr_1x1')
        if hasattr(self, 'rbr_identity'):
            self.__delattr__('rbr_identity')
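Finally, a minimal usage sketch of the conversion (my own; the repo also ships its own whole-model conversion helper). The `create_RepVGG_A0` constructor and the import path are assumptions here, adjust to your setup; the point is just that the training-time and deploy-time models produce the same outputs:

    import torch
    from repvgg import create_RepVGG_A0   # assumed import path

    model = create_RepVGG_A0(deploy=False)
    model.eval()                          # BN must use running statistics before fusion

    x = torch.randn(1, 3, 224, 224)
    with torch.no_grad():
        y_train = model(x)
        for m in model.modules():         # re-parameterize every RepVGG block in place
            if hasattr(m, 'switch_to_deploy'):
                m.switch_to_deploy()
        y_deploy = model(x)

    print((y_train - y_deploy).abs().max())   # expected to be ~1e-5 or smaller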