https://paperswithcode.com/paper/enhanced-deep-residual-networks-for-single

Overview

This paper proposes an enhanced super-resolution algorithm. By removing unnecessary modules from the conventional ResNet architecture, the model becomes more compact while achieving better results (see Residual blocks below). A residual scaling factor is also used to train large models stably (see the residual scaling factor below). The proposed single-scale model surpasses existing models and reaches state-of-the-art performance (EDSR, below). In addition, a multi-scale super-resolution network is developed to reduce model size and training time: with scale-specific modules and a shared main network, the multi-scale model handles super-resolution at various scales efficiently within a unified framework. Although this multi-scale model stays compact compared with a set of single-scale models, it performs on par with the single-scale SR models (MDSR, below). Both the proposed single-scale and multi-scale models achieve top-ranking results on the standard benchmark datasets and on the DIV2K dataset.

Parameters & performance · Tuning details · Code


Residual blocks

[Figure: residual block designs — original ResNet vs. SRResNet vs. proposed]
Figure above: the authors argue that batch normalization reduces the network's flexibility and adds memory and compute overhead, so they remove the BN layers and even the ReLU outside the residual branch. Experiments confirm that both accuracy and efficiency improve.
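A minimal sketch of the simplified block (PyTorch; the class name and the 3×3 kernel are illustrative — the repo's full ResBlock appears in common.py below):

import torch
import torch.nn as nn

class SimpleResBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv, no BN, no ReLU after the sum."""
    def __init__(self, n_feats=64, kernel_size=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, kernel_size, padding=kernel_size // 2),
            nn.ReLU(True),
            nn.Conv2d(n_feats, n_feats, kernel_size, padding=kernel_size // 2),
        )

    def forward(self, x):
        return x + self.body(x)  # identity skip; no activation after the addition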

Single-scale model

[Figure: single-scale model (EDSR) architecture]

Figure above: the authors' final single-scale model (EDSR), which uses 32 ResBlocks with 256 feature channels per convolution.
Residual scaling factor: widening the network (more feature channels) raises capacity with fewer extra parameters than deepening it, but training becomes numerically unstable once the number of feature maps grows large. The authors solve this with a residual scaling factor: the output of each residual branch is multiplied by a constant (0.1 in the paper) before being added back, shown as the green Mult block in the figure.
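In code, residual scaling is a single multiplication on the residual branch before the skip-add; a self-contained sketch (0.1 matches the paper's value for the 256-feature EDSR):

import torch
import torch.nn as nn

n_feats = 256
body = nn.Sequential(                        # one residual branch: conv-ReLU-conv
    nn.Conv2d(n_feats, n_feats, 3, padding=1),
    nn.ReLU(True),
    nn.Conv2d(n_feats, n_feats, 3, padding=1),
)
res_scale = 0.1                              # paper's value for the large EDSR

x = torch.randn(1, n_feats, 48, 48)
out = x + body(x) * res_scale                # scale the residual, then skip-add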

Training strategy

[Figure: ×4 training curves, from scratch vs. warm-started from ×2 weights]
Figure above: when training the ×4 model, initializing it with the parameters of a trained ×2 model converges faster than training from scratch.
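A sketch of this warm start, assuming the repo's option names (only the scale-specific tail differs between ×2 and ×4; EDSR's custom load_state_dict, shown below, skips mismatched tail parameters):

import torch
from types import SimpleNamespace
from model.edsr import EDSR

# Hypothetical stand-in for the repo's parsed options (large EDSR, x4).
args = SimpleNamespace(n_resblocks=32, n_feats=256, scale=[4],
                       rgb_range=255, n_colors=3, res_scale=0.1)
model_x4 = EDSR(args)

# Load everything that matches from a trained x2 checkpoint; the x4
# upsampling tail keeps its fresh initialization ('edsr_x2.pt' is illustrative).
x2_state = torch.load('edsr_x2.pt', map_location='cpu')
model_x4.load_state_dict(x2_state, strict=False)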

Multi-scale model

[Figure: multi-scale model (MDSR) architecture]
Figure above: the authors' multi-scale model (MDSR), which uses 80 ResBlocks with 64 feature channels per convolution.
Details (see the code below): the core consists of scale-specific pre-processing modules, a shared main body, and scale-specific upsampling modules. The pre-processing part (the ×2/×3/×4 branches in the figure) is a stack of ResBlocks. The main body is also a stack of ResBlocks, but it is wrapped as a whole in an outer residual connection. Upsampling is implemented with nn.PixelShuffle. In MDSR, the ×2/×3/×4 paths differ only in their pre-processing and upsampling branches; everything else is shared.

Parameter and performance comparison

[Figure: parameter counts of SRResNet, EDSR, and MDSR]
Figure above: compared with SRResNet, EDSR and MDSR still have many parameters, but the point the authors emphasize is that MDSR has far fewer parameters than EDSR while covering all three scales in one model.

[Figure: performance on the DIV2K validation set]
[Figure: performance on the standard benchmark datasets]

Training details

patch size = 48×48
augmentation = horizontal flips + 90° rotations
optimizer = Adam (β1 = 0.9, β2 = 0.999, ε = 1e-8)
minibatch size = 16
init_lr = 1e-4
lr schedule = halve the learning rate every 2×10^5 minibatch updates (this is LR decay, not weight decay)
loss = L1
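A minimal sketch of this optimizer and schedule in PyTorch (the Conv2d is a stand-in for the SR network; scheduler.step() is called once per minibatch):

import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the SR network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)
# Halve the learning rate every 2x10^5 minibatch updates.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200000, gamma=0.5)
loss_fn = torch.nn.L1Loss()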

  • Maximizing the model's potential

Use self-ensemble at test time: augment the input image with flips and rotations, run the model on each augmented copy, map each output back to the original orientation, and average the results.
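A sketch of this geometric self-ensemble over the 8 flip/rotation variants (model is any image-to-image network whose output flips and rotates consistently with its input):

import torch

def self_ensemble(model, x):
    # x: (N, C, H, W). Average predictions over the 8 flip/rotation variants,
    # mapping each output back to the original orientation before averaging.
    outputs = []
    for hflip in (False, True):
        for k in range(4):                       # 0/90/180/270 degree rotations
            t = torch.flip(x, dims=[3]) if hflip else x
            t = torch.rot90(t, k, dims=[2, 3])
            y = model(t)
            y = torch.rot90(y, -k, dims=[2, 3])  # undo the rotation
            if hflip:
                y = torch.flip(y, dims=[3])      # undo the flip
            outputs.append(y)
    return torch.stack(outputs).mean(dim=0)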

Code

common.py — this code is mostly routine; pay attention to the commented (# NOTE) parts.

import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def default_conv(in_channels, out_channels, kernel_size, bias=True):
    return nn.Conv2d(
        in_channels, out_channels, kernel_size,
        padding=(kernel_size // 2), bias=bias)


class MeanShift(nn.Conv2d):
    def __init__(
        self, rgb_range,
        rgb_mean=(0.4488, 0.4371, 0.4040), rgb_std=(1.0, 1.0, 1.0), sign=-1):

        super(MeanShift, self).__init__(3, 3, kernel_size=1)
        std = torch.Tensor(rgb_std)
        self.weight.data = torch.eye(3).view(3, 3, 1, 1) / std.view(3, 1, 1, 1)
        self.bias.data = sign * rgb_range * torch.Tensor(rgb_mean) / std
        for p in self.parameters():
            p.requires_grad = False


class BasicBlock(nn.Sequential):
    def __init__(
        self, conv, in_channels, out_channels, kernel_size, stride=1, bias=False,
        bn=True, act=nn.ReLU(True)):

        m = [conv(in_channels, out_channels, kernel_size, bias=bias)]
        if bn:
            m.append(nn.BatchNorm2d(out_channels))
        if act is not None:
            m.append(act)

        super(BasicBlock, self).__init__(*m)


class ResBlock(nn.Module):
    def __init__(
        self, conv, n_feats, kernel_size,
        bias=True, bn=False, act=nn.ReLU(True), res_scale=1):
        # NOTE: res_scale is the residual scaling factor discussed above;
        # it is applied in forward().

        super(ResBlock, self).__init__()
        m = []
        for i in range(2):
            m.append(conv(n_feats, n_feats, kernel_size, bias=bias))
            if bn:
                m.append(nn.BatchNorm2d(n_feats))
            if i == 0:
                m.append(act)

        self.body = nn.Sequential(*m)
        self.res_scale = res_scale

    def forward(self, x):
        res = self.body(x).mul(self.res_scale)  # NOTE: scale before the skip-add
        res += x

        return res


class Upsampler(nn.Sequential):
    def __init__(self, conv, scale, n_feats, bn=False, act=False, bias=True):

        m = []
        if (scale & (scale - 1)) == 0:    # Is scale = 2^n?
            for _ in range(int(math.log(scale, 2))):
                m.append(conv(n_feats, 4 * n_feats, 3, bias))
                m.append(nn.PixelShuffle(2))
                if bn:
                    m.append(nn.BatchNorm2d(n_feats))
                if act == 'relu':
                    m.append(nn.ReLU(True))
                elif act == 'prelu':
                    m.append(nn.PReLU(n_feats))

        elif scale == 3:
            m.append(conv(n_feats, 9 * n_feats, 3, bias))
            m.append(nn.PixelShuffle(3))
            if bn:
                m.append(nn.BatchNorm2d(n_feats))
            if act == 'relu':
                m.append(nn.ReLU(True))
            elif act == 'prelu':
                m.append(nn.PReLU(n_feats))
        else:
            raise NotImplementedError

        super(Upsampler, self).__init__(*m)
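A quick shape check for these building blocks (input sizes are illustrative):

import torch
from model import common

block = common.ResBlock(common.default_conv, n_feats=64, kernel_size=3,
                        res_scale=0.1)
up = common.Upsampler(common.default_conv, scale=4, n_feats=64)

x = torch.randn(1, 64, 48, 48)
print(block(x).shape)   # torch.Size([1, 64, 48, 48])   -- shape-preserving
print(up(x).shape)      # torch.Size([1, 64, 192, 192]) -- two PixelShuffle(2) stages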

edsr.py

from model import common

import torch.nn as nn

url = {
    'r16f64x2': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x2-1bc95232.pt',
    'r16f64x3': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x3-abf2a44e.pt',
    'r16f64x4': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_baseline_x4-6b446fab.pt',
    'r32f256x2': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_x2-0edfb8a3.pt',
    'r32f256x3': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_x3-ea3ef2c6.pt',
    'r32f256x4': 'https://cv.snu.ac.kr/research/EDSR/models/edsr_x4-4f62e9ef.pt'
}

def make_model(args, parent=False):
    return EDSR(args)

class EDSR(nn.Module):
    def __init__(self, args, conv=common.default_conv):
        super(EDSR, self).__init__()

        n_resblocks = args.n_resblocks
        n_feats = args.n_feats
        kernel_size = 3
        scale = args.scale[0]
        act = nn.ReLU(True)
        url_name = 'r{}f{}x{}'.format(n_resblocks, n_feats, scale)
        if url_name in url:
            self.url = url[url_name]
        else:
            self.url = None
        self.sub_mean = common.MeanShift(args.rgb_range)
        self.add_mean = common.MeanShift(args.rgb_range, sign=1)

        # define head module
        m_head = [conv(args.n_colors, n_feats, kernel_size)]

        # define body module
        m_body = [
            common.ResBlock(
                conv, n_feats, kernel_size, act=act, res_scale=args.res_scale
            ) for _ in range(n_resblocks)
        ]
        m_body.append(conv(n_feats, n_feats, kernel_size))

        # define tail module
        m_tail = [
            common.Upsampler(conv, scale, n_feats, act=False),
            conv(n_feats, args.n_colors, kernel_size)
        ]

        self.head = nn.Sequential(*m_head)
        self.body = nn.Sequential(*m_body)
        self.tail = nn.Sequential(*m_tail)

    def forward(self, x):
        x = self.sub_mean(x)
        x = self.head(x)

        res = self.body(x)
        res += x

        x = self.tail(res)
        x = self.add_mean(x)

        return x

    def load_state_dict(self, state_dict, strict=True):
        own_state = self.state_dict()
        for name, param in state_dict.items():
            if name in own_state:
                if isinstance(param, nn.Parameter):
                    param = param.data
                try:
                    own_state[name].copy_(param)
                except Exception:
                    if name.find('tail') == -1:
                        raise RuntimeError('While copying the parameter named {}, '
                                           'whose dimensions in the model are {} and '
                                           'whose dimensions in the checkpoint are {}.'
                                           .format(name, own_state[name].size(), param.size()))
            elif strict:
                if name.find('tail') == -1:
                    raise KeyError('unexpected key "{}" in state_dict'
                                   .format(name))
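A minimal instantiation sketch (argument names mirror the repo's option parser; these values give the baseline ×2 model):

import torch
from types import SimpleNamespace
from model.edsr import EDSR

# Hypothetical stand-in for the repo's parsed options (baseline EDSR, x2).
args = SimpleNamespace(n_resblocks=16, n_feats=64, scale=[2],
                       rgb_range=255, n_colors=3, res_scale=1)
model = EDSR(args)

lr = torch.randn(1, 3, 48, 48)    # a 48x48 LR patch
print(model(lr).shape)            # torch.Size([1, 3, 96, 96])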

mdsr.py
The core idea: build a separate pre-processing module and upsampling module for each scale factor, sharing everything else. Note: each pre-processing branch is just a stack of ResBlocks (two 5×5 ResBlocks per scale in the code below), while the upsamplers differ by scale factor.

from model import common

import torch.nn as nn

url = {
    'r16f64': 'https://cv.snu.ac.kr/research/EDSR/models/mdsr_baseline-a00cab12.pt',
    'r80f64': 'https://cv.snu.ac.kr/research/EDSR/models/mdsr-4a78bedf.pt'
}

def make_model(args, parent=False):
    return MDSR(args)

class MDSR(nn.Module):
    def __init__(self, args, conv=common.default_conv):
        super(MDSR, self).__init__()
        n_resblocks = args.n_resblocks
        n_feats = args.n_feats
        kernel_size = 3
        act = nn.ReLU(True)
        self.scale_idx = 0
        self.url = url['r{}f{}'.format(n_resblocks, n_feats)]
        self.sub_mean = common.MeanShift(args.rgb_range)
        self.add_mean = common.MeanShift(args.rgb_range, sign=1)

        m_head = [conv(args.n_colors, n_feats, kernel_size)]

        # One pre-processing branch (two 5x5 ResBlocks) per scale factor.
        self.pre_process = nn.ModuleList([
            nn.Sequential(
                common.ResBlock(conv, n_feats, 5, act=act),
                common.ResBlock(conv, n_feats, 5, act=act)
            ) for _ in args.scale
        ])

        m_body = [
            common.ResBlock(
                conv, n_feats, kernel_size, act=act
            ) for _ in range(n_resblocks)
        ]
        m_body.append(conv(n_feats, n_feats, kernel_size))

        # One upsampling branch per scale factor.
        self.upsample = nn.ModuleList([
            common.Upsampler(conv, s, n_feats, act=False) for s in args.scale
        ])

        m_tail = [conv(n_feats, args.n_colors, kernel_size)]

        self.head = nn.Sequential(*m_head)
        self.body = nn.Sequential(*m_body)
        self.tail = nn.Sequential(*m_tail)

    def forward(self, x):
        x = self.sub_mean(x)
        x = self.head(x)
        x = self.pre_process[self.scale_idx](x)  # NOTE: scale-specific branch

        res = self.body(x)
        res += x

        x = self.upsample[self.scale_idx](res)   # NOTE: scale-specific branch
        x = self.tail(x)
        x = self.add_mean(x)

        return x

    def set_scale(self, scale_idx):
        self.scale_idx = scale_idx
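Usage sketch: select the active scale branch before each forward pass (argument names mirror the repo's options; baseline MDSR shown):

import torch
from types import SimpleNamespace
from model.mdsr import MDSR

# Hypothetical stand-in for the repo's parsed options (baseline MDSR).
args = SimpleNamespace(n_resblocks=16, n_feats=64, scale=[2, 3, 4],
                       rgb_range=255, n_colors=3)
model = MDSR(args)

x = torch.randn(1, 3, 48, 48)
for idx, s in enumerate(args.scale):
    model.set_scale(idx)           # pick the x{s} pre-process/upsample branch
    print(s, model(x).shape)       # spatial size grows by the chosen factor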